https://www.varsitytutors.com/gre_subject_test_math-help/infinite-sequences
# GRE Subject Test: Math : Infinite Sequences ## Example Questions ### Example Question #2 : Sequences & Series Which of the following are not infinite sequences? Explanation: Step 1: Define what an infinite sequence is... An infinite sequence is a sequence that is non-terminating. Step 2: Determine if each sequence above is infinite... For , the sequence is always infinite because the set of factorials is infinite. Likewise, the set of values obtained by raising two factorial powers together is infinite; it never has an ending term. For , this sequence is FINITE! For , this sequence is FINITE!
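The specific sequences were lost in extraction, so as an illustration only, here is a minimal Python sketch contrasting a non-terminating sequence built from factorials with a terminating one; the particular sequences shown are assumptions, not the ones from the original question.

```python
from itertools import count, islice
from math import factorial

# A non-terminating (infinite) sequence: n! for n = 1, 2, 3, ...
# count(1) never ends, so this generator has no final term.
infinite_seq = (factorial(n) for n in count(1))

# A terminating (finite) sequence: it has a last term.
finite_seq = [factorial(n) for n in range(1, 6)]  # 1!, 2!, ..., 5!

print(list(islice(infinite_seq, 5)))  # [1, 2, 6, 24, 120] -- and it keeps going
print(finite_seq)                     # [1, 2, 6, 24, 120] -- and it stops here
```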
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951685070991516, "perplexity": 1508.5994584628324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00211.warc.gz"}
https://astarmathsandphysics.com/a-level-physics-notes/electricity/2590-the-double-beam-tube.html?tmpl=component&print=1&page=
## The Double Beam Tube The double beam tube consists of a set of Helmholtz coils surrounding a bulb of helium gas at low pressure. Inside the bulb there are two electron guns. One emits electrons straight into the glass bulb (the radial electron gun), and the other shoots electrons tangentially to the coil. Using the radial electron gun, the range of the electrons is shown to be proportional to the accelerating voltage and the energy of the electrons. When a current is passed through the Helmholtz coils, the magnetic field generated acts on the electrons. The tangential beam can be bent into a circle. For a field of flux density $B$, for the electrons to travel in a circle of radius $r$ we have $$Bev = \frac{mv^2}{r} \qquad (1)$$ The kinetic energy of the electron is equal to $\frac{1}{2}mv^2$ and this is equal to the potential energy gained, $eV$, where $V$ is the voltage difference between anode and cathode, so $$\frac{1}{2}mv^2 = eV \qquad (2)$$ From (1), cancel $v$ from both sides to give $v = \frac{Ber}{m}$; substituting this into (2) then gives $$\frac{e}{m} = \frac{2V}{B^2 r^2}$$
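As a quick numerical check of the final relation, here is a minimal Python sketch that computes the charge-to-mass ratio from example readings; the values of V, B and r below are illustrative assumptions, not measurements from the text.

```python
# e/m = 2V / (B^2 r^2), from combining equations (1) and (2)
V = 300.0     # accelerating voltage in volts (assumed example value)
B = 1.5e-3    # flux density in tesla (assumed example value)
r = 0.05      # beam radius in metres (assumed example value)

e_over_m = 2 * V / (B**2 * r**2)
print(f"e/m = {e_over_m:.3e} C/kg")  # compare with the accepted 1.759e11 C/kg
```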
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552136421203613, "perplexity": 804.4173985508844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00392.warc.gz"}
https://dsp.stackexchange.com/questions/61257/is-maximizing-hamming-distance-the-same-as-minimizing-correlation/61264#61264
# Is maximizing Hamming distance the same as minimizing correlation? For the design of an error correcting code, I might wish to maximize the distance between the codewords $$\sum a_i \oplus b_i$$. For spreading sequences I'd like to minimize the cross-correlation at $$t=0$$ of $$\sum \sigma(a_i)\sigma(b_i)$$ where $$\sigma(\cdot)$$ maps binary $$0\to1$$ and $$1 \to (-1)$$ (i.e. what you'd get out of a BPSK matched filter when $$E_b$$ is normalized and there is no noise). Are these criteria the same? It seems to me that since $$\sigma$$ acts as an isomorphism that turns cyclic addition $$\pmod 2$$ into cyclic complex multiplication, they are equivalent statements. Can you verify this and let me know where I can read more about it? • hey, what's "minimally correlated" in terms of correlation coefficient to you? Zero? -1? Oct 14 '19 at 20:43 • I did mean the numerical minimum, so I thought of large (magnitude) negative values as minimal for the purpose of the comparison with the Hamming distance. I graphed both functions for a bunch of vectors and noticed that one measurement is a linear transformation of the other. Oct 14 '19 at 20:54 • ah, ok, yeah, in that case, your approach is right: since by normalization no value below -1 can occur, the vector that is -1 times the other is the one with the minimum cross-correlation (in your definition). Whether that definition is useful is a different question, though! Oct 14 '19 at 21:21 ## 2 Answers Well, as you can easily verify, these two criteria aren't the same if you define "minimum correlation" to mean that the absolute value of the correlation coefficient is minimized (i.e. 0): • In $$\mathbb F_2^N$$, the vector that's the farthest away from any given vector $$v$$ is its bit-wise inverse $$\overline v$$ (using Hamming distance) • Using your mapping, the BPSK symbol vector mapping to $$\overline v$$ is $$-1$$ times the one mapping to $$v$$: $$\sigma(v) =-\sigma(\overline v)$$. That's maximal correlation, not minimal. In short, the Hamming distance defined over your finite field isn't a norm compatible with the $$\mathcal L_2$$ norm that gives a correlation coefficient over the real or complex numbers, given your $$\sigma$$. There's not much to say here – they are simply not the same. If you, however, define the minimum correlation to be found when the correlation coefficient takes the smallest possible value (not: absolute value), then yes, due to normalization, the minimum possible correlation coefficient is -1, and as shown above, your method is one way to find the matching BPSK representation. • Marcus, I also suspect that they're not the same, but I'm not sure about your example. Let's say a codeword is $1,1,1$; then the codeword at largest distance is $-1,-1,-1$; the correlation is -3, which is the minimum possible given 3 elements restricted to $\pm 1$. – MBaz Oct 14 '19 at 19:36 • @MBaz ah, it's the minimum correlation coefficient, but maximally correlated, I'd say; I'd say "correlation" is a measure on the absolute value of the correlation coefficient, because, from a purely practical point of view, two signals that are linked by a factor (like -1) are very correlated. Doesn't get any more correlated :) Oct 14 '19 at 19:51 • Marcus, I agree that a very large negative correlation coefficient means that the signals are highly correlated. In the context of this problem, though, it might prove interesting to interpret "minimize the correlation" as "obtaining the actual smallest correlation coefficient", without taking absolute values. – MBaz Oct 14 '19 at 20:15 • good point, let's ask OP, maybe I also just misunderstood him or her Oct 14 '19 at 20:43 Two codewords $$c_1$$ and $$c_2$$ of length $$n$$, with elements in $$\lbrace +1, -1 \rbrace$$, and Hamming distance $$d$$, have a cross-correlation given by $$(n-d) -d = n-2d.$$ The reason is that there are $$n-d$$ bits that are equal and their product is $$1$$, and $$d$$ bits that are different and their product is $$-1$$. Note that: • The larger the distance $$d$$, the smaller the correlation. • The correlation by itself does not tell you anything about the distance, since you also need to know $$n$$. For example, $$1,1,1$$ and $$-1,-1,-1$$ have correlation $$-3$$ but $$1,1,1,1,1,1$$ and $$1,1,1,-1,-1,-1$$ have correlation $$0$$.
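As a quick empirical check of the $$n-2d$$ identity from the second answer, here is a minimal Python sketch (my own addition, not from the thread) comparing Hamming distance and BPSK cross-correlation for random binary vectors.

```python
import random

def check(n: int, trials: int = 1000) -> None:
    sigma = lambda bit: 1 - 2 * bit  # maps 0 -> +1, 1 -> -1
    for _ in range(trials):
        a = [random.randint(0, 1) for _ in range(n)]
        b = [random.randint(0, 1) for _ in range(n)]
        d = sum(x ^ y for x, y in zip(a, b))                   # Hamming distance
        corr = sum(sigma(x) * sigma(y) for x, y in zip(a, b))  # correlation at t=0
        assert corr == n - 2 * d                               # the claimed identity

check(n=31)
print("correlation == n - 2*d held in every trial")
```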
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 36, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8947585225105286, "perplexity": 379.08454910233036}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00373.warc.gz"}
http://math.stackexchange.com/questions/339621/surface-area-of-an-elliptic-paraboloid
# Surface area of an elliptic paraboloid The elliptic paraboloid has height h and two semiaxes a, b. How do you find its surface area? Is it possible to use a direct formula, without integrals? - ## 2 Answers The elliptic paraboloid is represented parametrically as follows: $$x=a \sqrt{u} \cos{v}$$ $$y=b \sqrt{u} \sin{v}$$ $$z=u$$ The surface area of this object is given by $$\int_0^h du \: \int_0^{2 \pi} dv \: \sqrt{E \,G - F^2}$$ where the 1st fundamental form is given by $$E=1+\frac{a^2 \cos^2{v} + b^2 \sin^2{v}}{4 u}$$ $$F=\frac{1}{4} (b^2-a^2) \sin{2 v}$$ $$G = (a^2 \sin^2{v}+b^2 \cos^2{v}) u$$ The integral simplifies to $$\int_0^h du \: \left [\sqrt{b^2 \left(a^2+4 u\right)} E\left(\frac{4 \left(b^2-a^2\right) u}{b^2 \left(a^2+4 u\right)}\right)+\sqrt{a^2 \left(b^2+4 u\right)} E\left(\frac{4 (a-b) (a+b) u}{a^2 \left(b^2+4 u\right)}\right)\right ]$$ where $E$ is the elliptic integral defined as $$E(m) = \int_0^{\pi/2} dx \sqrt{1-m \sin^2{x}}$$ This is about the best you'll do as far as I can see. - The best but still very complex :) –  Ivan Bunin Mar 24 '13 at 15:51 @IvanBunin: surface area is a very complex topic in general. I didn't do SA calculations for surfaces other than figures of revolution (and simple objects like cubes and prisms) until I took differential geometry. –  Ron Gordon Mar 24 '13 at 15:54 @IvanBunin: you should at this point accept one of the solutions if you found it useful. I know you think mine is complex, but I challenge you to find something simpler. (Arc length and surface area of the ellipse are the reason the elliptic integrals have their names.) –  Ron Gordon Apr 2 '13 at 13:19 When one takes the parameter $m$ as argument, you usually don't square it before multiplying by $\sin^2$; similarly, one uses the square if one is taking the modulus $k$ as the argument for the complete elliptic integral. See this for instance. –  J. M. Apr 3 '13 at 6:20 At the very least, @Ron, Mathematica and A&S both use the parameter convention (which also happens to be my preference), while Maple and the DLMF use the modulus convention. So yes, care ought to be taken... :) –  J. M. Apr 3 '13 at 9:51 The function in question is $z=\frac{x^2}{a^2}+\frac{y^2}{b^2}$. It is more convenient to work in polar coordinates, so that $x=a r \cos \theta,y=b r\sin \theta,z=r^2$, and the parameters vary like so: $0\leq \theta \leq 2\pi,0 \leq r \leq \sqrt{h}$. You can find the surface area according to the formula $A=\int_0^{2 \pi} d \theta \int_0^\sqrt{h} dr \| {\bf x}_r \times {\bf x}_\theta \|$, where ${\bf x}=(x,y,z)$. I couldn't evaluate this integral analytically. - What does $\| {\bf x}_r \times {\bf x}_\theta \|$ mean? –  Ivan Bunin Mar 24 '13 at 15:48 It is the norm (length) of the cross product of $\frac{\partial {\bf x}}{\partial r}$ and $\frac{\partial {\bf x}}{\partial \theta}$ –  user1337 Mar 24 '13 at 16:15
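The second answer's integral is straightforward to evaluate numerically; here is a minimal Python sketch (my own addition, not from the thread) using the parametrization $x=ar\cos\theta$, $y=br\sin\theta$, $z=r^2$ and `scipy.integrate.dblquad`.

```python
import numpy as np
from scipy.integrate import dblquad

def paraboloid_area(a: float, b: float, h: float) -> float:
    # || x_r cross x_theta || for x = (a r cos t, b r sin t, r^2)
    def integrand(r, t):
        return r * np.sqrt(4 * b**2 * r**2 * np.cos(t)**2
                           + 4 * a**2 * r**2 * np.sin(t)**2
                           + a**2 * b**2)
    area, _ = dblquad(integrand, 0, 2 * np.pi,
                      lambda t: 0, lambda t: np.sqrt(h))
    return area

# Sanity check against the circular case a = b = 1, h = 1, where the
# closed form is pi*(5*sqrt(5) - 1)/6 ~ 5.3304.
print(paraboloid_area(1, 1, 1))
```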
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381604194641113, "perplexity": 776.0557499235797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246649234.62/warc/CC-MAIN-20150417045729-00075-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/holomorphic-function-reduces-to-a-polynomial.432094/
# Homework Help: Holomorphic function reduces to a polynomial 1. Sep 25, 2010 ### snipez90 1. The problem statement, all variables and given/known data Let f: C -> C be a holomorphic function such that there is a constant R such that |z| > R implies |f(z)| > R. Show that f is a polynomial. 2. Relevant equations Not sure, I pulled this randomly from a complex analysis qualifying exam. 3. The attempt at a solution So from experience, a typical way to show that a holomorphic function is a polynomial is to apply Cauchy estimates (e.g. the immediate estimates from the Cauchy integral formula). However, that approach doesn't seem to work here, since we usually have to let the boundary circle in the Cauchy integral formula either get larger and larger or smaller and smaller. To me it's not clear how the given growth condition gives estimates. I've also thought about the maximum modulus principle, but I don't know how to use it well, even if it does apply here. Can someone provide a hint? Thanks in advance. 2. Sep 25, 2010 ### Office_Shredder Staff Emeritus You might want to look at 1/f(z). It's bounded, except for the possibility of poles inside |z|<R. Try to make a new holomorphic function from this. 3. Sep 25, 2010 ### snipez90 All right, thanks. I did consider 1/f, but erroneously thought of Liouville. I'll try your suggestion. 4. Sep 25, 2010 ### Office_Shredder Staff Emeritus Well, Liouville will come into play. But first you need to find a slightly different function that's actually bounded.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818541407585144, "perplexity": 527.213029511971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651780.99/warc/CC-MAIN-20180325025050-20180325045050-00235.warc.gz"}
http://mathhelpforum.com/calculus/25475-integration-one-right-i-m-confused.html
# Math Help - Integration, which one is right? (I'm confused) 1. ## Integration, which one is right? (I'm confused) I'm now a little bit confused. I have a textbook that says: $\int \! x \left( {x}^{2}+1 \right) ^{2}{dx} =\frac{1}{6}\left(x ^{2}+1\right)^{3}$ Ok, I'll do it my way and integrate the function by expanding the polynomial first: $\int \! x \left( {x}^{2}+1 \right) ^{2}{dx} =\int \left(x ^{5}+2 x ^{3}+x \right) dx =\frac{x ^{6}}{6}+\frac{x ^{4}}{2}+\frac{x ^{2}}{2}$ I get a different integral function. Let x=3: the first antiderivative gives the result 500/3 = 166.6666..., but the second gives 333/2 = 166.5. So, which one is right? 2. They differ by a constant C=1/6: $\frac{(x^{2}+1)^{3}}{6}=\frac{x^{6}}{6}+\frac{x^{4}}{2}+\frac{x^{2}}{2}+\frac{1}{6}$ 3. Originally Posted by tabularasa I'm now a little bit confused. I have a textbook that says: $\int \! x \left( {x}^{2}+1 \right) ^{2}{dx} =\frac{1}{6}\left(x ^{2}+1\right)^{3}$ Never miss the constant of integration! If you want to know how the book did it, try the substitution $u = x^2$ 4. Oh, clever. Thanks guys! 5. Originally Posted by Isomorphism Never miss the constant of integration! If you want to know how the book did it, try the substitution $u = x^2$ Ok, I'll try to substitute u = x^2. Therefore I get: u = x^2, x = u^(1/2), du = 2x dx, dx = du/2. Ok, then the function looks like: $\int \! x \left( u+1 \right) ^{2}\frac{du}{2}$ or should it be: $\int \! u^{\frac{1}{2}} \left( u+1 \right) ^{2}\frac{du}{2}$ Could you help me a little bit on this... what comes next, or is it already wrong? 6. Originally Posted by tabularasa I'm now a little bit confused. I have a textbook that says: $\int \! x \left( {x}^{2}+1 \right) ^{2}{dx}$ Let $u = x^2+1$, so $\,du = 2x\,dx\Rightarrow \,dx = \frac{\,du}{2x}$ So the integral becomes $\int x\cdot u^2 \cdot \frac{\,du}{2x}$ $=\int u^2 \cdot \frac{\,du}{2}$ $=\frac{1}{2}\int u^2 \,du$ $=\frac{1}{2} \left(\frac{u^3}{3}\right)+C$ $=\frac{1}{2}\frac{(x^2+1)^3}{3}+C$ $=\frac{1}{6}(x^2+1)^3+C$ 7. Originally Posted by DivideBy0 $\,du = 2x\,dx\Rightarrow \,dx = \frac{\,du}{2x}$ So the integral becomes $\int x\cdot u^2 \cdot \frac{\,du}{2x}$ Thank you! This was the part I'd totally forgotten with substitution: x to the bottom. Now I see the light too. 8. I have one more question. I want to integrate the function x*(x+1)^3 with the substitution u = (x+1)^3. How does it continue? 9. Originally Posted by tabularasa I have one more question. I want to integrate the function x*(x+1)^3 with the substitution u = (x+1)^3. How does it continue? Substitute: $u = x+1$. The integrand then becomes $(u-1)u^3$ which simplifies to $u^4 - u^3$. I think you can handle it from here.
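As a quick check that the two antiderivatives differ only by a constant, here is a minimal SymPy sketch (my own addition, not from the thread).

```python
import sympy as sp

x = sp.symbols('x')
f1 = (x**2 + 1)**3 / 6                         # textbook antiderivative
f2 = x**6/6 + x**4/2 + x**2/2                  # expand-first antiderivative

print(sp.simplify(f1 - f2))                    # 1/6 -- they differ by a constant
print(sp.diff(f1, x).equals(x*(x**2 + 1)**2))  # True: both differentiate back
```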
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929925203323364, "perplexity": 2222.3866901737397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011064849/warc/CC-MAIN-20140305091744-00085-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/116970-trig-graphing-i-think.html
# Math Help - Trig graphing (I think) 1. ## Trig graphing (I think) I think I'm supposed to graph these but I don't know how: 1) Y=5sin1/3x 2) Y=Sin2x-4 3) Y=1/2Sin2(X-90degrees)+1 4) Y=Sin(3x-120)+1 5) Y=4Sin(1/2x+30degrees)-1 Can anyone help me? 2. Originally Posted by Draft I think I'm supposed to graph these but I don't know how: 1) Y=5sin1/3x 2) Y=Sin2x-4 3) Y=1/2Sin2(X-90degrees)+1 4) Y=Sin(3x-120)+1 5) Y=4Sin(1/2x+30degrees)-1 Can anyone help me? HI, there are too many questions here. I will do 1 and 2 and you will try the rest, OK? For (1), always refer to the original sine graph, $y=\sin \theta$. Note that all the sine graphs here start from 0 and end at 360. One cycle here is $360^o$. 1/3 of the cycle would be $120^o$, so your graph should stop at $120^o$. Then label this $120^o$ as $360^o$. Notice that the amplitude is 5. For (2), sin 2x: this will go 2 rounds. -4: shift the graph downwards by 4.
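To complement the verbal descriptions, here is a minimal matplotlib sketch (my own addition, not from the thread) plotting the first two functions so the amplitude, period, and vertical-shift effects can be seen directly.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1080, 1000)          # degrees
y1 = 5 * np.sin(np.radians(x / 3))      # amplitude 5, stretched period
y2 = np.sin(np.radians(2 * x)) - 4      # two cycles per 360 deg, shifted down 4

plt.plot(x, y1, label="y = 5 sin(x/3)")
plt.plot(x, y2, label="y = sin(2x) - 4")
plt.xlabel("x (degrees)")
plt.legend()
plt.show()
```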
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917064368724823, "perplexity": 896.8730223476099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410665301782.61/warc/CC-MAIN-20140914032821-00107-ip-10-234-18-248.ec2.internal.warc.gz"}
https://brilliant.org/problems/evening-square/
# Collapsing Angle Geometry Level 2 In the square above, $E$, $F$ and $G$ are midpoints of $\overline{DC}$, $\overline{BC}$ and $\overline{FE}$, respectively. If $\angle EGD=x$, find the value of $\tan x$.
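The referenced figure is not reproduced here, so as an illustration only, the following Python sketch computes $\tan x$ by coordinate geometry under an assumed labeling of the square $ABCD$ (vertices in order, with $D$ at the origin); if the original figure labels the square differently, the coordinates must be adjusted.

```python
import numpy as np

# Assumed unit square ABCD with A=(0,1), B=(1,1), C=(1,0), D=(0,0)
D, C, B = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])
E = (D + C) / 2        # midpoint of DC
F = (B + C) / 2        # midpoint of BC
G = (F + E) / 2        # midpoint of FE

u, v = E - G, D - G    # the two rays of angle EGD at vertex G
cross = u[0] * v[1] - u[1] * v[0]
tan_x = abs(cross) / np.dot(u, v)
print(tan_x)           # 0.5 under this assumed labeling
```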
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687637090682983, "perplexity": 486.48606682813323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00260.warc.gz"}
https://math.stackexchange.com/questions/4363699/projective-line-construction-of-exceptional-steiner-system
# Projective line construction of exceptional Steiner system. Wikipedia gives a beautiful construction of the $$S(5,6,12)$$ Steiner system; take as blocks the subset of the projective line over the field with $$11$$ elements consisting of $$\{\infty,1,3,4,5,9\}$$ and all its images under the natural action of $$PSL_2(11)$$. I am hoping to find a slick verification that this does indeed form such a Steiner system, that is, that any $$5$$ elements are contained in a unique one of the above blocks. Here are my thoughts so far: note that the 6 elements in our starter block are just the nonzero squares and $$\infty$$, so the block is stabilized in $$PSL_2(11)$$ by multiplication by squares. As $$PSL_2(11)$$ has order $$660$$, this shows there are at most $$660/5 = 132$$ blocks. In fact the stabilizer of a block has size only $$5$$, which I can verify but in a slightly computational way. If one does that, it suffices to show no two distinct blocks intersect in $$5$$ points. Another helpful fact is that $$PGL_2(11)$$ acts $$3$$-transitively on the $$12$$ points of the projective line, and $$PSL_2(11)$$ can at least map any ordered $$3$$ to any other unordered $$3$$ (in $$3$$ ways), which seems helpful, but I can't quite show what I want from this. Wikipedia gives a reference for this statement, but it doesn't contain a proof. I care about this result as I know a handful of constructions of $$S(5,6,12)$$ already, as well as its uniqueness; thus understanding this result will further shed light on this Steiner system's symmetries, which form the sporadic group $$M_{12}$$.
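Not a slick proof, but the claim is small enough to verify directly by computer. Here is a self-contained Python sketch (my own addition, not from the question) that builds the orbit of the starter block under $$PSL_2(11)$$, generated by $$z\mapsto z+1$$ and $$z\mapsto -1/z$$, and checks the Steiner property.

```python
from itertools import combinations

INF = 'inf'  # the point at infinity on the projective line over F_11

def shift(z):            # z -> z + 1
    return INF if z == INF else (z + 1) % 11

def flip(z):             # z -> -1/z
    if z == INF:
        return 0
    if z == 0:
        return INF
    return (-pow(z, -1, 11)) % 11

# Orbit of the starter block under the group generated by shift and flip;
# these two maps generate PSL_2(11).
start = frozenset([INF, 1, 3, 4, 5, 9])
blocks, frontier = {start}, [start]
while frontier:
    blk = frontier.pop()
    for g in (shift, flip):
        img = frozenset(g(z) for z in blk)
        if img not in blocks:
            blocks.add(img)
            frontier.append(img)

print(len(blocks))  # 132 blocks, as expected

# Steiner property: every 5-subset of the 12 points lies in exactly one block.
points = [INF] + list(range(11))
for five in combinations(points, 5):
    assert sum(set(five) <= blk for blk in blocks) == 1
print("every 5-subset lies in a unique block")
```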
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9562234282493591, "perplexity": 170.87677886145644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00604.warc.gz"}
https://mathoverflow.net/questions/293942/configuration-spaces-ran-spaces-free-semilattices-vietoris-spaces-and-power-o
# Configuration spaces, Ran spaces, free semilattices, Vietoris spaces and power objects These are five important constructions and I would like to know how they are related. The $n$th unordered configuration space of a space $X$ is $$\operatorname{UConf}_n(X):=\{\text{embeddings of \{1,...,n\} into X}\}/(\text{nth symmetric group}),$$ topologized as a subquotient of $X^n$. The Ran space of $X$ is the set $\operatorname{Ran}(X)$ of finite subsets of $X$ with the topology generated by sets $$\nabla_{U_1,...,U_n}:=\{S\in\operatorname{Ran}(U_1\cup\cdots\cup U_n)\mid S\cap U_i\ne\varnothing, i=1,...,n\}$$ where $U_i$ are disjoint open subsets of $X$. The free topological semilattice $\operatorname{Sl}(X)$ on $X$ is the value on $X$ of the left adjoint to the forgetful functor from topological semilattices to topological spaces. The Vietoris space $\mathscr VX$ of $X$ is the set of some (depending on the context) subsets of $X$ topologized by the same kind of $\nabla_{U_1,...,U_n}$ except that they are not required to be disjoint. Finally, one may choose some nice embedding $I$ of some subcategory of spaces that contains $X$ into a topos in various ways, and consider there the power object $\Omega^{IX}$. Usually it is not in the image of $I$. There are versions like $\operatorname{Sub}_{\mathrm{fin}}(IX)\rightarrowtail\Omega^{IX}$ of objects of finite (say, Kuratowski finite) subobjects of $IX$ which might be. (Note that $\operatorname{Sub}_{\mathrm{fin}}$, with Kuratowski finiteness, is the free internal semilattice functor on any topos whatsoever.) As a variation on the latter - say, $X$ is a simplicial set; since simplicial sets readily form a topos we have simplicial sets $\operatorname{Sub}_{\mathrm{fin}}(X)\rightarrowtail\Omega^X$. Questions: 1. Is $\operatorname{UConf}_n(X)$ (homeomorphic to) a subspace of $\operatorname{Ran}(X)$? 2. There is a topology on $\bigcup_n\operatorname{UConf}_n(X)$ with $\{x_1,...,x_n,x_{n+1}\}$ close to $\{x_1,...,x_n\}$ when $x_{n+1}$ is close to $x_n$ in $X$. Is this homeomorphic to $\operatorname{Ran}(X)$? 3. The same two questions with $\operatorname{Sl}$ in place of $\operatorname{Ran}$. 4. Is $\operatorname{Ran}(X)$ homeomorphic to $\operatorname{Sl}(X)$? 5. Are $\operatorname{Ran}(X)$, $\operatorname{UConf}_n(X)$ or $\operatorname{Sl}(X)$ subspaces in $\mathscr VX$ for some nice spaces $X$? 6. Are there known embeddings of some categories of spaces into toposes such that the image of the embedding is closed under taking $\operatorname{Sub}_{\mathrm{fin}}$? In particular, can $\operatorname{Sub}_{\mathrm{fin}}(IX)$ be isomorphic to $I(\operatorname{Sl}(X))$ for some such $I$? 7. How does the geometric realization of $\operatorname{Sub}_{\mathrm{fin}}(X)$ relate to $\operatorname{Ran}$, $\operatorname{UConf}_n$, $\operatorname{Sl}$ and $\mathscr V$ of the geometric realization of $X$ for a simplicial set $X$? 0. Are these considered together and compared somewhere in the literature? Too long for a comment but it is essentially a comment: It is easy to see that for a Hausdorff space $X$ the topology on the Ran space coincides with the Vietoris topology, and for a non-Hausdorff space $X$ the Ran topology is strictly weaker than the Vietoris topology. The topology of the free topological semilattice is stronger than the Vietoris topology. For example, for an infinite compact metrizable space $K$ the space $SL(K)$ is a non-metrizable $k_\omega$-space whereas the Vietoris topology is metrizable.
So, $SL(K)$ even topologically does not embed into $\mathcal V X$. By the way, the hyperspace $\mathcal V X$ of non-empty finite subsets of a topological space $X$ endowed with the Vietoris topology coincides with the free Lawson topological semilattice of $X$. For a Hausdorff space $X$ the configuration space $\mathrm{UConf}_n(X)$ naturally embeds into the free (Lawson) topological semilattice. For the free Lawson semilattice $\mathcal V X$ this can be shown by comparing the Vietoris (or Ran) topology with the quotient topology on $\mathrm{UConf}_n(X)$. Then combining this with the continuity of the natural maps $\mathrm{UConf}_n(X)\to SL(X)\to \mathcal V X$, we can conclude that $\mathrm{UConf}_n(X)\to SL(X)$ is an embedding, too. Concerning the literature on the free (locally convex) topological semilattices (at least), you can look at the following papers and references therein: Banakh and Sakai - Free topological semilattices homeomorphic to $\mathbb R^\infty$ or $\mathbb Q^\infty$; Banakh, Guran, and Gutik - Free topological inverse semigroups. As I understand, in question (2) the space $\bigcup_{n\in\mathbb N}\mathrm{UConf}_n(X)$ is considered with the topology of the direct limit of the tower $exp_n(X)$, where $exp_n(X)$ is the family of all at most $n$-element subsets of $X$ endowed with the Vietoris (or Ran) topology. This topology is stronger than the topology of $SL(X)$ and I am afraid that the two topologies coincide only for $k_\omega$-spaces. For general spaces the operation of taking union is discontinuous with respect to this direct limit topology, so it is not a topological semilattice. This follows from Proposition 4, p. 35, of Banakh, Guran, and Gutik - Free topological inverse semigroups. This proposition says that if for a functionally Hausdorff space $X$ the free topological semilattice $SL(X)$ is a $k$-space, then each closed metrizable subspace of $X$ is locally compact. • If this is essentially a comment, it is a very essential one! – მამუკა ჯიბლაძე Feb 27 '18 at 9:32 • @მამუკაჯიბლაძე Thank you. By the way, I looked at (en.wikipedia.org/wiki/Ran_space) for the Ran space and found there a mention of the Beilinson-Drinfeld Theorem on the weak contractibility of the Ran space. It should be mentioned that the weak contractibility (= triviality of all homotopy groups) of topological semilattices is a well-known and old fact. So, what was actually proved by Beilinson and Drinfeld? Only the non-Hausdorff case (important in Algebraic Geometry)? – Taras Banakh Feb 27 '18 at 9:38 • Concerning the Beilinson-Drinfeld Theorem on weak contractibility of the Ran space of a connected manifold: this theorem follows (trivially) from the known fact that any path-connected topological semilattice has trivial homotopy groups. And this fact follows from the known fact that the circle is an $exp_3$-valued retract of the disk. I do not remember who first noticed the existence of such a retraction, but Dranishnikov applied it in the 1980s in his papers devoted to functors. So, what was actually proved by Beilinson and Drinfeld? – Taras Banakh Feb 27 '18 at 10:08 • Trying to find more information on the Ran space, I looked at the paper (arxiv.org/pdf/1608.07472.pdf) and already on page 3 found a topological gap. The author writes that the Ran space of any topological space is a topological semilattice with respect to the topology of the direct limit, but this contradicts Proposition 4 in the paper of Banakh-Guran-Gutik.
It is interesting how many such misbeliefs there are in Algebraic Geometry (in topological algebra also there was some time when people believed that the free topological group carries the topology of direct limit). – Taras Banakh Feb 27 '18 at 10:52 • Sorry, I am confused by your fifth paragraph. Do you want to say that the free topological semilattice carries the Lawson topology, but is smaller than the value of the left adjoint to the forgetful functor from Lawson semilattices to spaces? Btw is it known that this adjoint exists? – მამუკა ჯიბლაძე Feb 27 '18 at 11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238756895065308, "perplexity": 245.6777715733718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00121.warc.gz"}
https://en.khanacademy.org/math/integral-calculus/ic-series/ic-lagrange-error-bound/v/lagrange-error-bound-for-sine-function
## Integral Calculus ### Course: Integral Calculus > Unit 5 Lesson 12: Lagrange error bound # Worked example: estimating sin(0.4) using Lagrange error bound AP.CALC: LIM‑8 (EU), LIM‑8.C (LO), LIM‑8.C.1 (EK) Lagrange error bound (also called the Taylor remainder theorem) can help us determine the degree of Taylor/Maclaurin polynomial to use to approximate a function to a given error bound. See how it's done when approximating the sine function. ## Want to join the conversation? • The Maclaurin expansion of sin(x) is x - x^3/3! + x^5/5! - x^7/7! + x^9/9! ... The polynomial of degree 4 is actually identical to the polynomial of degree 3 because the coefficient of x^4 is 0. Shouldn't the answer to the exercise be 3 instead of 4? • While what you said is technically true, what Sal is doing in the video is using the error function to evaluate the absolute maximum error bound of the function. You are not supposed to know beforehand that the coefficient of the fourth-degree term is 0; the error function is basically just telling you to what degree you need to take the Maclaurin polynomial to be completely sure that the error is within certain bounds. It is always gonna be an overestimate since it is taking the absolute maximum that the error could be, not what it actually is. • I agree with Sal that no matter how many times we take the derivative of sin, its absolute value will always be between 0 and 1 and so the value of "M" will be 1, but isn't Sal ignoring the fact that we have to only consider an open interval containing 0 and 0.4? In that case, if the (n+1)th derivative of sin x is (+ or -) sin x again, depending on the value of n, then if we look in the interval of 0 to 0.4, the absolute value of sin x will not lie between 0 and 1 but between 0 and sin(0.4)... So isn't something WRONG here?? • Sin(0.4) = 0.389... (the range of M is from 0 to 0.389) and cos(0.4) = 0.921... (the range is from 0.921 to 1). The (n+1)th derivative of f(x) could be either (+ or -) cos x or (+ or -) sin x. But we are sure that the value of M is always less than 1. *Edit:* What we have to take for M is the maximum possible value in the interval (x=0 to x=0.4). But we are not sure if it's cos x or sin x in the (n+1)th derivative. So we take M=1. It is gonna be less precise if it is sin x and more precise if it is cos x. • We know that sin x is less than or equal to 1 over the whole domain, but might one find a smaller M for the open interval between -0.0001 and 0.40001? If we use an M larger than necessary, could it cause us to specify a higher degree polynomial than necessary? • That's right. I solved the problem for this specific interval and the minimal degree is 3 and not 4. • how do we solve the final inequality Sal arrives at without trial and error? do we just take log on both sides and solve for n? • Because of the factorial and there being "n" both as an exponent and outside of the exponent, I do not think there is a way to solve it without trial and error. • wait, so I am confused: why is M equal to one in this particular situation? I know that the graph of sine is bounded by 1, so does that make M always equal to 1 in these kinds of problems? • Because whatever x is, sin(x) and cos(x) are always bounded by 1, so yes, that would make M equal 1 in this kind of problem.
You might think sin(x) on (0, 0.4) is much less than 1, but the derivative of sin(x) is cos(x), which has values quite close to 1 on the above interval. While this is not really precise, it's good enough. • In the previous video "Taylor polynomial remainder (part 2)" the right hand side of the remainder equation is without the absolute value, while in this video the right hand side is with the absolute value. Why? • In the previous video the remainder theorem formula is shown to have (x-a) and not just x over an interval between a and b. I am assuming it is just x because the interval is between 0 and 0.4, hence a is 0? • Actually, it's because we're using the Maclaurin polynomial. • Where does the condition that the (n+1)th derivative of f(x) has to be less than or equal to M for an open interval including 0 and x come from? Why won't this work for other intervals? • The open interval including 0 and x comes from just wanting to make sure the function is differentiable in that range. Say you had 1/x as the function, or some other rational expression. You wouldn't be able to use any portion that has a vertical asymptote. Anyway, in the intro videos to the error/remainder Sal goes through to show where that entire term comes from, specifically in this video: https://www.khanacademy.org/math/ap-calculus-bc/bc-series-new/bc-10-12/v/proof-bounding-the-error-or-remainder-of-a-taylor-polynomial-approximation If that doesn't make sense though I can try to explain. I recommend watching through both parts though.
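One comment asks how to solve the final inequality without trial and error; in practice a tiny loop *is* the trial and error. Here is a minimal Python sketch (my own addition; the 0.001 tolerance is an assumed example bound, not quoted from the video) that finds the smallest degree n with M * 0.4^(n+1)/(n+1)! within tolerance.

```python
from math import factorial

def min_degree(x: float = 0.4, tol: float = 1e-3, M: float = 1.0) -> int:
    # Lagrange error bound: |R_n(x)| <= M * x**(n+1) / (n+1)!
    n = 0
    while M * x**(n + 1) / factorial(n + 1) > tol:
        n += 1
    return n

print(min_degree())        # 4 with M = 1
print(min_degree(M=0.4))   # 3: a smaller M (e.g. bounding |sin| on [0, 0.4],
                           # as one comment suggests) can lower the degree
```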
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202081918716431, "perplexity": 406.98120118864887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00640.warc.gz"}
https://www.physicsforums.com/threads/a-force-question.234637/
# A Force Question 1. May 12, 2008 ### inflames829 1. The problem statement, all variables and given/known data A force F=-4.3i+2.0j+1.5k Newtons exerts a displacement of d=-0.8i+1.8j-3.6k metres upon a small (i.e. mass is negligible) object. If the friction force is equal to Ffr=-0.2, then what is the net work done upon the object? 2. Relevant equations 3. The attempt at a solution I know the net work is the sum of all the individual work, and the work equation is W=F.D. So I tried to work out the work of the initial force stated by multiplying the force and displacement stated, from which I got 1.64 J. I think the only other work is caused by the friction force, but when I try to work that out (because the mass is negligible) I get 0. Can someone please help? 2. May 12, 2008 ### Tedjn Mass being negligible only affects calculating the friction force using $\mu N$, but you already have the friction force. What can you do with it? 3. May 12, 2008 ### inflames829 could I put it as the force in the work equation and put the distance as the displacement that's stated in the question? 4. May 12, 2008 ### Phlogistonian Yes. 5. May 12, 2008 ### inflames829 so I have 1.64 from the first force I worked out, and if I put the friction force into the work equation W = -0.2 x D I get -0.82. If I add the two works I have I get 0.82, but the answer is 1.3... can you give me hints on what I've done wrong?? 6. May 12, 2008 ### Tedjn I get that also. From where did you get this problem?
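For reference, here is a minimal NumPy sketch (my own addition, not from the thread) reproducing the numbers in the discussion: the applied-force work via the dot product and the friction work via the path length.

```python
import numpy as np

F = np.array([-4.3, 2.0, 1.5])         # applied force, N
d = np.array([-0.8, 1.8, -3.6])        # displacement, m

W_applied = F @ d                      # dot product: 1.64 J
W_friction = -0.2 * np.linalg.norm(d)  # -0.2 N along the 4.10 m path: -0.82 J

print(W_applied, W_friction, W_applied + W_friction)  # net ~ 0.82 J
```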
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851386547088623, "perplexity": 792.9350883687313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608984.81/warc/CC-MAIN-20170527171852-20170527191852-00190.warc.gz"}
http://bayesianthink.blogspot.com/2012/11/fun-with-uniform-random-numbers.html
## Friday, November 16, 2012 ### Fun with Uniform Random Numbers Q: You have two uniformly random numbers x and y (meaning they can take any value between 0 and 1 with equal probability). What distribution does the sum of these two random numbers follow? What is the probability that their product is less than 0.5? A: Let z = x + y be the random variable whose distribution we want. Clearly z runs from 0 to 2. Let 'f' denote the uniform random distribution between [0,1]. An important point to understand is that f has a fixed value of 1 when x runs from 0 to 1 and is 0 otherwise. So the probability density for z, call it P(z), at any point is the integral of the product of f(y) and f(z-y), where y runs from 0 to 1: $$P(z) = \int_0^1 f(y)\,f(z-y)\,dy$$ However, in that range f(y) is equal to 1. So the above equation becomes $$P(z) = \int_0^1 f(z-y)\,dy$$ From here on, it gets a bit tricky. Notice that the integral is a function of z. Let us take a look at how else we can simplify the above integral. It is easy to see that f(z-y) = 1 when (z-y) is between [0,1]. This is the same as saying $$z - 1 \le y \le z$$ Likewise, f(z-y) = 1 when y is less than z and greater than 0, i.e. $$0 \le y \le z$$ Combining the two cases above results in a piecewise function, $$P(z) = \begin{cases} z & 0 \le z \le 1 \\ 2 - z & 1 \lt z \le 2 \end{cases}$$ which is a triangular function. Now that we are done with the sum, what about the product xy? A quick way to go about it is to visualize a 2-dimensional plane. All the points (x,y) within the square [0,1]x[0,1] fall in the candidate space. The case xy = 0.5 makes the curve y = 0.5/x. The area under the curve represents the cases for which xy <= 0.5. Since the area of the square is 1, that area is the sought probability. The curve intersects the square at [0.5,1] and [1,0.5]. The area under the curve is then the sum of the 2 quadrants (1/4 each) along with the integral of y = 0.5/x over the range 0.5 to 1, yielding \begin{align*} P(xy\lt 0.5) &= \frac{1}{2} + \int_{0.5}^{1}\frac{0.5}{x}dx \\ &= \frac{1}{2} + \frac{1}{2}\ln 2 \approx 0.85 \end{align*}
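As a numerical sanity check, here is a minimal Monte Carlo sketch (my own addition, not from the post) for both results: the triangular shape of x + y near its peak and the probability P(xy < 0.5).

```python
import random

N = 1_000_000
near_peak = 0      # sums falling in a small window around z = 1
prod_lt_half = 0

for _ in range(N):
    x, y = random.random(), random.random()
    if 0.95 < x + y < 1.05:
        near_peak += 1
    if x * y < 0.5:
        prod_lt_half += 1

print(near_peak / N / 0.1)   # ~0.98: average density near the peak (peak value 1)
print(prod_lt_half / N)      # ~0.8466 = 1/2 + ln(2)/2
```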
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8161534667015076, "perplexity": 569.6860427639293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647914.3/warc/CC-MAIN-20141024030047-00277-ip-10-16-133-185.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:1168.14030
# Nondense subsets of varieties in a power of an elliptic curve. (English) Zbl 1168.14030 Let $$A$$ be a $$g$$-dimensional abelian variety over $$\bar{\mathbb{Q}}$$, and $$V$$ be its proper $$d$$-dimensional irreducible algebraic subvariety. $$V$$ is said to be transverse (resp. weak-transverse) if $$V$$ is not contained in any translate of a proper algebraic subgroup of $$A$$ (resp. in any proper algebraic subgroup of $$A$$). Consider the sets $$S_r(V,F)=V\cap\bigcup_{\mathrm{cod} B \geq r}(B+F)$$, where $$B$$ runs over all abelian subvarieties of $$A$$ of codimension at least $$r$$, $$1 \leq r \leq g$$, and $$F$$ is a subset of $$A$$. The author considers the case where $$A = E^g$$ is a power of an elliptic curve $$E$$. Set $$\mathcal{O}_\varepsilon = \{\xi \in E^g \; : \; \|\xi\| \leq \varepsilon\}$$, where $$\varepsilon \geq 0$$ and $$\| \cdot \|$$ is a fixed semi-norm on $$E^g$$ induced by the Néron-Tate height on $$E$$, and set $$\Gamma_\varepsilon = \Gamma + \mathcal{O}_\varepsilon$$, where $$\Gamma$$ is a subgroup of finite rank in $$E^g$$. Let $$V \subset E^g$$ be an irreducible algebraic subvariety of $$E^g$$, and $$V_K = V \cap \mathcal{O}_K$$. With this notation the main result of the paper asserts: for every $$K \geq 0$$ there exists an effective $$\varepsilon \geq 0$$ such that: (i) if $$V$$ is weak-transverse, then $$S_{d+1}(V_K, \mathcal{O}_\varepsilon)$$ is Zariski nondense in $$V$$; (ii) if $$V$$ is transverse, then $$S_{d+1}(V_K, \Gamma_\varepsilon)$$ is Zariski nondense in $$V$$. The author proves first that assertions (i) and (ii) are equivalent. Then the proof of (ii) consists of three steps. The first two steps allow one to avoid $$\Gamma$$ and to approximate an algebraic subgroup with a subgroup of bounded degree. The third step shows that certain special sets are Zariski nondense in $$V \times \gamma$$, where $$\gamma$$ is a maximal free set of the division group of $$\Gamma$$. Here the proof is based on an essentially optimal Bogomolov-type bound for the normalized height of a transverse subvariety in $$E^g$$. There is a conjecture asserting that there exist $$\varepsilon>0$$ and a nonempty Zariski open subset $$V^u$$ of $$V$$ such that if $$V$$ is weak-transverse (resp. transverse), then $$S_{d+1}(V^u, \mathcal{O}_\varepsilon)$$ (resp. $$S_{d+1}(V^u,\Gamma_\varepsilon)$$) has bounded height. The paper concludes with the proof of a special case of this conjecture. ##### MSC: 14K12 Subvarieties of abelian varieties 11G05 Elliptic curves over global fields 11G50 Heights 14H25 Arithmetic ground fields for curves
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859799742698669, "perplexity": 133.56417252419374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00473.warc.gz"}
http://math.stackexchange.com/questions/73456/after-saw-this-piece-of-discussion-i-ask-myself-what-is-the-most-rigorous-defin
# After seeing this piece of discussion, I ask myself what the most rigorous definition of the circle is, but I can't figure it out Discussion: Is the value of $\pi$ = 4? So what is the "real definition" of a circle? I think the original solution from Wikipedia is too ambiguous; I couldn't find why the circumference of the circle is $2r\pi$ with $\pi = 3.14\ldots$ If we really use the method that I provide from the link, how are we able to get an infinite series for $\pi$ and find the circumference of an ellipse? - Is en.wikipedia.org/wiki/Circle not clear? To quote: "a circle is the set of points in a plane that are a given distance from a given point, the centre". –  lhf Oct 17 '11 at 21:21 The standard notion of a circle is a set of all points in a plane equidistant from some fixed point in that plane (its center). By distance we mean standard Euclidean distance. So a circle in $\mathbb{R}^2$ is a set $\{ (x,y) \in \mathbb{R}^2 \,|\ \mathrm{distance}((x,y),(a,b))=r \}$ for some fixed point $(a,b) \in \mathbb{R}^2$ (the center of the circle) and some fixed (positive) distance $r \in \mathbb{R}$, $r>0$ (the radius). Euclidean distance is given by the formula: $\mathrm{distance}((x,y),(a,b)) = \sqrt{(x-a)^2+(y-b)^2}$. Thus squaring both sides of the equation "$\mathrm{distance}((x,y),(a,b))=r$" gives us $$\{ (x,y) \in \mathbb{R}^2 \,|\ (x-a)^2+(y-b)^2=r^2 \}$$ This is the circle with radius $r$ and center $(a,b)$. If we wish to discuss a circle in $\mathbb{R}^n$, then we need to specify the plane in which it lies. The circle with center ${\bf c}$ and radius $r>0$ which lies in the plane $({\bf x} - {\bf p}) {\bf \cdot} {\bf n}=0$ (the plane through the point ${\bf p}$ with normal vector ${\bf n}$) is given by $\{ {\bf x} \in \mathbb{R}^n \,|\ ({\bf x}-{\bf p}){\bf \cdot}{\bf n}=0 \mbox{ and } |{\bf x}-{\bf c}|=r \}$. Again $|{\bf x}-{\bf c}|$ is the standard Euclidean norm. If you mess with the definition of distance, then you can still call the corresponding set a "circle", but it's not a circle in the standard "classical" sense. - That discussion had nothing to do with the definition of "circle", and everything to do with the definitions of "length" and "limit". But at any rate, the definition of a circle (in the plane, $\mathbb{R}^2$) is a subset of the form $$\{(x,y)\in\mathbb{R}^2\mid (x-h)^2+(y-k)^2=r^2\}$$ where $r\in\mathbb{R}$, $r>0$ and $(h,k)\in\mathbb{R}^2$ is any point. The circumference of a circle of radius $r$ is $2\pi r$ essentially by the definition of $\pi$ (so unless your circle has radius $\frac{1}{2}$, it doesn't have circumference $\pi$...). See here or here for an explanation of why this actually defines a single number $\pi$ for all circles. - Here's a totally rigorous definition. Definition. For all $C \subseteq \mathbb{R}^2,$ we call $C$ a circle iff there exists $r>0$ and $c \in \mathbb{R}^2$ such that the following holds. $$\forall x \in \mathbb{R}^2(x \in C \leftrightarrow d(x,c)=r)$$ -
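The linked pi = 4 discussion turns on which approximating curves converge in length. Here is a minimal Python sketch (my own addition, not from the thread) using Archimedes' side-doubling recurrence, which needs no prior value of pi, to show that perimeters of inscribed regular polygons converge to 2*pi, unlike the staircase.

```python
import math

# Archimedes' doubling for a unit circle: start from an inscribed hexagon
# (side length 1); each step doubles the number of sides via
# s_new = sqrt(2 - sqrt(4 - s**2)).
s, n = 1.0, 6
for _ in range(12):
    print(n, n * s)    # perimeter increases toward 2*pi ~ 6.28319
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2

# Contrast with the staircase of the pi = 4 argument, whose length stays 8
# at every stage: pointwise convergence of curves does not imply
# convergence of their lengths.
```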
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677995443344116, "perplexity": 119.5845705653464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063881.16/warc/CC-MAIN-20150827025423-00003-ip-10-171-96-226.ec2.internal.warc.gz"}
https://homework.cpm.org/category/ACC/textbook/ccaa8/chapter/9%20Unit%2010/lesson/CCA:%209.4.1/problem/9-99
9-99. $x^2−6x+3=0$

Subtract 3 from both sides of the equation. Then add a constant to both sides to make $\left(x-3\right)^2$ on the left side.
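Carrying the hint through (a standard worked sketch, not from the original page): subtract $3$ to get $x^2-6x=-3$; add $9$ to both sides to get $x^2-6x+9=6$, i.e. $\left(x-3\right)^2=6$; hence $x=3\pm\sqrt{6}$.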
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8780133724212646, "perplexity": 1899.8076389698967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674082.61/warc/CC-MAIN-20201201104718-20201201134718-00473.warc.gz"}
https://en.wikipedia.org/wiki/Parameter_identification_problem
# Parameter identification problem

For a more technical treatment, see Identifiability.

In statistics and econometrics, the parameter identification problem is the inability in principle to identify a best estimate of the value(s) of one or more parameters in a regression. This problem can occur in the estimation of multiple-equation econometric models where the equations have variables in common. More generally, the term can be used to refer to any situation where a statistical model will invariably have more than one set of parameters that generate the same distribution of observations, meaning that multiple parametrizations are observationally equivalent.

## Standard example, with two equations

Consider a linear model for the supply and demand of some specific good. The quantity demanded varies negatively with the price: a higher price decreases the quantity demanded. The quantity supplied varies directly with the price: a higher price increases the quantity supplied. Assume that, say for several years, we have data on both the price and the traded quantity of this good. Unfortunately this is not enough to identify the two equations (demand and supply) using regression analysis on observations of Q and P: one cannot estimate a downward slope and an upward slope with one linear regression line involving only two variables.

Additional variables can make it possible to identify the individual relations. In the graph described here (the figure is not reproduced in this copy), the supply curve (red line, upward sloping) shows the quantity supplied depending positively on the price, while the demand curve (black lines, downward sloping) shows quantity depending negatively on the price and also on some additional variable Z, which affects the location of the demand curve in quantity-price space. This Z might be consumers' income, with a rise in income shifting the demand curve outwards. This is symbolically indicated with the values 1, 2 and 3 for Z. With the quantities supplied and demanded being equal, the observations on quantity and price are the three white points in the graph: they reveal the supply curve. Hence the effect of Z on demand makes it possible to identify the (positive) slope of the supply equation. The (negative) slope parameter of the demand equation cannot be identified in this case. In other words, the parameters of an equation can be identified if it is known that some variable does not enter into the equation, while it does enter the other equation.

A situation in which both the supply and the demand equation are identified arises if there is not only a variable Z entering the demand equation but not the supply equation, but also a variable X entering the supply equation but not the demand equation:

supply:    $Q=a_{S}+b_{S}P+cX$

demand:   $Q=a_{D}+b_{D}P+dZ$

with positive bS and negative bD. Here both equations are identified if c and d are nonzero. Note that this is the structural form of the model, showing the relations between Q and P. The reduced form, however, can be identified easily.

## Estimation methods and disturbances

"It is important to note that the problem is not one of the appropriateness of a particular estimation technique. In the situation described [without the Z variable], there clearly exists no way using any technique whatsoever in which the true demand (or supply) curve can be estimated. Nor, indeed, is the problem here one of statistical inference—of separating out the effects of random disturbance. There is no disturbance in this model [...]
It is the logic of the supply-demand equilibrium itself which leads to the difficulty." (Fisher 1966, p. 5) ## More equations More generally, consider a linear system of M equations, with M > 1. An equation cannot be identified from the data if less than M − 1 variables are excluded from that equation. This is a particular form of the order condition for identification. (The general form of the order condition deals also with restrictions other than exclusions.) The order condition is necessary but not sufficient for identification. The rank condition is a necessary and sufficient condition for identification. In the case of only exclusion restrictions, it must "be possible to form at least one nonvanishing determinant of order M − 1 from the columns of A corresponding to the variables excluded a priori from that equation" (Fisher 1966, p. 40), where A is the matrix of coefficients of the equations. This is the generalization in matrix algebra of the requirement "while it does enter the other equation" mentioned above (in the line above the formulas). ## Related use of the term In engineering language, the term "parameter identification" is used to indicate a more general subject, which is roughly the same as estimation in statistics.
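A small simulation may help fix ideas. The sketch below (my own illustration, not from the article; the parameter values are made up) generates equilibrium (P, Q) pairs from a demand curve shifted by Z and a fixed supply curve with no disturbances, then regresses Q on P: the fitted line recovers the supply parameters exactly, while the demand slope is not recoverable from (P, Q) alone.

```python
import numpy as np

# Hypothetical structural parameters (illustrative only).
a_S, b_S = 2.0, 0.5            # supply: Q = a_S + b_S * P
a_D, b_D, d = 10.0, -0.8, 1.0  # demand: Q = a_D + b_D * P + d * Z

Z = np.array([1.0, 2.0, 3.0])  # demand shifter (e.g., income)

# Equilibrium: a_S + b_S*P = a_D + b_D*P + d*Z  =>  P = (a_D - a_S + d*Z)/(b_S - b_D)
P = (a_D - a_S + d * Z) / (b_S - b_D)
Q = a_S + b_S * P

# Regressing Q on P traces out the supply curve exactly.
slope, intercept = np.polyfit(P, Q, 1)
print(slope, intercept)  # ~0.5, ~2.0 == (b_S, a_S); b_D stays unidentified
```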
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758203983306885, "perplexity": 604.4528847985853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.62/warc/CC-MAIN-20180718213135-20180718233135-00492.warc.gz"}
https://www.physicsforums.com/threads/torque-req-for-fixed-rot-acc.124725/
# Torque Req. for Fixed Rot. Acc.

1. Jun 26, 2006

### Beyond Aphelion

I'm having difficulty with this question:

A day-care worker pushes tangentially on a small hand-driven merry-go-round and is able to accelerate it from rest to a spinning rate of 18.0 rpm in 10.1 s. Assume the merry-go-round is a disk of radius 2.30 m and has a mass of 830 kg, and two children (each with a mass of 25.4 kg) sit opposite each other on the edge. Calculate the torque required to produce the acceleration, neglecting frictional torque.

Alright, this is my process, although I know my end result is wrong:

I used the angular kinematic equation to solve for the angular acceleration: (ωf = ωi + αΔt)

I got α = 0.186629 rad/s² (approx.) after converting from rpm's.

The equation I have for torque is: τ = mr²α

But, since we're working with a disk, I = ½MR². Therefore, I solved for torque using the equation: τ = ½MR²α

I'm moderately confident with myself at this point, although I realize I can be completely off, but I think I'm screwing up what to use for mass. I plugged in the mass of the merry-go-round plus the mass of the two children: M = 880.8 kg

Most likely, this is where my reasoning is flawed. I've just recently been introduced to torque, and it is honestly confusing me.

τ = ½MR²α = ½(880.8 kg)(2.3 m)²(0.186629 rad/s²) = 434.79 N·m

This is the wrong answer, I know. But it is the best I could come up with based on the information my textbook is giving me. Any advice would be helpful.

2. Jun 26, 2006

### nrqed

For a point mass, I is MR² (where R is the distance from the point mass to the axis of rotation). For a disk, the moment of inertia is ½MR². What you have to do here is to calculate the total moment of inertia, which is $I_{total}=I_{child\,1} + I_{child\,2} + I_{disk}$

Use this for the total moment of inertia. Barring any algebra mistake, this should work.

Patrick
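Carrying out nrqed's suggestion numerically (my own sketch, not part of the thread; each child is treated as a point mass at the rim, per the hint):

```python
import math

M_disk, m_child, R = 830.0, 25.4, 2.30   # kg, kg, m
omega_f = 18.0 * 2 * math.pi / 60.0      # 18.0 rpm -> rad/s
alpha = omega_f / 10.1                   # rad/s^2, from omega_f = omega_i + alpha*dt, omega_i = 0

I_disk = 0.5 * M_disk * R**2             # disk about its center
I_children = 2 * m_child * R**2          # two point masses at the rim
tau = (I_disk + I_children) * alpha

print(round(alpha, 6), round(tau, 1))    # ~0.186629 rad/s^2, ~459.9 N*m
```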
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126288294792175, "perplexity": 1154.8846736858404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00358-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/tension-of-string-holding-one-mass-to-a-wall-on-top-of-another-moving-mass.300050/
# Tension of string holding one mass to a wall, on top of another moving mass?

#### finniusmorgan

Hello! This is my first post. I am taking an introductory physics course during my second semester of college and having some trouble with this problem. I hope I have followed the correct format; I appreciate any help that can be offered!

1. The problem statement, all variables and given/known data

A 4.82 kg block is placed on top of a 10.7 kg block. A horizontal force of F = 69.7 N is applied to the 10.7 kg block, and the 4.82 kg block is tied to the wall. The coefficient of kinetic friction between all moving surfaces is 0.197. There is friction both between the masses and between the 10.7 kg block and the ground. The acceleration of gravity is 9.8 m/s².

2. Relevant equations

F = ma
f = [coefficient of friction] * Fn

3. The attempt at a solution

I really don't know how to find the normal forces that are used in finding the frictional force, but after that I would subtract the friction from the horizontal force to find the net force on the bottom (10.7 kg) object. However, I do not understand how to relate that to the tension in the string holding the second object. Thank you!

#### sArGe99

Hint: The frictional force between the blocks acts in different directions on the two bodies. Tension can be calculated from it.
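One plausible reading of the hint (my own sketch, not from the thread): the top block is in equilibrium, so the string tension must balance the kinetic friction that the sliding bottom block exerts on it, giving T = μ·m_top·g. Treating the normal force on the top block as just its own weight is an assumption of this sketch.

```python
mu = 0.197
m_top = 4.82   # kg, block tied to the wall
g = 9.8        # m/s^2

N_top = m_top * g   # normal force between the blocks (assumes horizontal surfaces)
T = mu * N_top      # string tension balances kinetic friction on the top block
print(round(T, 2))  # ~9.31 N
```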
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564910888671875, "perplexity": 743.2844048884025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204857.82/warc/CC-MAIN-20190326054828-20190326080828-00550.warc.gz"}
https://www.bartleby.com/questions-and-answers/find-the-standard-deviation-step-by-step.-that-is-find-the-deviation-of-each-observation-from-the-me/e21d97c7-4364-41e3-966d-6d422da94b73
# Find the standard deviation step by step

Question

Find the standard deviation step by step. That is, find the deviation of each observation from the mean, square the deviations, then obtain the variance and the standard deviation.

I found the mean: 10.05

I need help finding the standard deviation and the variance.

Step 1

To find the standard deviation we need to do the following steps.

1. Find the mean.
2. For each data point, find the square of its distance to the mean.
3. Sum the values from Step 2.
4. Divide by the number of data points minus 1.
5. Take the square root.

The formula for the sample standard deviation (sd) is $s=\sqrt{\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}$.

Step 2

Step 3

Let's take one example. Suppose the data set contains of si...
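Since the asker's data set isn't included in this copy, here is a step-by-step sketch on made-up observations (the asker's mean of 10.05 came from their own data, not from these numbers):

```python
data = [9.8, 10.1, 10.3, 9.9, 10.4]  # hypothetical observations

n = len(data)
mean = sum(data) / n                       # step 1: the mean
sq_devs = [(x - mean) ** 2 for x in data]  # step 2: squared deviations
variance = sum(sq_devs) / (n - 1)          # steps 3-4: sample variance
sd = variance ** 0.5                       # step 5: standard deviation

print(mean, variance, sd)
```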
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475874900817871, "perplexity": 905.8006048267254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00531.warc.gz"}
http://cs.stackexchange.com/questions/6781/difference-between-reductions-in-algebraic-problems-versus-reductions-in-computa
# Difference between reductions in algebraic problems versus reductions in computational intractability

When I read about NP-completeness for the first time, I really wondered why the concept of reductions is given such high emphasis; after all, we have been looking at concepts such as reductions and one problem being a special case of another in mathematics since elementary algebra. What I mean by reductions in algebra is the following.

Problem 1: Find the value of x such that $x^2+ax+b=0$

Problem 2: Find the value of x such that $(x+m/n)^2=0$

We can go on to prove that both problems are the same and that a solution to one can be translated into a solution to the other. My question is: is the concept of reductions in computational intractability the same as in the algebraic setting above? If not, how are reductions in CI theory different?

-

Cross-posted on CSTheory cstheory.stackexchange.com/q/14395/77. Please don't do that. – Dave Clarke Nov 20 '12 at 15:05

Suppose that you knew how to solve $(x+m/n)^2 = 0$, but had no idea how to solve $x^2+ax+b=0$. Then you'd be very interested in knowing whether the two were equivalent and how to convert from one to the other! Similarly, suppose you're trying to solve $x^2+ax+b=0$. If you knew that $(x + m/n)^2=0$ could not be solved by any possible algorithm, you would again be very interested to find out that $x^2+ax+b=0$ is equivalent to it. Then you could give up on your problem because you know it can't be done. – usul Dec 4 '12 at 22:37

Reductions in computational complexity are similar to the reduction you described, but are usually given with bounds on the time or space. To be useful, these bounds are usually inconsequential (i.e., the same as or smaller than) relative to the bounds for solving the problem. Reductions in computational complexity are about converting one problem to another equivalent one. Usually one would also give bounds on how hard (how much time/space) a reduction takes as well. For example, all NP-complete problems have reductions to each other, and their reductions are polynomially bounded. Since algorithms solving NP-complete problems are suspected to take exponential time on a deterministic Turing machine, the polynomial time reductions are inconsequential (compared with the time of actually solving the reduced problem), and thus these reductions make all of the NP-complete problems essentially equivalent.

There can be reductions between problems in other complexity classes as well, and to be useful they would likely be bounded by something considered inconsequential to solving them.

-

Reductions are useful in studying computability not so much to prove that problems are computable (although that is also done), but to prove that problems are not computable. Used to prove incomputability, a reduction proof is a particular kind of proof by contradiction. You take your problem P and a known incomputable problem X, and you show that if you had an algorithm that computed solutions to P, you could use it to compute solutions to X. Since we already know that there does not exist a method for computing solutions to X, this proves that there cannot exist a method for computing solutions to P.

This isn't really "different" from the kind of reduction you're talking about in algebra; if you have an algebraic problem P and you show that a solution to P could be used to find a solution to X, and you already know that X has no solutions, then this would show that P has no solutions either.
But you usually think about the notion of "problem" slightly differently in algebra, as well as about the way you use "reductions". My experience in mathematics was that you're usually aiming to turn an equation you don't know how to solve into a form that you do know how to solve, in order to find a particular solution (possibly with unknown constants). Whereas a "problem" in computer science is a specification for producing an output (usually yes/no) on instances of an infinite family of inputs.1 And for computability purposes you're interested not in finding any particular solution, but in showing whether or not there exists an algorithm that can compute the solution for any input.

Basically it has turned out that a large number of computational problems have been difficult to directly prove incomputable, but are relatively easy to reduce to known incomputable problems. Variants on the Halting Problem are particularly rich sources of undergraduate-level incomputability proofs. I'm not aware that this is done nearly so much in mathematics, hence the difference in emphasis.

1 Any problem that only has one instance (e.g. one specific algebraic formula, like 2x^8 - 9 = 83) is completely uninteresting from the point of view of computability theory, because finite families of instances are all trivially computable by a stupid algorithm that simply has all the solutions hard-coded and immediately prints the one corresponding to its input. My Theory of Computation lecturer liked to joke that the problem "does God exist?" is computable; the program that computes the solution is either a single-state DFA that immediately accepts, or a single-state DFA that immediately rejects. We just don't know which one it is!

-

For example, say you give me a problem to solve. After spending time on it, I am unable to come up with an efficient algorithm for it. Maybe I'm too dumb to come up with an algorithm, or maybe a (fast) algorithm doesn't exist. I would like to back up my excuse somehow. If I can reduce a known hard problem $X$ to the problem you gave me, I'm showing that the problem you gave me is at least as hard as $X$. That means that maybe I'm not that dumb; other people have not been able to solve the problem either.
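To connect the two senses of "reduction" concretely, here is a small sketch (mine, not from the thread) in the spirit of the asker's algebra example: an instance transformation rewriting $x^2+ax+b=0$ as $(x+a/2)^2=a^2/4-b$, plus a map translating solutions back — the same shape as a many-one reduction, just without the complexity bounds.

```python
import math

def reduce_instance(a, b):
    """Map the instance x^2 + a x + b = 0 to the 'solved form' (x + c)^2 = d."""
    c = a / 2.0
    d = a * a / 4.0 - b
    return c, d

def solve_reduced(c, d):
    """Solve (x + c)^2 = d, translating solutions back to the original instance."""
    if d < 0:
        return []  # no real solutions
    root = math.sqrt(d)
    return [-c + root, -c - root]

# x^2 - 6x + 3 = 0  ->  (x - 3)^2 = 6  ->  x = 3 +- sqrt(6)
print(solve_reduced(*reduce_instance(-6, 3)))
```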
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8545082211494446, "perplexity": 324.0341267721226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010795590/warc/CC-MAIN-20140305091315-00044-ip-10-183-142-35.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/414599/how-to-detect-empty-elements-inside-a-tikz-foreach-statement
# How to detect “empty” elements inside a tikz \foreach statement

I want to define a macro that feeds its contents into a tikz \foreach statement, for example:

\newcommand\Macro[1]{
  \foreach \x in {#1} {x=\x.}
}

which is then used as \Macro{1,2,3,4}. Sometimes, I want to give it "empty" arguments, such as \Macro{,,,4,5,5}. I'd like to be able to detect the "empty" arguments and do something different in such cases. I thought that something like the following would work:

\newcommand\Macro[1]{%
  \foreach \x in {#1} {
    \if\relax\detokenize{\x}\relax Empty! \else x=\x. \fi
  }
}

but for some reason this does not detect the empty arguments. I have tried many other things, such as expanding \x first, but I am yet to find anything that works. Can anyone see a way of doing this? [My real code is generating a tikzpicture environment so I really want to use \foreach from tikz.]

Something like this?

\documentclass{article}
\usepackage{tikz}
\newcommand\Macro[1]{%
  \foreach \x in {#1} {
    \ifx\x\empty\relax Empty! \else x=\x. \fi
  }
}
\begin{document}
\Macro{,,,4,5,5}
\end{document}

• Thanks marmot! I thought that I was probably being stupid and missing something obvious! – Andrew Feb 10 '18 at 3:50

The problem in the macro from the question:

\newcommand\Macro[1]{%
  \foreach \x in {#1} {
    \if\relax\detokenize{\x}\relax Empty! \else x=\x. \fi
  }
}

is that \detokenize does not expand its argument and returns the two tokens \ and x. This is cured by adding an \expandafter:

\detokenize\expandafter{\x}

The full macro:

\newcommand\Macro[1]{%
  \foreach \x in {#1} {
    \if\relax\detokenize\expandafter{\x}\relax Empty! \else x=\x. \fi
  }
}

The mandatory expl3 answer (after noting that \detokenize\expandafter{\x} would be the solution):

\documentclass{article}
\usepackage{tikz}
\usepackage{xparse}

\ExplSyntaxOn
\NewExpandableDocumentCommand{\blankTF}{mmm}
 {% #1 = text to test, #2 = blank case, #3 = non blank case
  \str_if_eq_x:nnTF { #1 } { } { #2 } { #3 }
 }
\NewDocumentCommand{\lforeach}{O{,}mm}
 {% #1 = delimiter, #2 = list, #3 = code
  \seq_set_split:Nnn \l_andrew_foreach_seq { #1 } { #2 }
  \seq_map_inline:Nn \l_andrew_foreach_seq { #3 }
 }
\ExplSyntaxOff

\newcommand\Macro[1]{%
  \lforeach{#1}{%
    \blankTF{##1}{Empty!}{$|$##1$|$.}
  }
}

\begin{document}

\lforeach[-]{a-b--\texttt{c}}{%
  \blankTF{#1}{Empty!}{$|$#1$|$.}
}

\lforeach{, ,,4, 5 ,5}{%
  \blankTF{#1}{Empty!}{$|$#1$|$.}
}

\Macro{, ,,4, 5 ,5}

\end{document}

The current item in the cycle is denoted by #1 (which has to become ##1 in the definition of \Macro).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680665731430054, "perplexity": 4374.945929827382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00310.warc.gz"}
http://tex.stackexchange.com/questions/106514/hyperlink-with-number-sign
# Hyperlink with # (number sign)

\documentclass{article}
\usepackage{hyperref}
\begin{document}
\href{http://de.wikipedia.org/wiki/Hybridorbital#Beispiele}
{http://de.wikipedia.org/wiki/Hybridorbital#Beispiele}
\end{document}

which has a # in the link; however, if I enter it as \#Beispiele it won't open the link in the PDF, and only if I leave out the section after # will it open the page. However, I want it to open at a specific section of the webpage, which is encoded by the # part.

-

Welcome to TeX.sx! Please note that you can mark inline code with back ticks like code. Furthermore I made a minimal working example (MWE) of your code by adding a documentclass. – Tobi Apr 2 '13 at 11:10

Do you really mean \\# Beispiele and not \# Beispiele (single backslash)? – Tobi Apr 2 '13 at 11:11

You have to prefix the hash with a backslash:

\href{http://de.wikipedia.org/wiki/Hybridorbital#Beispiele}
{http://de.wikipedia.org/wiki/Hybridorbital\#Beispiele}

Now it should work ;)

-

Note that the prefixed backslash is only necessary in the second argument, which is the one to be typeset by the TeX engine and therefore must respect TeX special symbols (which # is one of). The first argument is the verbatim link and thus doesn't need special care with symbols. – Tobi Apr 2 '13 at 11:05

I got it now! My mistake was that I wrote \# Beispiel instead of \#Beispiel (without space in between) because I thought it wouldn't recognise the second command. – gasserv Apr 2 '13 at 11:11

Use \url instead:

\documentclass{article}
\usepackage{hyperref}
\begin{document}
\url{http://de.wikipedia.org/wiki/Hybridorbital#Beispiele}
\end{document}

-

Didn't think of that :) Note that \url typesets the url in typewriter font by default. – ralfix Apr 2 '13 at 11:04

Just use \urlstyle{same} to change this behaviour. @Micha may like to add this to the answer. – Tobi Apr 2 '13 at 11:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9745509624481201, "perplexity": 4124.546963070828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267075.55/warc/CC-MAIN-20140728011747-00156-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electric-circuit-to-solve-with-nodal-analysis.275198/
# Electric Circuit to solve with nodal analysis

1. Nov 26, 2008

Can someone solve this circuit? Find v and i in the circuit. (The circuit diagram is not reproduced in this copy.)

I know that we have to do something with a supernode, but where is the supernode here? How should I approach this problem? Please help me :)

By the way, this is a nice forum; I found it today. I hope I will be able to help you in other engineering topics. Greets

2. Nov 26, 2008

### Redbelly98

Staff Emeritus

Welcome to PF! For homework problems, we like for students to:

1. Show any relevant equations or principles, and show your attempt at solving it before getting help.

and

2. Post homework problems in the homework area: https://www.physicsforums.com/forumdisplay.php?f=152
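Since the actual circuit isn't shown here, the following sympy sketch works a hypothetical two-node circuit of my own (a voltage source Vs between nodes 1 and 2 forms the supernode, with resistors R1, R2 to ground and a current source Is into node 1), just to illustrate the supernode method: write one KCL equation summed over the whole supernode, plus the source constraint v1 − v2 = Vs.

```python
from sympy import symbols, Eq, solve

v1, v2 = symbols('v1 v2')
R1, R2, Vs, Is = 2.0, 4.0, 6.0, 3.0  # made-up component values

# KCL summed over the supernode enclosing the voltage source:
kcl = Eq(v1 / R1 + v2 / R2, Is)
# Constraint imposed by the voltage source inside the supernode:
constraint = Eq(v1 - v2, Vs)

print(solve([kcl, constraint], [v1, v2]))  # {v1: 6.0, v2: 0.0}
```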
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8607749342918396, "perplexity": 2868.9500154179973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864943.28/warc/CC-MAIN-20180623054721-20180623074721-00423.warc.gz"}
http://wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=2003&number=831
WIAS Preprint No. 831, (2003)

# Stochastic models and Monte Carlo algorithms for Boltzmann type equations

Authors

• Wagner, Wolfgang

2010 Mathematics Subject Classification

• 65C05 76P05 82C80

Keywords

• Boltzmann equation, stochastic models, Monte Carlo algorithms

Abstract

In this paper we are concerned with three typical aspects of the Monte Carlo approach. First there is a certain field of application, namely physical systems described by the Boltzmann equation. Then some class of stochastic models is introduced and its relation to the equation is studied using probability theory. Finally Monte Carlo algorithms based on those models are constructed. Here numerical issues like efficiency and error estimates are taken into account. In Section 1 we recall some basic facts from the kinetic theory of gases, introduce the Boltzmann equation and discuss some applications. Section 2 is devoted to the study of stochastic particle systems related to the Boltzmann equation. The main interest is in the convergence of the system (when the number of particles increases) to the solution of the equation in an appropriate sense. In Section 3 we introduce a modification of the standard "direct simulation Monte Carlo" method, which allows us to tackle the problem of variance reduction. Results of some numerical experiments are presented.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652292132377625, "perplexity": 303.7569933349164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583730728.68/warc/CC-MAIN-20190120184253-20190120210253-00385.warc.gz"}
http://logiciansdoitwithmodels.com/
February 24, 2013

It's a fun story about the early search for mathematical foundations involving figures like Bertrand Russell, Georg Cantor, Ludwig Wittgenstein, Gottlob Frege, Kurt Gödel and Alan Turing.

http://www.scribd.com/doc/98921232/Bertrand-Russell-Logicomix

And for those interested, here's a review of Logicomix by noted philosopher of mathematics, Paolo Mancosu: https://philosophy.berkeley.edu/file/509/logicomix-review-january-2010.pdf

The Nonsense Math Effect

December 28, 2012

An article suggesting that non-math academics are impressed by equations even if the equations are nonsense: http://journal.sjdm.org/12/12810/jdm12810.pdf

Well Ordering Infinite Sets, the Axiom of Choice and the Continuum Hypothesis

December 17, 2012

We're trying to understand the relationship between $\aleph_{0}$ and other infinite cardinal numbers. We've reviewed two ways (here and here) to generate infinite cardinals greater than $\aleph_{0}$. The first way is by considering the set of all subsets of positive integers, the power set $\mathcal{P}(\mathbb{N})$ of $\mathbb{N}$ with cardinality $2^{\aleph_{0}}$. The second way is by considering the set of countably infinite ordinals, $\omega_{1}$, and its cardinality, $\aleph_{1}$.

We begin by introducing the axiom of choice. The axiom of choice (AC) is an axiom of set theory that says, informally, that for any collection of bins, each containing at least one element, it's possible to make a selection of at least one element from each bin. It was first set forth by Zermelo in 1904 and was controversial for a time (early 20th century) due to its being highly non-constructive. For example, AC is equivalent to the well ordering theorem (WOT) stating that every set can be well ordered, and it can be proven using AC/WOT that there are non-Lebesgue measurable sets of real numbers. Nevertheless, this result is consistent with the result that no such set of reals is definable! Use of AC to achieve unintuitive mathematical results, like paradoxical decompositions in geometry in the spirit of the Banach-Tarski paradox, has also fueled skepticism and mistrust of the axiom. But contemporary mathematicians make free use of the axiom and hardly mind that it was ever controversial.

Formally, the axiom of choice says the following: For each family $(X_{i})_{i \in I}$ of non-empty sets $X_{i}$, the product set $\prod_{i \in I}{X_{i}}$ is non-empty. The elements of $\prod_{i \in I}{X_{i}}$ are actually "choice functions" $x: I \rightarrow \bigcup_{i \in I}{X_{i}}$ satisfying $x(i) \in X_{i}$ for each $i \in I$.

AC is important when thinking about infinite cardinals in part because of its equivalence to the WOT. Because WOT says that every set can be well ordered, it follows that each cardinal (any set really) can be associated with an ordinal number, and you can count them via the ordinals. For instance, $\aleph_{0}$ can be represented by $\omega$, $\aleph_{1}$ by $\omega_{1}$, and in general AC/WOT enables us to give the von Neumann definition of cardinal numbers, where the cardinality of a set $X$ is the least ordinal $\alpha$ such that there is a bijection between $X$ and $\alpha$. AC and WOT do a lot of other work as well in terms of ordering infinities. We say that an aleph is the cardinal number of a well ordered infinite set. Because every set is well-orderable, all infinite cardinals are alephs.
It was shown by Friedrich Hartogs in 1915 that trichotomy, which is the property of an order relation on a set $A$ such that for any $x, y \in A$, exactly one of $x < y$, $x = y$, or $x > y$ holds, is equivalent to AC. So with AC we can list, in (a total) order, the infinite cardinals and compare them. Thus we can set up the following infinite list of alephs, $\aleph_{0}, \aleph_{1}, \aleph_{2}, \cdots, \aleph_{\alpha}, \cdots$. We know that $\aleph_{1}$ is greater than and distinct from $\aleph_{0}$. The cardinal $\aleph_{1}$ is the cardinality of the ordinal $\omega_{1}$, which is larger than all countable ordinals, so $\aleph_{1}$ is distinguished from $\aleph_{0}$.

What about $2^{\aleph_{0}}$? It's unclear where $2^{\aleph_{0}}$ fits in the list $\aleph_{0}, \aleph_{1}, \aleph_{2}, \cdots, \aleph_{\alpha}, \cdots$. The infinite cardinal $2^{\aleph_{0}}$ is the cardinality of the set of subsets of $\mathbb{N}$, but it's also the cardinality of the set, $\mathbb{R}$, of real numbers or the set of points on a line, known as the continuum. (This equality is provable using the Cantor-Schröder-Bernstein theorem and follows from the proof of the uncountability of $\mathbb{R}$.) Georg Cantor, the inventor of set theory, conjectured in 1878 that there is no set whose cardinality is strictly between the cardinality of the integers ($\aleph_{0}$) and the cardinality of the continuum ($2^{\aleph_{0}}$). Since we're assuming AC, that means that $2^{\aleph_{0}} = \aleph_{1}$. This conjecture is famously known as the Continuum Hypothesis (CH) and was the first of 23 problems in David Hilbert's famous 1900 list of open problems in mathematics. The problem of whether or not CH is true remains open to this day, although Kurt Gödel (in 1940) and Paul Cohen (in 1963) showed that CH is independent of the axioms of Zermelo-Fraenkel set theory, if these axioms are consistent. There is much of philosophical interest surrounding the mathematics of the continuum hypothesis and I hope to be able to turn my attention to those topics in the future.

Priest, Beall and Armour-Garb's "The Law of Non-Contradiction" Available Online via Scribd

November 5, 2012

Priest, Beall and Armour-Garb's "The Law of Non-Contradiction" (link below) is a great collection of essays on the philosophy and logic of dialetheism, the belief that there are sentences A such that both A and its negation, ¬A, are true. Non-classical, paraconsistent logics may be necessary for formalizing and understanding physical and social systems.

http://www.scribd.com/doc/62132941/3/Letters-to-Beall-and-Priest

Paul Cohen Reflects on the Nature of Mathematics

November 5, 2012

Reflections on the nature of mathematics from Paul Cohen's "Comments on the foundations of set theory" in Scott and Jech, Axiomatic Set Theory, Vol. 1, p. 15.

A Mathematician's Survival Guide, by Pete Casazza

April 29, 2012

"All My Imaginary Friends Like Me" -Nicolas Bourbaki

From §2 of "A Mathematician's Survival Guide", a good read for all, not just mathematical people.

The Cardinality of Uncountable Sets II

March 6, 2012

We can get at another infinite cardinal that is greater than $\aleph_{0}$ by thinking about the ordinal numbers. Think of the natural, or counting numbers $\textup{N} = (1, 2, 3, \cdots)$. These numbers double as the finite cardinal numbers. Cardinal numbers, we have seen, express the size of a set or the number of objects in a collection (e.g., as in "24 is the number of hours in a day").
But they also double as the finite ordinal numbers, which indicate a place in an ordering or in a sequence (e.g., as in "the letter 'x' is the 24th letter in the English alphabet"). In the case of finite collections, the finite ordinal numbers are the same as the finite cardinals. But when we start thinking of infinite collections the similarities diverge. In order to see the differences in the infinite case we should get clear on what an ordinal number is.

Say that a set $x$ is transitive if, and only if, every element of $x$ is a subset of $x$, where $x$ is not a urelement (a urelement being something that is not a set). Now say that a set $x$ is well-ordered by the membership relation, $\in$, if the relation $\{\langle y, z\rangle \in x \times x : y \in z\}$ well-orders $x$. What this does is simply order the elements of $x$ in terms of membership. We can do this type of thing with transitive sets. Combining these two definitions we get the definition of an ordinal number: a set $x$ is an ordinal if, and only if, $x$ is transitive and well-ordered by $\in$.

Now, let $\omega = \{0, 1, 2, 3, \cdots\}$ (i.e., the set $\textup{N}$ of natural numbers). $\omega$ is an ordinal because when we think of the natural numbers as constructed by letting $0 = \emptyset$ and letting $1 = \{0\}$, the singleton set of $0$, $2 = \{0, \{0\}\}$, and so on and so forth, it satisfies the definition above of an ordinal number as a transitive set well-ordered by $\in$. So we can set up the sequence $0, 1, 2, 3, \cdots, \omega$ with all ordinals less than $\omega$ either equal to $0$ or one of its successors.

Suppose that you take the natural numbers and re-arrange (re-order) them so that $0$ is the last element. This is weird because the regular ordering of the natural numbers has no last element. But still, you can think of there being a countable infinity of natural numbers $1, 2, 3, \cdots$ prior to the appearance of $0$. So we have the standard order of $\textup{N}$ and we have added another element, $0$. If we let $\omega$ be the standard order of $\textup{N}$, then we have just described $\omega + 1$. We can do the same thing by now setting $1$ as the last element of $\omega + 1$ and thus get $\omega + 2$. Note that addition (and multiplication below) does not commute; e.g., $1 + \omega = \omega$ and not $\omega + 1$. This process can be generalized (e.g., $n, n +1, \cdots, 0, 1, \cdots, n -1$) to get $\omega + n$.

Again, doing something weird: take the natural numbers and put all the even numbers first, followed by the odd numbers. It'll look like this: $0, 2, 4, \cdots, 2n, \cdots, 1, 3, 5, \cdots, 2n+1, \cdots$. Here we have taken one copy of $\omega$ (the odds) and appended it to another copy of $\omega$ (the evens), so we have $\omega + \omega$. In each case, $\omega$, $\omega + n$ and $\omega + \omega$, we are dealing with the same cardinality, the cardinality of $\textup{N}$. We've created a variety of different ordinal numbers here, and, as they represent different orderings of the natural numbers, they are all countable.

There are many more countably infinite ordinals. For example, the ordinal $\omega \cdot 2 = \{ 0, 1, 2, \cdots, \omega, \omega+1, \omega+2, \cdots \}$, then $\omega \cdot 2+1, \cdots, \omega^{2}, \omega^{3}, \omega^{4}, \cdots, \omega^{\omega}, \cdots, \omega^{\omega^{\omega}}, \cdots$. Taking the countable ordinals and laying them out (kind of like in the previous sentence but starting with $0, 1, 2, 3, \cdots, \omega, \omega + 1, \omega + 2, \cdots$) we end up with a set that is itself an ordinal.
In order to see this, let $\alpha$ be the set of countable ordinals. If $\beta \in \alpha$ then $\beta \subset \alpha$, since the members of $\beta$ are countable ordinals. Therefore $\alpha$ is an ordinal. It is in fact the first uncountable ordinal, because if it were countable, then $\alpha$ would be a member of itself, and there would be an infinitely descending sequence of ordinals. But because the ordinals are well-ordered by $\in$ (see definition above), this cannot be the case. So the set of countable ordinals is uncountable. (It is also the smallest such set because the ordinals are well ordered by $\in$, so every ordinal in $\alpha$ is a member of $\alpha$ and countable.) This uncountable set goes by the name $\omega_{1}$.

Here we see how the similarities between the ordinal numbers and the cardinal numbers in the finite case diverge in the infinite case. Whereas there is only one countably infinite cardinal, $\aleph_{0}$, there are uncountably many countably infinite ordinals, namely all countably infinite ordinals less than $\omega_{1}$.

It is natural to wonder about the cardinality of the set $\omega_{1}$ of countable ordinals. Its cardinality is transfinite and is denoted by the uncountable cardinal number $\aleph_{1}$. So far we've talked about $\aleph_{0}$ and $2^{\aleph_{0}}$ (see here, and here, respectively), and have generated $\aleph_{1}$ by considering the uncountable set of countably infinite ordinals, $\omega_{1}$. In the next update we'll talk more about the relationship between these cardinal numbers as well as the celebrated axiom of choice.

Cantor's Attic

January 15, 2012

Cantor's Attic: a resource on mathematical infinity. Philosophically rich mathematics is the best mathematics.

FOM Posting on A. Kiselev's claim that "There are no weakly inaccessible cardinals" in ZF

August 17, 2011

Alex Kiselev claims to have shown "There are no weakly inaccessible cardinals" in Zermelo-Fraenkel set theory (ZF). This would have the consequence that strongly inaccessible cardinals don't exist either, and so on for all the other large cardinals. Martin Davis on the FOM list cautions that the claim is "highly dubious".

Here are links to Kiselev's papers:

Link to the FOM list entry: http://www.cs.nyu.edu/pipermail/fom/2011-August/015694.html

The Cardinality of Uncountable Sets I

August 11, 2011

We saw earlier that $\aleph_{0}$ can accommodate a countable infinity of countable infinities. Now we're going to produce an infinite cardinal number that is bigger than $\aleph_{0}$. Cardinal numbers express the size of, or number of elements of, a set. Countably infinite sets have cardinality $\aleph_{0}$. We know that countably infinite sets can be matched up (by way of a one-to-one and onto function) to the set of positive integers. So if we can find a cardinal number greater than $\aleph_{0}$, we can find a set greater than the set of positive integers, or an uncountably infinite set.

We're going to begin with an infinite list of subsets of the set of positive integers: $S_{1}, S_{2}, S_{3}, \cdots$ And we're going to represent each of these sets by a function of positive integers: $s_{n}(x)= \begin{cases} 1, &\text{if } x \in S_{n};\\ 0, &\text{if } x\notin S_{n}. \end{cases}$
For example, if the third set in our list, $S_{3}$, is the set of positive even integers, then the values of the function $s_{3}(x)$ are $s_{3}(1) = 0, s_{3}(2) = 1, s_{3}(3) = 0, \cdots$ And if the fourth set in our list, $S_{4}$, is the set of squares, then the values of the function $s_{4}(x)$ are $s_{4}(1) = 1, s_{4}(2) = 0, s_{4}(3) = 0, \cdots$ and so on.

Imagine now that we set up an infinite table like this. The top row will be our header row, or our x-axis, and will contain, in the first position, the label "Sets of positive integers". To the right of this label we list out the positive integers, $1, 2, 3, \cdots$ These are our columns. Immediately below the label "Sets of positive integers" we list the names of each subset of positive integers, $S_{1}, S_{2}, S_{3}, \cdots$ This is our y-axis and extends infinitely downwards. We now fill out the values of each coordinate using the values of the functions $s_{1}, s_{2}, s_{3}, \cdots$ extending to the right of each set name on the y-axis. Basically the rows of the infinite table contain the 0-1 representation of each of the sets. It looks like this (the table itself is not reproduced in this copy):

You may have noticed that the diagonal values of the table are in bold. The bold values form a sequence, $s_{1}(1), s_{2}(2), s_{3}(3), \cdots$ called the diagonal sequence. We'll return to the diagonal sequence in a moment. But first we want to ask: does our list (along the y-axis) contain all of the sets (subsets) of positive integers? It doesn't if we can always produce a set different from each of the sets on the list.

Here is where the diagonal comes in. The diagonal sequence is just a sequence of 0s and 1s and may very well encode a set of positive integers appearing in our list. But we're trying to find a set that does not appear on the list. It's easy to find one if we produce a sequence that differs from the diagonal sequence in every position. So we can take the diagonal sequence and create the antidiagonal by changing 1s to 0s and 0s to 1s in the diagonal sequence. So let the antidiagonal be given by subtracting each element of the diagonal sequence from 1. The antidiagonal is $1 - s_{1}(1), 1 - s_{2}(2), 1 - s_{3}(3), \dots$ and does not appear anywhere as a row in our table.

But suppose that the antidiagonal did appear as, say, the kth row in our table, thus representing the kth subset of positive integers. It would look like this: $s_{k}(1) = 1 - s_{1}(1), s_{k}(2) = 1 - s_{2}(2), s_{k}(3) = 1 - s_{3}(3), \cdots, s_{k}(k) = 1 - s_{k}(k), \cdots$ But $s_{k}(k) = 1 - s_{k}(k)$ can never obtain because $s_{k}(k)$ has to be either 0 or 1. If it's a 0, then we have $0 = 1 - 0 = 1$. But if it's a 1 then we have $1 = 1 - 1 = 0$. In either case, it is absurd, so the antidiagonal, whatever it may be, must be different from any set appearing in the list $S_{1}, S_{2}, S_{3}, \cdots$ of subsets of positive integers. If we take the antidiagonal and append it to our list of subsets of positive integers, all we have to do is repeat the argument and we will end up with another distinct antidiagonal sequence that does not appear on our list.

The set of all subsets of the positive integers is called the power set of the set of positive integers. If the set of positive integers is denoted by $\textup{N}$ then the power set of $\textup{N}$ is $\mathcal{P}(\textup{N})$. What can we say about the cardinality of $\mathcal{P}(\textup{N})$?
Well, for any one of the sequences given by our list of subsets of positive integers, the first digit can be either 0 or 1, and the same is true for the second and third digits, and so on for all $\aleph_{0}$ digits in the sequence. This means that there are $2 \times 2 \times 2 \times \cdots$ (for $\aleph_{0}$ factors) possible sequences of 0s and 1s, and so there are $2^{\aleph_{0}}$ sets of positive integers. And we have just shown by means of the antidiagonal that $2^{\aleph_{0}} > \aleph_{0}$. So $2^{\aleph_{0}}$ is an infinite cardinal number that is greater than $\aleph_{0}$, and the power set, $\mathcal{P}(\textup{N})$, of the positive integers is an uncountably infinite set. In the next update we'll produce another cardinal number, $\aleph_{1}$, greater than $\aleph_{0}$.
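A finite illustration of the diagonal construction (my own sketch; a genuine diagonal argument needs the full infinite table, so this only shows the mechanism on an n×n corner):

```python
def antidiagonal(rows):
    """Given n rows of 0/1 values (the n x n corner of the table),
    flip the diagonal to build a row that differs from each given row."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 1, 0, 1],  # e.g., s_1 = indicator of the even numbers on 1..4
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
anti = antidiagonal(rows)
print(anti)  # [1, 0, 1, 0]

# The antidiagonal differs from row k in position k, so it matches no row:
assert all(anti[k] != rows[k][k] for k in range(len(rows)))
```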
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 150, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265945553779602, "perplexity": 222.93093512093967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383218/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz"}
https://sunlimingbit.wordpress.com/category/pde/elliptic-pde/
## Category Archives: Elliptic PDE

### Bach flat four dimensional manifold and sigma2 functional

We want to find the necessary condition for a metric to be a critical point of ${\int\sigma_2}$ on a four dimensional manifold.

1. Preliminary

Suppose ${(M^n,g)}$ is a Riemannian manifold with ${n=4}$. ${P_g}$ is the Schouten tensor

$\displaystyle P_g=\frac{1}{n-2}\left(Ric-\frac{R}{2(n-1)}g\right)$

and denote ${J=\text{\,Tr\,} P_g}$. Define

$\displaystyle \sigma_2(g)=\frac{1}{2}[(\text{\,Tr\,} P_g)^2-|P_g|_g^2]$

$\displaystyle I_2(g)=\int_M \sigma_2(g)d\mu_g$

where ${|P|_g^2=\langle P,P\rangle_g}$. It is well known that ${I_2(g)}$ is conformally invariant.

Suppose ${g(t)=g+th}$ where ${h}$ is a symmetric 2-tensor. We want to calculate the first derivative of ${I_2(g(t))}$ at ${t=0}$. To that end, let us list some basic facts (see the book of Topping). First denote ${(\delta h)_j=-\nabla^i{h_{ij}}}$ the divergence operator and

$\displaystyle G(h)=h-\frac{1}{2}(\text{\,Tr\,} h) g$

$\displaystyle (\Delta_L h)_{ij}=(\Delta h)_{ij}-h_{ik}Ric_{jl}g^{kl}-h_{jk}Ric_{il}g^{kl}+2R_{ikjl}h^{kl}$

where ${\Delta_L}$ is the Lichnerowicz Laplacian. Then the first variations of the Ricci curvature and scalar curvature are

$\displaystyle \dot{R}=\delta^2h-\Delta(\text{\,Tr\,} h)-\langle h,Ric\rangle \ \ \ \ \ (1)$

$\displaystyle \dot{Ric}=-\frac{1}{2}\Delta_Lh-\frac{1}{2}L_{(\delta G(h))^\sharp}g=-\frac{1}{2}\Delta_Lh-\frac{1}{2}L_{(\delta h)^\sharp}g-\frac{1}{2}Hess(\text{\,Tr\,} h) =-\frac{1}{2}\Delta_Lh-d(\delta h)-\frac{1}{2}Hess(\text{\,Tr\,} h)$

where we use an upper dot to denote the derivative with respect to ${t}$.

2. First variation of the sigma2 functional

Lemma 1. ${(M^4,g)}$ is a critical point of ${I_2(g)}$ if and only if

$\displaystyle \Delta P-Hess(J)+2\mathring{Rm}(P)-2JP-|P|_g^2g=0 \ \ \ \ \ (2)$

where ${(\mathring{Rm}(P))_{ij}=R_{ikjl}P^{kl}}$.

Proof:

$\displaystyle \frac{d}{dt}\big|_{t=0} I_2(g(t))=\int_M J\dot J-\langle\dot P,P\rangle+\langle h,P\wedge P\rangle+\frac{1}{2}\sigma_2\text{\,Tr\,} h \,d\mu_g$

where ${(P\wedge P)_{ij}=P_{ik}P_{jl}g^{kl}}$.
Since we have

$\displaystyle \int_M\langle P,\dot P\rangle =\frac{1}{n-2}\int_M\langle P, \dot Ric-\dot Jg-Jh\rangle =\frac{1}{n-2}\int_M[\langle P, \dot Ric\rangle-\dot J J-J\langle h,P\rangle]$

$\displaystyle =\frac{1}{n-2}\int_M[-\frac{1}{2}\langle h,\Delta_L P\rangle+\langle h,Hess(J)\rangle-\frac{1}{2}\Delta J \text{\,Tr\,} h-\dot J J-J\langle h,P\rangle]$

plugging this into the derivative of ${I_2}$ gives

$\displaystyle (n-2)\frac{d}{dt}\big|_{t=0} I_2(g(t)) =\int_M\frac{1}{2}\langle h,\Delta_L P\rangle-\langle h,Hess(J)\rangle+\frac{1}{2}\Delta J \text{\,Tr\,} h$

$\displaystyle \quad +(n-1)\dot J J+J\langle h,P\rangle+(n-2)\langle h,P\wedge P\rangle+\frac{n-2}{2}\sigma_2\text{\,Tr\,} h \,d\mu_g$

In order to simplify the above equation, we recall the definition of the Lichnerowicz Laplacian ${\Delta_L}$:

$\displaystyle (\Delta_LP)_{ij}=(\Delta P)_{ij}-2P_{ik}Ric_{jl}g^{kl}+2R_{ikjl}P^{kl} =(\Delta P)_{ij}-2(n-2)P_{ik}P_{jl}g^{kl}-2JP_{ij}+2R_{ikjl}P^{kl}$

Apply (1), and integrate by parts, to get

$(n-1)\dot J J=\frac12J\dot R=\frac12J[\delta^2h-\Delta(\text{\,Tr\,} h)-\langle h,Ric\rangle] =\frac{1}{2}[\langle h, Hess(J)\rangle-\text{\,Tr\,} h\Delta J-(n-2)J\langle h, P\rangle-J^2\text{\,Tr\,} h]$

Therefore we can simplify it to

$\displaystyle (n-2)\frac{d}{dt}\big|_{t=0} I_2(g(t)) =\int_M\frac{1}{2}\langle h,\Delta P\rangle-\frac{1}{2}\langle h, Hess(J)\rangle+h^{ij}R_{ikjl}P^{kl} -\frac{n-2}{2}J\langle h,P\rangle-\frac{1}{2} J^2 \text{\,Tr\,} h+\frac{n-2}{2}\sigma_2\text{\,Tr\,} h \,d\mu_g$

Let us denote ${(\mathring{Rm}(P))_{ij}=R_{ikjl}P^{kl}}$. Using the fact ${n=4}$ and the definition of ${\sigma_2}$,

$\displaystyle \frac{d}{dt}\big|_{t=0} I_2(g(t))=\frac{1}{4}\int_M\langle h,Q\rangle d\mu_g$

where

$\displaystyle Q=\Delta P-Hess(J)+2\mathring{Rm}(P)-2JP-|P|_g^2g$

$\Box$

Remark 1. It is easy to verify ${\text{\,Tr\,} Q=0}$; this is equivalent to saying ${I_2}$ is invariant under conformal change. More precisely, letting ${h=2ug}$, then

$\displaystyle \frac{d}{dt}\big|_{t=0} I_2(g(t))=\frac{1}{4}\int_M\langle h,Q\rangle d\mu_g=\frac{1}{2}\int_M u\text{\,Tr\,} Q d\mu_g=0.$

Remark 2. If ${g}$ is an Einstein metric with ${Ric=2(n-1)\lambda g}$, then ${P=\lambda g}$, ${J=n\lambda}$ and

$\displaystyle \mathring{Rm}(P)=\lambda Ric=2(n-1)\lambda^2 g$

It is easy to verify that ${Q=0}$. In other words, Einstein metrics are critical points of ${I_2}$.

Are there any non-Einstein metrics which are critical points of ${I_2}$? Here is one example. Suppose ${M=\mathbb{S}^2\times N}$, where ${\mathbb{S}^2}$ is the sphere with the standard round metric and ${(N,g_N)}$ is a two dimensional compact manifold with sectional curvature ${-1}$. ${M}$ is endowed with the product metric. We can prove ${Ric=g_{S^2}-g_N}$, ${P=\frac{1}{2}g_{S^2}-\frac{1}{2}g_N}$, ${J=0}$, ${\mathring{Rm}(P)=\frac{1}{2}g_{prod}}$ and consequently ${Q=0}$.

Note that the above example is a locally conformally flat manifold. For this type of manifold, we have the following lemma.

Lemma 2. Suppose ${g}$ is locally conformally flat and ${Q=0}$; then ${g}$ is Bach flat.

Proof: When ${g}$ is locally conformally flat,

$\displaystyle \mathring{Rm}(P)=JP+|P|_g^2g-2P\wedge P$

so ${Q=0}$ is equivalent to

$\displaystyle \Delta P-Hess(J)+|P|_g^2g-4P\wedge P=0$

Actually this is equivalent to the Bach tensor ${B}$ being zero. $\Box$
3. Another point of view

We have the Euler characteristic formula for four dimensional manifolds

$\displaystyle 8\pi^2\chi(M)=\int_M (|W|_g^2+\sigma_2) d\mu_g$

therefore the critical points of ${\int_M \sigma_2d\mu_g}$ are the same as the critical points of ${\int_M |W|_g^2d\mu_g}$. However, the functional

$\displaystyle g\rightarrow \int_M |W|_g^2d\mu_g$

was well studied by Bach. The critical points of this functional satisfy the vanishing of the Bach tensor

$\displaystyle B_{ij}=\nabla^k\nabla^l W_{likj}+\frac{1}{2}Ric^{kl}W_{likj}$

Obviously, ${B=0}$ for Einstein metrics, but not all Bach flat metrics are Einstein. For example, ${B=0}$ for any locally conformally flat manifold.

### Unique continuation property on the boundary

I am recording a theorem proved by Jin Zhiren in his thesis. Suppose $\Omega$ is a smooth domain in $\mathbb{R}^n$, $x_0\in \partial \Omega$ and $u$ is a harmonic function in $\Omega$. If there exist $A, b>0$ such that

$\displaystyle |u(x)|\leq Ae^{-\frac{b}{|x-x_0|}}\quad x\in \Omega$

for $|x-x_0|$ small, then $u\equiv 0$. If $n=2$, the same conclusion holds for solutions of a general second order linear elliptic equation.

A borderline example for this theorem is to let $u$ be the real part of $e^{-1/z^\alpha}$, $\alpha\in (0,1)$. Then $u$ is harmonic in the right half plane, $|u|\leq Ae^{-1/|x|^\alpha}$, and consequently all derivatives vanish at the origin: $D^\beta u(0)=0$.

### Interior estimate for Monge-Ampere equation

Suppose $u$ is a generalized solution of the Monge-Ampere equation

$\det(\nabla^2 u)=1 \text{ in } B_1\subset\mathbb{R}^n$

When $n=2$, Heinz proved an interior estimate of the form $|\nabla^2 u|_{B_{1/2}}\leq C(\sup_{B_1}|u|)$. When $n\geq 3$, Pogorelov has a counterexample: there are generalized solutions which belong to $C^{1,\beta}(B_1)$ for some $\beta\in (0,1)$ but fail to be $C^2$. See his book The Minkowski Multidimensional Problem, page 83.

### Newton tensor

Suppose ${A:V\rightarrow V}$ is a symmetric endomorphism of a vector space ${V}$, and ${\sigma_k}$ is the ${k}$-th elementary symmetric function of the eigenvalues of ${A}$. Then

$\displaystyle \det(A+tI)=\sum_{k=0}^n \sigma_k t^{n-k}$

One can define the ${k}$-th Newton transformation by

$\displaystyle \det(A+tI)(A+tI)^{-1}=\sum_{k=0}^{n-1}T_k(A)t^{n-k-1}$

This means

$\displaystyle \det(A+tI)\,I=\sum_{k=0}^{n-1}T_k(A)(A+tI)t^{n-k-1}$

$\displaystyle =T_0 t^n+\sum_{k=0}^{n-2}(A\cdot T_k(A)+T_{k+1}(A))t^{n-k-1}+A\cdot T_{n-1}(A)$

By comparing coefficients of ${t}$, we get the relations of ${T_k}$:

$\displaystyle T_0=I,\quad A\cdot T_k(A)+T_{k+1}(A)=\sigma_{k+1}I,\ 0\leq k\leq n-2,\quad A\cdot T_{n-1}(A)=\sigma_nI$

Induction shows

$\displaystyle T_{k}(A)=\sigma_kI-\sigma_{k-1}A+\cdots+(-1)^kA^k$

For example

$\displaystyle T_1(A)=\sigma_1I-A$

$\displaystyle T_2(A)=\sigma_2I-\sigma_1A+A^2$

One important property of the Newton transformation is the following: if ${F(A)=\sigma_k(A)}$, then

$\displaystyle F^{ij}=\frac{\partial F}{\partial A_{ij}}=T_{k-1}^{ij}(A)$

This is because

$\displaystyle \frac{\partial }{\partial A_{ij}}\det(A+tI)=\det(A+tI)((A+tI)^{-1})_{ij}.$

If ${A\in \Gamma_k}$, then ${T_{k-1}(A)}$ is positive definite and therefore ${F}$ is elliptic.

Reference: Hu, Z., Li, H. and Simon, U. Schouten curvature functions on locally conformally flat Riemannian manifolds. Journal of Geometry 88(1-2) (2008), 75-100.
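As a quick sanity check of these relations, here is a small sympy script (an illustrative addition; the symmetric matrix below is an arbitrary example):

```python
# Verify T_0 = I, A T_k + T_{k+1} = sigma_{k+1} I, and A T_{n-1} = sigma_n I
# for a concrete symmetric matrix, using the closed formula for T_k.
import sympy as sp

n = 3
t = sp.symbols('t')
A = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 4]])

# sigma_k read off from det(A + t I) = sum_k sigma_k t^(n-k)
p = sp.Poly(sp.expand((A + t * sp.eye(n)).det()), t)
sigma = [p.coeff_monomial(t**(n - k)) for k in range(n + 1)]

# T_k = sigma_k I - sigma_{k-1} A + ... + (-1)^k A^k
T = [sum(((-1)**j * sigma[k - j] * A**j for j in range(k + 1)), sp.zeros(n, n))
     for k in range(n)]

assert T[0] == sp.eye(n)
for k in range(n - 1):
    assert A * T[k] + T[k + 1] == sigma[k + 1] * sp.eye(n)
assert A * T[n - 1] == sigma[n] * sp.eye(n)    # Cayley-Hamilton endpoint
print("Newton transformation relations verified; sigma =", sigma)
```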
### Some calculations of sigma_2

On a four-manifold ${(M^4,g_0)}$, we define the Schouten tensor

$\displaystyle A = Ric-\frac 16 Rg$

and the Einstein tensor and gravitational tensor

$\displaystyle E=Ric - \frac 14 Rg\quad S=-Ric+\frac{1}{2}Rg$

Suppose ${\sigma_2}$ is the elementary symmetric function

$\displaystyle \sigma_2(\lambda)=\sum_{i<j}\lambda_i\lambda_j$

Thinking of ${A}$ as a tensor of type ${(1,1)}$, ${\sigma_2(A)}$ is defined as ${\sigma_2}$ applied to the eigenvalues of ${A}$. Then

$\displaystyle \sigma_2(A)= \frac{1}{2}[(tr_g A)^2-\langle A, A\rangle_g] \ \ \ \ \ (1)$

Notice ${A=E+\frac{1}{12}Rg}$. An easy calculation reveals that

$\displaystyle \sigma_2(A)=-\frac{1}{2}|E|^2+\frac{1}{24}R^2 \ \ \ \ \ (2)$

Under the conformal change of metric ${g=e^{2w}g_0}$, we have

$\displaystyle R= e^{-2w}(R_0-6\Delta_0 w-6|\nabla_0 w|^2) \ \ \ \ \ (3)$

$\displaystyle A=A_0-2\nabla^2_0 w+2dw\otimes dw-|\nabla_0w|^2g_0 \ \ \ \ \ (4)$

$\displaystyle S=S_0+2\nabla_0^2w-2\Delta_0wg_0-2dw\otimes dw-|\nabla_0 w|^2g_0 \ \ \ \ \ (5)$

We want to solve the equation ${\sigma_2(A)=f>0}$, which is equivalent to solving

$\displaystyle \sigma_2(A_0-2\nabla^2_0w+2dw\otimes dw-|\nabla_0w|^2g_0)=f$

This is a fully nonlinear equation of Monge-Ampere type. In local coordinates, the above equation can be treated as

$\displaystyle F(\partial_i\partial_j w,\partial_kw,w,x)=f$

where ${F(p_{ij},v_k,s,x):\mathbb{R}^{n\times n}\times\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n\rightarrow \mathbb{R}}$. This equation is elliptic if the matrix ${\left(\frac{\partial F}{\partial p_{ij}}\right)}$ is positive definite. In order to find that matrix, we need the linearized operator

$\displaystyle L[\phi]=\frac{\partial F}{\partial p_{ij}}(\nabla_0^2\phi)_{ij}=\frac{d}{dt}|_{t=0}F(\partial_i\partial_j w+t\partial_i\partial_j\phi,\partial_kw,w,x) \ \ \ \ \ (6)$

Using the elementary identity

$\displaystyle \frac{d}{dt}\rvert_{t=0}\sigma_2(H+tG)=tr_gH\cdot tr_gG-\langle H, G\rangle_g \ \ \ \ \ (7)$

for any fixed matrices ${H}$ and ${G}$, now plug in ${H=A}$, the Schouten tensor, and ${G=-2\nabla_0^2\phi}$. One can calculate

$\displaystyle tr_g H\cdot tr_g G=\langle \frac{1}{3}Rg, G\rangle_g \ \ \ \ \ (8)$

Since ${\frac{1}{3}Rg-A=\frac{1}{2}Rg-Ric=S}$, we then get

$\displaystyle L[\phi]=\langle S,G\rangle_g=-2\langle S,\nabla^2_0\phi\rangle_g$
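Identity (2) is pure linear algebra, so it can be checked numerically in an orthonormal frame, where ${g_0}$ becomes the identity matrix. Here is a quick script (added for illustration; the random traceless symmetric matrix stands in for ${E}$):

```python
# Check sigma_2(A) = -|E|^2/2 + R^2/24 in dimension 4, with A = E + (R/12) I.
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
E = B - np.trace(B) / n * np.eye(n)      # traceless symmetric "Einstein tensor"
R = 1.7                                  # any value for the scalar curvature works
A = E + (R / 12) * np.eye(n)             # A = Ric - R/6 g = E + R/12 g

lhs = 0.5 * (np.trace(A)**2 - np.sum(A * A))   # formula (1) with g = identity
rhs = -0.5 * np.sum(E * E) + R**2 / 24         # formula (2)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```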
### f-extremal disk

In the last nonlinear analysis seminar, Professor Espinar talked about the overdetermined elliptic problem (OEP), which looks like the following:

$\Delta u+f(u)=0\quad\text{ in }\Omega$

$u>0\quad \text{ in }\Omega$

$u=0 \quad \text{on }\partial \Omega$

$\frac{\partial u}{\partial\eta}=cst\quad\text{on }\partial \Omega$

There is a BCN conjecture related to this.

BCN: If $f$ is Lipschitz and $\Omega\subset \mathbb{R}^n$ is a smooth (in fact, Lipschitz) connected domain with $\mathbb{R}^n\backslash\Omega$ connected, on which the OEP admits a bounded solution, then $\Omega$ must be either a ball, a half space, a generalized cylinder or the complement of one of them.

BCN is false for $n\geq 3$. Espinar with Mazet proved BCN when $n=2$; this implies the Schiffer conjecture in dimension 2. For the Schiffer conjecture in higher dimensions, if we know the domain is contained in one hemisphere of $\mathbb{S}^n$, then one can use the equator, a great circle, to perform the moving plane argument.

### Compensated compactness

Suppose $T$ is a vector field with $\nabla\cdot T = 0$, and $E= \nabla \psi$ where $\psi$ is a scalar function. We have the following theorem (Coifman-Lions-Meyer-Semmes).

Theorem: If $T\in L^2(\mathbb{R}^n)$ and $E\in L^2(\mathbb{R}^n)$, then $E\cdot T\in \mathcal{H}^1(\mathbb{R}^n)$, the Hardy space.

Given $f(x)\in L^1(\mathbb{R}^n)$, it has a harmonic extension to $\mathbb{R}^{n+1}_+=\{(x,t)|x\in\mathbb{R}^n, t>0\}$:

$\tilde{f}(x,t)=c_n\int_{\mathbb{R}^n}\frac{ f(x-y)t}{(t^2+|y|^2)^{\frac{n+1}{2}}}dy$

Definition: the non-tangential maximal function is

$N(f)(x)=\sup_{(\xi,t)\in \Gamma(x)}|\tilde f(\xi, t)|$

where $\Gamma(x)$ is a cone with vertex $x$. It is easy to prove that $N(f)(x)\leq c_n f^*(x)$, the Hardy-Littlewood maximal function. From this we can define the Hardy norm as

$||f||_{\mathcal{H}^1}=||f||_{L^1}+||N(f)||_{L^1}$

The Hardy space consists of all $f$ having finite Hardy norm. It is a well-known fact that the dual space of $\mathcal{H}^1$ is BMO, which is defined as follows. Suppose $f\in L^1_{loc}(\mathbb{R}^n)$; if for every cube $Q$,

$\sup_Q\frac{1}{|Q|}\int_Q|f-f_Q|<\infty,\quad \text{where }f_Q=\frac{1}{|Q|}\int_Qf,$

then $f\in BMO$. We have $L^\infty \subset BMO$, and $\log|x|\in BMO$ but not in $L^\infty$.

Let us see how to use the main theorem. Suppose on $\mathbb{R}^2$, $u$ is the solution of the following elliptic equation

$\displaystyle\frac{\partial}{\partial x_i}\left(a_{ij}(x)\frac{\partial u}{\partial x_j}\right)=\frac{\partial f}{\partial x_1}\frac{\partial g}{\partial x_2}-\frac{\partial f}{\partial x_2}\frac{\partial g}{\partial x_1}$

where $||\nabla f||_{L^2}<\infty$, $||\nabla g||_{L^2}<\infty$ and $(a_{ij})$ is uniformly elliptic. YanYan Li and Sagun Chanillo proved that the Green function of this elliptic operator belongs to BMO. The right hand side of this equation can be rewritten as $T\cdot E$, where

$T=\left(\frac{\partial f}{\partial x_2}, -\frac{\partial f}{\partial x_1}\right),\quad E=\left(\frac{\partial g}{\partial x_1},\frac{\partial g}{\partial x_2}\right);$

therefore the right hand side belongs to $\mathcal{H}^1$. Since

$u(x)=\int G_x(y)T\cdot E(y)dy,$

we get from the theorem stated at the beginning that

$||u||_\infty\leq C||\nabla f||_{L^2}||\nabla g||_{L^2}$

### Subcriticality and supercriticality

Consider the equation

$\displaystyle \Delta u=u^p\text{ on }\mathbb{R}^n$

Usually we call the equation subcritical when ${p<\frac{n+2}{n-2}}$ and supercritical when ${p>\frac{n+2}{n-2}}$. The reason comes from scaling the solution. Suppose ${u(x)}$ is a solution of the equation; then ${u^\lambda(x)=\lambda^{\frac{2}{p-1}}u(\lambda x)}$ is another solution.

Suppose the energy possessed by ${u}$ around any point ${x_0}$ at radius ${\lambda}$ is bounded:

$\displaystyle \int_{B_{\lambda}(x_0)}|\nabla u(x)|^2dx\leq E$

When ${\lambda\rightarrow 0}$, we rescale ${B_\lambda(x_0)}$ to ${B_1(x_0)}$; then ${u}$ becomes ${u^\lambda}$, which lives on ${B_1(x_0)}$, and its energy is

$\displaystyle \int_{B_{1}(x_0)}|\nabla u^\lambda(x)|^2dx=\lambda^{\frac{4}{p-1}+2-n}\int_{B_{\lambda}(x_0)}|\nabla u(y)|^2dy$

If ${\delta=\frac{4}{p-1}+2-n<0}$, which happens exactly when ${p> \frac{n+2}{n-2}}$, the energy bound for ${u^\lambda}$ becomes ${\lambda^\delta E}$. Since ${\lambda\rightarrow 0}$, the bound deteriorates by 'zooming in'. In this case, we call the equation supercritical; solutions look more singular in this regime.

Remark: The energy should also include ${\int_{B_{\lambda}(x_0)}u^2dx}$, but this term scales differently from ${\int_{B_{\lambda}(x_0)}|\nabla u(x)|^2dx}$ and cannot by itself give the critical exponent exactly.
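One can check directly that ${u^\lambda}$ solves the same equation:

$\displaystyle \Delta u^\lambda(x)=\lambda^{\frac{2}{p-1}+2}(\Delta u)(\lambda x)=\lambda^{\frac{2}{p-1}+2}u(\lambda x)^p=\lambda^{\frac{2}{p-1}+2-\frac{2p}{p-1}}\left(u^\lambda(x)\right)^p=\left(u^\lambda(x)\right)^p$

since ${\frac{2}{p-1}+2-\frac{2p}{p-1}=2-\frac{2(p-1)}{p-1}=0}$.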
### Appropriate scaling in the Yamabe equation

Suppose ${(M,g)}$ is a Riemannian manifold, and ${L_g=\Delta_g -\frac{n-2}{4(n-1)}R_g}$ is the conformal Laplacian. Assume ${u>0}$ satisfies

$\displaystyle L_gu+Ku^p=0$

where ${K}$ is some fixed constant and ${1<p\leq\frac{n+2}{n-2}}$. Suppose near a point ${x_0\in M}$ there are coordinates ${x^1,x^2,\cdots, x^n}$. We want to scale the coordinates by ${x^i=\lambda y^i}$:

$\displaystyle g(x)=g_{ij}(x)dx^idx^j=\lambda^2 g_{ij}(\lambda y)dy^idy^j=\lambda^2 \hat{g}(y)$

By the conformal invariance of ${L}$, for any ${\phi}$ we get

$\displaystyle L_{g}(\lambda^{-\frac{n-2}{2}}\phi)=\lambda^{-\frac{n+2}{2}}L_{\hat{g}}(\phi)$

We want to choose ${\phi(y)=\lambda^{\alpha}u(\lambda y)}$ such that

$\displaystyle L_{\hat{g}}(\phi)+K\phi^p=0$

which means

$\displaystyle L_{\hat{g}}(\lambda^{\alpha}u(\lambda y))=\lambda^{\frac{n+2}{2}}L_g (\lambda^{\alpha-\frac{n-2}{2}}u(\lambda y))=-K\lambda^{\alpha+2} (u(\lambda y))^p$

Setting

$\displaystyle \alpha+2=\alpha p$

we get ${\alpha=\frac{2}{p-1}}$.

The above derivation may not be quite right; alternatively, we can look at it more directly:

$\displaystyle L_{\hat{g}}(u(\lambda y))=\lambda^2 (L_g u)(x)=-\lambda^2 K(u(\lambda y))^p$

then

$\displaystyle L_{\hat{g}}(\lambda^\alpha u(\lambda y))=-K(\lambda^\alpha u(\lambda y))^p$

with ${\alpha=\frac{2}{p-1}}$.

### Harnack inequality under scaling

Thm: Suppose ${\Omega}$ is a domain in ${\mathbb{R}^n}$, and ${u}$ is a harmonic function in ${\Omega}$ with ${u\geq 0}$. For any subdomain ${\Omega'\subset\subset \Omega}$, there exists a constant ${C}$ such that

$\displaystyle \sup_{\Omega'}u\leq C(n,\Omega,\Omega')\inf_{\Omega'}u$

To prove this Harnack inequality, there is an intermediate step

$\displaystyle \sup_{B_R}u\leq 3^n\inf_{B_R}u$

whenever ${B_{4R}\subset\Omega}$. Here the constant is independent of ${R}$; the particular value ${3^n}$ is not so important.

We can use the above Thm to give another proof. Suppose ${v(x)=u(Rx)}$ for ${x\in B_2}$. Let ${\Omega=B_2}$ and ${\Omega'=B_1}$; applying the theorem,

$\displaystyle \sup_{B_1}v\leq C(n)\inf_{B_1}v$

Converting back to ${u}$,

$\displaystyle \sup_{B_R}u\leq C(n)\inf_{B_R}u$
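Spelling out why the rescaling is legitimate: if ${\Delta u=0}$ on ${B_{2R}}$ and ${v(x)=u(Rx)}$, then

$\displaystyle \Delta v(x)=R^2(\Delta u)(Rx)=0$

so ${v\geq 0}$ is harmonic on ${B_2}$ and the theorem applies with ${\Omega=B_2}$, ${\Omega'=B_1}$; moreover ${\sup_{B_1}v=\sup_{B_R}u}$ and ${\inf_{B_1}v=\inf_{B_R}u}$.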
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 269, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969982504844666, "perplexity": 944.3242364473734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823478.54/warc/CC-MAIN-20171019212946-20171019232946-00596.warc.gz"}
https://hotinfonow.com/physicists-have-just-discovered-a-very-strange-particle-that-is-not-a-particle-at-all/
# Physicists have just discovered a very strange particle that is not a particle at all

Sounds like the beginning of a very bad physics riddle: I'm a particle that isn't really one; I disappear before I can be detected, but I can still be seen. I stretch your understanding of physics, but I don't overturn it. Who am I?

This is the odderon, a particle that is even stranger than its name suggests, and it may have been discovered at the Large Hadron Collider, the world's most powerful atom smasher, which slams particles together at close to the speed of light in a 17-mile (27 km) ring near Geneva, Switzerland.

First, the odderon is not really a particle. What we think of as particles are usually very stable: electrons, protons, quarks, neutrinos, and so on. You can hold them in your hand and carry them with you. Heck, your hand is literally made of them. And your hand does not disappear any time soon, so we can safely assume that its constituent particles are long-lived.

There are other particles that do not last long but are still called particles. Despite their short lives, they remain particles. They are free, independent and capable of existing on their own, apart from any interaction – these are the hallmarks of a real particle.

And then there is the so-called quasiparticle, which goes one step further and is not quite a particle at all. Quasiparticles are not true particles, but they are not fiction either. They're just … complicated.

As in, literally complicated. In particular, particle interactions at ultra-high energies are complicated. When two protons collide at nearly the speed of light, it's not like two billiard balls bouncing off each other. It's more like two blobs of jelly smashing into one another, spilling out their guts and rearranging everything before springing back apart.

In all this complicated confusion, strange patterns sometimes appear. Tiny particles pop in and out of existence in the blink of an eye, only to be followed by another fleeting particle – and another. Sometimes these flashes of particles appear in a particular sequence or pattern. Sometimes there aren't even flashes of particles, just vibrations in the soup of the collision debris – vibrations that suggest the presence of a transient particle.

Physicists here face a mathematical dilemma. They can either try to fully describe this complicated mess, or they can pretend – purely for convenience – that these patterns are "particles" in their own right, but with strange properties such as negative masses and spins that change over time.

Physicists choose the latter option, and thus a quasiparticle is born. Quasiparticles are short-lived, effervescent patterns or waves of energy that appear in the midst of high-energy particle collisions. But since so much work is needed to fully describe the situation mathematically, physicists take a shortcut and pretend – purely to make the math manageable – that these patterns are particles of their own.

So, quasiparticles are treated as particles, although they are definitely not. It's like pretending your uncle's jokes are actually funny. They are quasi-funny, for convenience only.
A particular type of quasiparticle is called the odderon, which was first proposed to exist in the 1970s. It is believed to appear when an odd number of gluons – the force-carrying particles that bind quarks together – briefly wink in and out of existence during collisions between protons, or between protons and antiprotons.

If odderons are present in such collisions, there will be a small difference in the cross section (physics jargon for how readily one particle scatters off another) between particle–particle collisions and particle–antiparticle collisions.

So, if we smash a bunch of protons together, for example, we can calculate a cross section for that interaction. Then we can repeat the exercise for proton–antiproton collisions. In a world without odderons, these two cross sections must be identical. But odderons change the picture – these short-lived patterns arise more readily in particle–particle collisions than in particle–antiparticle collisions, which slightly modifies the cross sections. The effect is tiny, so you will need many events, or collisions, before you can claim a discovery.

Now, if only we had a giant particle accelerator that regularly smashes these particles together, and did so at such high energies and so often that we could get reliable statistics. Oh, right: the Large Hadron Collider.

In a recent article posted March 26 on the arXiv preprint server, the TOTEM collaboration (in the lively acronym jargon of high-energy physics, TOTEM stands for "TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC") reported significant differences between the cross sections of protons striking other protons and those of protons striking antiprotons, and the best way to explain the difference is to revive this decades-old idea of the odderon. There could be other explanations (new physics, or other forms of exotic particles), but the odderon, however strange it seems, appears to be the best candidate.

So TOTEM has discovered something new and fun about the universe, for sure. But did TOTEM find a brand-new particle? No, because odderons are quasiparticles, not particles. The result still helps us push the boundaries of known physics, for sure. But does it break known physics? No, because odderons were already expected to exist within our present understanding.

Does all this still seem a little strange to you? You can learn more from the author's podcasts Ask a Spaceman and Space Radio, and his book Your Place in the Universe.

Originally posted on Live Science.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116864562034607, "perplexity": 1050.2264516861217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573439.64/warc/CC-MAIN-20190919040032-20190919062032-00449.warc.gz"}
http://math.stackexchange.com/questions/245165/the-graph-of-a-smooth-real-function-is-a-submanifold?answertab=oldest
# The graph of a smooth real function is a submanifold

Given a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ which is smooth, show that $$\operatorname{graph}(f) = \{(x,f(x)) \in \mathbb{R}^{n+m} : x \in \mathbb{R}^n\}$$ is a smooth submanifold of $\mathbb{R}^{n+m}$.

I'm honestly completely unsure of where or how to begin this problem. I am interested in definitions and perhaps hints that can lead me in the right direction.

-

This is an application of the Implicit Function Theorem. You have a function $f : \mathbb{R}^n \to \mathbb{R}^m$ and you construct the graph given by: $\{ (x,y) \in \mathbb{R}^n \times \mathbb{R}^m : y = f(x) \}.$ Let me define a new map, say, $G : \mathbb{R}^{n+m} \to \mathbb{R}^m$ given by $G : (x,y) \mapsto y-f(x).$ I have defined this the way I have so that the graph of $f$ is the zero-level set of $G$, i.e. the graph of $f$ is the set of $(x,y) \in \mathbb{R}^n \times \mathbb{R}^m$ such that $G(x,y) = 0.$ In brutal detail this map is really: $$G : (x_1,\ldots,x_n,y_1,\ldots,y_m) \mapsto (y_1-f_1(x_1,\ldots,x_n),\ldots,y_m-f_m(x_1,\ldots,x_n)) \, .$$

We need to calculate the Jacobian matrix of $G$. A quick calculation will show you that: $$J_G = \left[\begin{array}{c|c} -J_f & I_m \end{array}\right] ,$$ where $J_f$ is the $m \times n$ Jacobian matrix of $f$ and $I_m$ is the $m \times m$ identity matrix. The matrix $J_G$ is an $m \times (m+n)$ matrix.

To be able to apply the IFT, we need to show that $0$ is a regular value of $G$. (After all, the graph of $f$ is $G^{-1}(0).$) We can do this by showing that none of the critical points get sent to 0 by $G$. Notice that $G$ has no critical points because $J_G$ always has maximal rank, i.e. $m$. This is clearly true since the identity matrix $I_m$ has rank $m$. It follows that the graph of $f$ is a smooth, parametrisable $(n+m)-m=n$ dimensional manifold in a neighbourhood of each of its points.

-

+1 This is surely a better explanation than mine! – yo' Nov 26 '12 at 19:53

I'm having trouble understanding your definition of G. There are multiple functions $f_1,\ldots , f_m$ because the values $y_1,\ldots,y_m$ are set equal to them; if that is the case, wouldn't each of the coordinates of the range be equal to 0? – Ezea Nov 26 '12 at 19:55

The function $f$ goes from $\mathbb{R}^n$ to $\mathbb{R}^m$, i.e. you give it $n$ numbers and it gives you $m$ numbers back. The numbers you give it are $x_1,\ldots,x_n$ while the numbers it gives you are $f_1,\ldots,f_m.$ In longhand: $$f(x) = f(x_1,\ldots,x_n) = (f_1(x_1,\ldots,x_n), \ldots, f_m(x_1,\ldots,x_n)).$$ Does this help? – Fly by Night Nov 26 '12 at 20:12

Then the graph of $f$ is given by the equation $G(x,y)=0$. As you said: the coordinates of the range of the graph will all be zero. This is what the IFT does: it takes a system of equations and tells you if you get a smooth, parametrisable manifold as the solution space. The trick is to find a system of equations whose solution gives you what you're interested in. – Fly by Night Nov 26 '12 at 20:23

Ok, that means my real problems lie in my understanding of IFT. That helps a considerable amount now. Thank you! – Ezea Nov 26 '12 at 20:45

The map $\mathbb R^n\to \mathbb R^{n+m}$ given by $t\mapsto (t, f(t))$ has the Jacobi matrix $\begin{pmatrix}I_n\\f'(t)\end{pmatrix}$, which has full rank $n$ for all $t$ (because of the identity submatrix). This means that its image is a manifold. Is there anything unclear about it?

How is this a proof that it is a manifold?
A manifold of dimension $n$ is a set $X$ such that for each $x\in X$ there exists a neighborhood $H_x\subset X$ such that $H_x$ is isomorphic to an open subset of $\mathbb R^n$. In this case, the whole $X=\operatorname{graph}(f)$ is isomorphic to $\mathbb R^n$. The definition of a manifold varies; often the isomorphism is required to be a diffeomorphism, which is true here as well.

Think of it this way: a manifold $X$ of dimension $2$ is something in which, wherever someone makes a dot with a pen, I can cut out a piece of $X$ and say to this person: "See, my piece is almost like a piece of paper, it's just a bit curvy."

The definition of a manifold might seem strange here because you can take the neighborhood to be the whole of $X$. This is not always the case: a sphere is a manifold as well, but a whole sphere is not isomorphic to $\mathbb R^2$; you have to take only some cut-out of it.

-

Let me try to reiterate. This matrix comes from the derivative of my graph function, where $I_n$ is the derivative of $t$ and $f'(t)$ is the derivative of $f(t)$. That I believe I understand. I'm not sure how rank $n$ means that the image is a manifold, or even exactly what it is to be a submanifold. Could you please elaborate a little on that? – Ezea Nov 26 '12 at 19:38

It's been long since I had Calculus, so I'm not sure I'll give the exact explanation, but I'll try. – yo' Nov 26 '12 at 19:40

Your further explanation as well as the other answer were very helpful. +1 Thank you. – Ezea Nov 26 '12 at 20:05
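To make the rank argument concrete, here is a small symbolic check (an illustration added alongside the thread; the particular $f$ is an arbitrary smooth map):

```python
# Form G(x, y) = y - f(x) for a sample smooth f: R^2 -> R^2 and confirm that
# its Jacobian [-J_f | I_m] has the identity block, hence full rank m, everywhere.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f = sp.Matrix([sp.sin(x1) * x2, x1**2 + x2])   # any smooth f works here
G = sp.Matrix([y1, y2]) - f

J = G.jacobian([x1, x2, y1, y2])               # 2 x 4 matrix [-J_f | I_2]
print(J)
assert J[:, 2:] == sp.eye(2)                   # identity block => rank(J) = m = 2
```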
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925631046295166, "perplexity": 150.3824986258491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651727.46/warc/CC-MAIN-20150417045731-00195-ip-10-235-10-82.ec2.internal.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=65.35&jrnl=one&onejrnl=mcom
# American Mathematical Society

AMS eContent Search Results

Matches for: msc=(65.35) AND publication=(mcom)
Sort order: Date
Format: Standard display
Results: 1 to 30 of 52 found      Go to page: 1 2

[1] Robert K. Brayton, Fred G. Gustavson and Ralph A. Willoughby. Some results on sparse matrices. Math. Comp. 24 (1970) 937-954. MR 0275643.
[2] David M. Young. Convergence properties of the symmetric and unsymmetric successive overrelaxation methods and related methods. Math. Comp. 24 (1970) 793-807. MR 0281331.
[3] Ake Björck and Victor Pereyra. Solution of Vandermonde systems of equations. Math. Comp. 24 (1970) 893-903. MR 0290541.
[4] D. Kershaw. Inequalities on the elements of the inverse of a certain tridiagonal matrix. Math. Comp. 24 (1970) 155-158. MR 0258260.
[5] P. Schlegel. The explicit inverse of a tridiagonal matrix. Math. Comp. 24 (1970) 665. MR 0273798.
[6] Robert J. Herbold. A generalization of a class of test matrices. Math. Comp. 23 (1969) 823-826. MR 0258259.
[7] Richard J. Hanson and Charles L. Lawson. Extensions and applications of the Householder algorithm for solving linear least squares problems. Math. Comp. 23 (1969) 787-812. MR 0258258.
[8] Jerry A. Walters. Nonnegative matrix equations having positive solutions. Math. Comp. 23 (1969) 827. MR 0258264.
[9] P. A. Businger. Reducing a matrix to Hessenberg form. Math. Comp. 23 (1969) 819-821. MR 0258255.
[10] Victor Lovass-Nagy and David L. Powers. Reduction of functions of some partitioned matrices. Math. Comp. 23 (1969) 127-133. MR 0238480.
[11] Peter A. Businger. Extremal properties of balanced tri-diagonal matrices. Math. Comp. 23 (1969) 193-195. MR 0238476.
[12] C. H. Yang. On designs of maximal $(+1,\,-1)$-matrices of order $n\equiv 2({\rm mod}\ 4)$. II. Math. Comp. 23 (1969) 201-205. MR 0239748.
[13] C. W. Gear. A simple set of test matrices for eigenvalue programs. Math. Comp. 23 (1969) 119-125. MR 0238477.
[14] D. Kershaw. The explicit inverses of two commonly occurring matrices. Math. Comp. 23 (1969) 189-191. MR 0238478.
[15] Beresford Parlett. Global convergence of the basic ${\rm QR}$ algorithm on Hessenberg matrices. Math. Comp. 22 (1968) 803-817. MR 0247759.
[16] Harold Willis Milnes. A note concerning the properties of a certain class of test matrices. Math. Comp. 22 (1968) 827-832. MR 0239743.
[17] T. L. Markham. An iterative procedure for computing the maximal root of a positive matrix. Math. Comp. 22 (1968) 869-871. MR 0239741.
[18] Choong Yun Cho. On the triangular decomposition of Cauchy matrices. Math. Comp. 22 (1968) 819-825. MR 0239740.
[19] G. Dahlquist, B. Sjöberg and P. Svensson. Comparison of the method of averages with the method of least squares. Math. Comp. 22 (1968) 833-845. MR 0239742.
[20] L. A. Hageman and R. B. Kellogg. Estimating optimum overrelaxation parameters. Math. Comp. 22 (1968) 60-68. MR 0229371.
[21] C. H. Yang. On designs of maximal $(+1,\,-1)$-matrices of order $n\equiv 2({\rm mod}\ 4)$. Math. Comp. 22 (1968) 174-180. MR 0225476.
[22] T. L. Jordan. Experiments on error growth associated with some linear least-squares procedures. Math. Comp. 22 (1968) 579-588. MR 0229373.
[23] Erwin H. Bareiss. Sylvester's identity and multistep integer-preserving Gaussian elimination. Math. Comp. 22 (1968) 565-578. MR 0226829.
[24] Gilbert C. Best. Powers of a matrix of special type. Math. Comp. 22 (1968) 667-668. MR 0226830.
[25] S. Charmonman and R. S. Julius. Explicit inverses and condition numbers of certain circulants. Math. Comp. 22 (1968) 428-430. MR 0226831.
[26] Henry E. Fettis and James C. Caslin. Eigenvalues and eigenvectors of Hilbert matrices of order $3$ through $10$. Math. Comp. 21 (1967) 431-441. MR 0223075.
[27] F. D. Burgoyne. Practical $L^p$ polynomial approximation. Math. Comp. 21 (1967) 113-115. MR 0224254.
[28] J. Schönheim. Conversion of modular numbers to their mixed radix representation by a matrix formula. Math. Comp. 21 (1967) 253-257. MR 0224252.
[29] Leopold B. Willner. An elimination method for computing the generalized inverse. Math. Comp. 21 (1967) 227-229. MR 0223082.
[30] I. Borosh and A. S. Fraenkel. Exact solutions of linear equations with rational coefficients by congruence techniques. Math. Comp. 20 (1966) 107-112. MR 0187379.

Results: 1 to 30 of 52 found      Go to page: 1 2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750871658325195, "perplexity": 1841.5510262790046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00297.warc.gz"}
https://hal.archives-ouvertes.fr/hal-01792508
The Infinitesimal Moduli Space of Heterotic G$_{2}$ Systems

Abstract: Heterotic string compactifications on integrable G$_{2}$ structure manifolds Y with instanton bundles ${(V,A), (TY,\tilde{\theta})}$ yield supersymmetric three-dimensional vacua that are of interest in physics. In this paper, we define a covariant exterior derivative ${\mathcal{D}}$ and show that it is equivalent to a heterotic G$_{2}$ system encoding the geometry of the heterotic string compactifications. This operator ${\mathcal{D}}$ acts on a bundle ${\mathcal{Q}=T^*Y \oplus {\rm End}(V) \oplus {\rm End}(TY)}$ and satisfies a nilpotency condition ${\check{{\mathcal{D}}}^2=0}$, for an appropriate projection of ${\mathcal D}$. Furthermore, we determine the infinitesimal moduli space of these systems and show that it corresponds to the finite-dimensional cohomology group ${\check H^1_{\check{{\mathcal{D}}}}(\mathcal{Q})}$. We comment on the similarities and differences of our result with Atiyah's well-known analysis of deformations of holomorphic vector bundles over complex manifolds. Our analysis leads to results that are of relevance to all orders in the ${\alpha'}$ expansion.

Document type: Journal article.

Submitted on: Tuesday, May 15, 2018 - 15:13:33. Last modified: Wednesday, January 16, 2019 - 10:21:52. Contributor: Inspire Hep.

Citation: Xenia De La Ossa, Magdalena Larfors, Eirik E. Svanes. The Infinitesimal Moduli Space of Heterotic G$_{2}$ Systems. Commun. Math. Phys. 360 (2), 2018, pp. 727-775. doi: 10.1007/s00220-017-3013-8. hal-01792508.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315876364707947, "perplexity": 1492.6728832519236}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00019.warc.gz"}
https://mathhelpboards.com/threads/locus-in-the-complex-plane.811/
# Locus in the complex plane.

#### jacks

##### Well-known member

Apr 5, 2012 226

Area of the region bounded by the locus of $z$ which satisfies the equation $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}$$ is

#### Mr Fantastic

##### Member

Jan 26, 2012 66

Area of the region bounded by the locus of $z$ which satisfies the equation $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}$$ is

What have you tried?

#### Mr Fantastic

##### Member

Jan 26, 2012 66

Area of the region bounded by the locus of $z$ which satisfies the equation $$\displaystyle \arg \left(\frac{z+5i}{z-5i}\right) = \pm \frac{\pi}{4}$$ is

You can take a geometric approach. Your relation can be written $$\arg(z + 5i) - \arg(z - 5i) = \pm \frac{\pi}{4}$$, that is, $$\alpha - \beta =\pm \frac{\pi}{4}$$. Consider the line segment joining z = 5i and z = -5i as the chord of a circle, and consider the rays $$\arg(z + 5i) = \alpha$$ and $$\arg(z - 5i) = \beta$$ subject to the restriction $$\alpha - \beta =\pm \frac{\pi}{4}$$. Consider the intersection of these rays and the angle between them at their intersection point. The angle is constant .... Now think of a circle theorem involving angles subtended by the same arc at the circumference ..... It's not hard to see that you have a circle with 'holes' at z = 5i and z = -5i (why?). Now your job is to determine the radius of this circle and use it to get the area.

Last edited:
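For anyone checking their final answer, here is one way to finish from the hint (a computation added here, not part of the original thread): the chord joining $z=5i$ and $z=-5i$ has length 10, and an inscribed angle of $\frac{\pi}{4}$ corresponds to a central angle of $\frac{\pi}{2}$, so

$$10 = 2r\sin\frac{\pi}{4} \implies r = 5\sqrt{2}$$

An inscribed angle smaller than $\frac{\pi}{2}$ sits on the major arc, so the region bounded by the two symmetric arcs has area

$$2\left[\pi r^2-\frac{1}{2}r^2\left(\frac{\pi}{2}-\sin\frac{\pi}{2}\right)\right]=2\left[50\pi-\left(\frac{25\pi}{2}-25\right)\right]=75\pi+50$$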
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834454655647278, "perplexity": 379.9511440435738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00447.warc.gz"}
https://www.physicsforums.com/threads/potential-energy-for-magnetic-fields.269770/
# Potential energy for magnetic fields 1. Nov 6, 2008 ### jaejoon89 1. The problem statement, all variables and given/known data A circular 10 turn coil that has a radius of 0.05 m and current of 5A lies in the xy plane with a uniform magnetic field B = 0.05 T i + 0.12 T k (i and k are the unit vectors). What's the potential energy for the system??? 2. Relevant equations U = -m*B where m is the dipole moment = I*A 3. The attempt at a solution B = sqrt((0.05 T)^2 + (0.12 T)^2) = 0.13 T So for this I would get U = -0.00511 J, but the answer key says -0.000472 J... where's the mistake??? 2. Nov 6, 2008 ### jaejoon89 I'm assuming from the answer that the magnetic moment must not be aligned with the field, but how do you know this given the problem? And how do you calculate this? 3. Oct 5, 2010 ### Olfus Well, we have to keep in mind that -m*B is actually a "dot product". ;)
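Carrying out the dot product that the last reply points to (a worked check added here, not part of the original thread): the coil lies in the xy plane, so $\mathbf{m}=NIA\,\mathbf{k}$ and only the k-component of $\mathbf{B}$ contributes:

$$U=-\mathbf{m}\cdot\mathbf{B}=-NIAB_z=-(10)(5\,\mathrm{A})\,\pi(0.05\,\mathrm{m})^2\,(0.12\,\mathrm{T})\approx -0.047\ \mathrm{J}$$

Note that the quoted answer key value of $-0.000472\ \mathrm{J}$ is exactly $1/100$ of this, which is what the same formula gives with $r=0.005\ \mathrm{m}$, so a slip in the radius or its units seems likely somewhere in the problem or the key.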
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443336129188538, "perplexity": 943.8873539323033}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660242.52/warc/CC-MAIN-20160924173740-00282-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/actions-and-cross-sections.819049/
Actions and Cross Sections

1. Jun 14, 2015

Cluelessluke

Can someone point me towards how to derive that the cross section is proportional to the imaginary part of the action? Also, I thought the action was a real number? Thanks!

2. Jun 14, 2015

fzero

You are probably referring to the Optical Theorem. In that case, $S$ is not the action but the scattering matrix (S-matrix), which is basically $S = e^{iHt}$. An explanation of the scattering matrix and Optical Theorem can be found in http://www.itp.phys.ethz.ch/research/qftstrings/archive/12HSQFT1/Chapter10.pdf [Broken].

Last edited by a moderator: May 7, 2017

3. Jun 14, 2015

Cluelessluke

Thanks for the reply! To be more specific, I'm referring to equation (14) in http://arxiv.org/pdf/1206.5311v2.pdf. They have an e^{-2Im(S)} contribution in their cross section (where I believe S is the action, not the S-matrix) and I'm having a hard time seeing where it comes from.

Last edited: Jun 14, 2015

4. Jun 14, 2015

fzero

Their equation (3) expresses the cross-section in terms of the S-matrix, and they credit reference [7] with a calculation in the path-integral formalism that introduces the action. It is natural in the path-integral formulation that the action would appear. Afterwards, they suggest that the expression is dominated by a saddle point in a certain limit that takes $g\rightarrow 0$. This saddle-point approximation is closely related to the WKB approximation that should be familiar from ordinary QM. What is happening is that, in this limit, the classical paths (critical points of the action) dominate the path integral, so the path integral expression can be approximated by their result $\exp W$.

As to why the action can be complex, I would suggest looking at their references for the details that they're clearly leaving out. There is some discussion of working in the Euclidean formalism, but I can't follow them well enough to give a concrete explanation.

You should try to understand the details of their arguments (perhaps some of their references might give further details), but you should know that the fact that they can express the cross section in terms of the imaginary part of the action is not a general rule. The Optical Theorem is general, but the expression from this paper relies on this physical problem having the right properties for the saddle-point approximation to work. There are many examples of physics problems where the saddle-point approximation is useful, so it's worth learning why it works here. However, the statement you present in your OP is most definitely not true in general.

5. Jun 14, 2015

Cluelessluke

Great! Thanks so much for your help!
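For reference, the general statement alluded to above: in standard quantum-mechanical scattering normalization, the Optical Theorem reads

$$\sigma_{tot}=\frac{4\pi}{k}\,\mathrm{Im}\,f(\theta=0)$$

relating the total cross section to the imaginary part of the forward scattering amplitude. This is the model-independent statement; the $e^{-2\,\mathrm{Im}(S)}$ factor in the paper is instead specific to the saddle-point approximation discussed in post #4.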
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813278675079346, "perplexity": 319.7926044534537}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589270.3/warc/CC-MAIN-20180716115452-20180716135452-00253.warc.gz"}
http://www.maths.ed.ac.uk/node/348
# Finn Lindgren

Stochastic space-time models, non-trivial observation mechanisms, and practical inference

Many natural phenomena can be modelled hierarchically, with latent random fields taking the role of unknown quantities for which we may have some idea about smoothness properties and multiscale behaviour. The mechanisms generating observed data can have additional unknown properties, such as animal detection probabilities depending on the size of a group of dolphins, or temperature biases depending on the local terrain near a weather station. Combining these models and mechanisms often leads to non-Gaussian likelihoods or Bayesian posteriors, requiring careful thought to construct computationally efficient practical inference methods. I will discuss these issues in the context of some recent and ongoing work on spatially resolved animal abundance estimation and historical climate reconstruction.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482073545455933, "perplexity": 1542.4155965845036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806569.66/warc/CC-MAIN-20171122103526-20171122123526-00475.warc.gz"}
https://physics.stackexchange.com/questions/435146/energy-of-an-object
Energy of an object

When an object goes up we say that it gained potential energy, but it is doing positive work on the earth, so it should lose energy. Please correct me.

• Who's doing the work? – user191954 Oct 17 '18 at 16:02
• The object which is going up is doing the work. – user64348 Oct 18 '18 at 8:27
• @user64348 Don't you mean something or someone is doing work against gravity to raise the object? – Bob D Oct 21 '18 at 22:40

When an object goes up we say that it gained potential energy

Yes.

but it is doing positive work on the earth

Make sure you get the signs right. The work done by gravity on the rising object is negative, $-mgh$, the opposite of the potential energy gained. The positive work you do in lifting is offset by gravity's negative work, so the net work on the object, and hence its change in kinetic energy, is zero before, during and after the move.

When an object goes up in the earth's gravitational field it loses KE and gains PE. The work done on the object by gravity is negative because the force is in the opposite direction of the motion. By Newton's third law there is an equal and opposite force on the earth. However, because the earth does not move, no work is done on the earth.

When an object of mass $$m$$ goes up near the surface of the earth, something or somebody else is doing work on the object to raise it. The object itself is not doing work on the earth and it is not losing energy. The something or somebody doing the work on the object against gravity to raise it is losing energy, but it is transferring that energy to the object in the form of increased gravitational potential energy. There is no need for there to be a net increase or decrease in kinetic energy. The key is the object needs to begin at rest and end at rest.

In order to accomplish this, we initially need to apply an external upward force, $$F_{ext}$$, slightly greater than $$mg$$, to give it a small upward acceleration $$a$$. Let's say we do this for a brief time $$dt$$ and therefore over a short distance $$dh$$. The mass thereby attains a small velocity $$v=adt$$, a small increase in kinetic energy of $$½ m(adt)^2$$ and a small increase in the potential energy of the mass m of $$(mg) dh$$.

We now immediately reduce our upward force so that it equals the downward gravitational force. The mass is now rising at constant velocity v, so there is no subsequent change in KE; however, its potential energy keeps increasing since the mass continues to rise. The work to accomplish this increase in potential energy is due to our constant application of an upward force equal to the force of gravity.

As we approach point 2 we are still left with the small kinetic energy. Therefore, prior to reaching point 2, we reduce the external upward force, $$F_{ext}$$, to slightly less than $$mg$$, to give it a small negative acceleration. We do this for sufficient time to bring the mass to rest at point 2, a height $$h$$ above point 1. During this period gravity now does a small amount of negative work, resulting in a change in kinetic energy of $$-½ m(adt)^2$$. Consequently the total change in kinetic energy going from point 1 to point 2 is zero. Since there is no loss in height during this period, there is no loss in potential energy.

The end result in going from 1 to 2 is an increase in gravitational potential energy of $$\int_1^2 (mg)dh = (mg)h$$ with no net change in kinetic energy. This increase in potential energy comes, of course, from the external agent that did work on the object.

Hope this helps.
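A compact way to summarize the bookkeeping in the answers above (the object starts and ends at rest):

$$W_{ext}+W_{grav}=\Delta KE=0 \quad\Longrightarrow\quad W_{ext}=-W_{grav}=-(-mgh)=mgh=\Delta U$$

The object's kinetic energy is unchanged; the external agent's positive work ends up stored as gravitational potential energy of the earth–object system.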
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205509424209595, "perplexity": 174.87226986121178}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573173.68/warc/CC-MAIN-20190918003832-20190918025832-00502.warc.gz"}
http://commens.org/dictionary/entry/quote-syllabus-syllabus-course-lectures-lowell-institute-beginning-1903-nov-23-s-33
# The Commens DictionaryQuote from ‘Syllabus: Syllabus of a course of Lectures at the Lowell Institute beginning 1903, Nov. 23. On Some Topics of Logic’ Quote: Separation of Secondness, or Secundal Separation, called Precission, consists in supposing a state of things in which one element is present without the other, the one being logically possible without the other. Thus, we cannot imagine a sensuous quality without some degree of vividness. But we usually suppose that redness, as it is in red things, has no vividness; and it would certainly be impossible to demonstrate that everything red must have a degree of vividness. Date: 1903 References: EP 2:270 Citation: ‘Prescission’ (pub. 18.07.15-13:02). Quote in M. Bergman & S. Paavola (Eds.), The Commens Dictionary: Peirce's Terms in His Own Words. New Edition. Retrieved from http://www.commens.org/dictionary/entry/quote-syllabus-syllabus-course-lectures-lowell-institute-beginning-1903-nov-23-s-33. Posted: Jul 18, 2015, 13:02 by Mats Bergman Last revised: Jul 18, 2015, 17:56 by Mats Bergman
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016438484191895, "perplexity": 4872.69885743181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886802.13/warc/CC-MAIN-20200704232817-20200705022817-00207.warc.gz"}
https://math.stackexchange.com/questions/3235077/find-the-coefficient-of-the-power-series-x31-x-11-2x6
# Find the coefficient of the power series $[x^3](1-x)^{-1}(1-2x)^6$ I need to find $$[x^3](1-x)^{-1}(1-2x)^6$$, where $$[x^3]$$ means the coefficent of the $$[x^3]$$ term. here's what I've done: $$[x^3](1-x)^{-1}(1-2x)^6=[x^3](\sum_{k=0}^6 {6\choose k}(-2x)^k)(\sum_{m=0}^\infty {m\choose 0}x^m)$$ $$= \sum_{k=0}^6 {6\choose k}(-2)^k[x^{3-k} ](\sum_{m=0}^\infty {m\choose 0}x^m)$$ $$= \sum_{k=0}^3 {6\choose k}(-2)^k[x^{3-k} ](\sum_{m=0}^\infty {m\choose 0}x^m)$$ since we need $$3-k \geq 0$$ $$= \sum_{k=0}^3 ({6\choose k}(-2)^k {3-k\choose 0})$$ $$= \sum_{k=0}^3 ({6\choose k}(-2)^k)$$ $$= {6\choose0} + (-2){6\choose1} + (4){6\choose2} + (-8){6\choose3}$$ $$=1-12+60-160$$ $$= -111$$ But when I do the expansion on WolframAlpha, I see that $$[x^0]=1$$, $$[x^1]=-12$$, $$[x^3]=-160$$, so what am I doing wrong? (I am following a similar idea to Trevor Gunn's answer in this question In how many ways the sum of 5 thrown dice is 25?) • Did you confuse $(-x)^k$ and $x^{-k}$? Also, did you mean $x^3$ where you wrote $x^4$? – J. W. Tanner May 22 at 0:29 • I might have worded the question confusingly, the $[x^3]$ is not multiplication, but rather finding the coefficient of the $x^3$ term in the equation that follows – Mark Dodds May 22 at 0:36 • Oh, then maybe you should say finding the coefficient of $x^3$ in ... – J. W. Tanner May 22 at 0:37 • Your work is correct, and this WolframAlpha link verifies it. What specifically did you see that made you think you were wrong? – Mike Earnest May 22 at 0:55 • @JohnOmielan looking back i think that is what has happened. Thanks for clearing that up – Mark Dodds May 22 at 0:59 As Mike Earnest confirmed in the comments, your work is correct. As I commented, and you've stated it's likely the case, the WolframAlpha results of $$[x^0]=1$$, $$[x^1]=-12$$, $$[x^3]=-160$$ probably come from the coefficients in the power expansion of $$(1-2x)^6$$ instead. You can see this directly from the first, second & fourth terms in your second last highlighted line, i.e., $$=1-12+60-160$$
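A quick symbolic confirmation of the arithmetic above (an added check, not part of the thread):

```python
# Verify that the x^3 coefficient of (1-x)^(-1) (1-2x)^6 is -111.
import sympy as sp

x = sp.symbols('x')
expr = (1 - 2*x)**6 / (1 - x)
coeff = sp.series(expr, x, 0, 4).removeO().coeff(x, 3)
print(coeff)  # prints -111, matching 1 - 12 + 60 - 160
```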
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941224813461304, "perplexity": 421.3833637249228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998558.51/warc/CC-MAIN-20190617183209-20190617205209-00170.warc.gz"}
http://tex.stackexchange.com/questions/20096/proper-way-to-use-ensuremath-to-define-a-macro-useable-in-and-out-of-math-mode
Proper way to use \ensuremath to define a macro useable in and out of math mode

Based on this solution related to defining a macro, I came up with this macro to help me define a macro that I can use either in or outside of math mode. The example as is functions as I want. However, this solution requires me to NOT put $...$ around the second parameter to the \DefineNamedFunction macro. I would like to be able to include the dollar signs, or not include them. One solution is to modify \DefineNamedFunction to strip out the dollar signs if they are included in the macro call, using the xstring package, but this to me feels like a hack, and I am thinking that there is probably a cleaner TeX way to do this.

So to summarize: How do I change \DefineNamedFunction such that I can use both the commented and uncommented calls to this macro, and still be able to use the definition inside and outside of math mode?

\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}

\newcommand{\DefineNamedFunction}[2]{% {FunctionName}{FunctionExpression}
  \expandafter\providecommand\expandafter{\csname#1\endcsname}{\textcolor{red}{\ensuremath{#2}}}%
}

\begin{document}
\DefineNamedFunction{FunctionF}{y = 2 \sin x}
%\DefineNamedFunction{FunctionF}{$y = 2 \sin x$}

I can use FunctionF inside math mode as $\FunctionF$, but can also use this
outside of math mode as \FunctionF.
\end{document}

2 Answers

Probably \ensuremath{\textcolor{red}{#2}} is what you need, since \textcolor can be used in text and in math. The complete definition is

\newcommand{\DefineNamedFunction}[2]{% {FunctionName}{FunctionExpression}
  \expandafter\providecommand\csname#1\endcsname
    {\ensuremath{\textcolor{red}{#2}}}%
}
...
\DefineNamedFunction{FunctionF}{y=2\sin x}

$\FunctionF$ and \FunctionF

I've also deleted the braces that require another \expandafter, but that's not the problem. Of course, you can't call \DefineNamedFunction{FunctionFF}{$y=x$} and I wouldn't know why you'd want it. But in any case there's a simple solution

\newcommand{\DefineNamedFunction}[2]{%
  \expandafter\providecommand\csname#1\endcsname
    {\ensuremath{\begingroup\color{red}\DNFnorm#2\endgroup}}}
\makeatletter
\def\DNFnorm{\@ifnextchar$\DNFnormi{}}
\def\DNFnormi$#1${#1}
\makeatother

The input is "normalized" by removing the $ tokens before and after, if present. With \begingroup\color{red}...\endgroup the spaces in the subformula participate in the stretching and shrinking of the spaces in the line.

- That does not seem to behave any differently. Yes, in the MWE I should have switched it to \newcommand, but I need \providecommand in my real usage. – Peter Grill Jun 6 '11 at 18:22
- @Peter: really? To me it does exactly what you want. I'll edit the answer with the complete definition. – egreg Jun 6 '11 at 19:57
- The only reason I would want to be able to call \DefineNamedFunction{FunctionFF}{$y=x$} (with the dollar signs) is that it is more natural to do so. I didn't want to have to remember that when I use this macro I do not put the $, but for all the other macros where I have math, I do put the $. – Peter Grill Jun 6 '11 at 20:16
- This solution using \DNFnorm looks simpler. – Peter Grill Jun 6 '11 at 20:45
- @Peter: it's just a command defined for the purpose. It checks whether #2 starts with $; in this case it is substituted by \DNFnormi, which throws away the two $ tokens; otherwise it does nothing.
– egreg Jun 6 '11 at 20:51

\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}
\makeatletter
\def\DefineNamedFunction#1#2{\expandafter\DefineNamedFunction@i#1\@nil#2\@nil}
\def\DefineNamedFunction@i#1\@nil{%
  \@ifnextchar${\DefineNamedFunction@ii{#1}}{\DefineNamedFunction@iii{#1}}}
\def\DefineNamedFunction@ii#1$#2$\@nil{%
  \@namedef{#1}{\ifmmode\textcolor{red}{#2}\else\textcolor{red}{$#2$}\fi}}
\def\DefineNamedFunction@iii#1#2\@nil{%
  \@namedef{#1}{\ifmmode\textcolor{red}{#2}\else\textcolor{red}{$#2$}\fi}}
\makeatother

\DefineNamedFunction{FunctionF}{y = 2 \sin x}
\DefineNamedFunction{FunctionFF}{$y = 2 \sin x$}

\begin{document}
I can use \FunctionF\ inside math mode as $\FunctionF$, but can also use this
outside of math mode as $\FunctionFF$ and \FunctionFF.
\end{document}

- This works great. Thanks. So, is this basically doing the job of \ensuremath manually? – Peter Grill Jun 6 '11 at 18:27
- More or less ... the problem is \textcolor when it is used in math mode with a math argument. – Herbert Jun 6 '11 at 18:30
- @Herbert: apart from making the + an ordinary atom, $\textcolor{blue}{a}\textcolor{red}{+}\textcolor{green}{b}$ works perfectly. – egreg Jun 6 '11 at 20:08
- @egreg: I was talking about $\textcolor{blue}{$a$}$ ... – Herbert Jun 7 '11 at 4:25
- @Herbert: I see; \textcolor is smart enough to know when it's called in math mode: maybe a pair \textcolor and \mathcolor would have been better. – egreg Jun 7 '11 at 8:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931913018226624, "perplexity": 1550.3653272746726}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00051-ip-10-164-35-72.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Adiabatic_lapse_rate
# Lapse rate

The lapse rate is defined as the rate at which atmospheric temperature decreases with increase in altitude.[1][2] The terminology arises from the word lapse in the sense of a decrease or decline. While most often applied to Earth's troposphere, the concept can be extended to any gravitationally supported ball of gas.

## Definition

A formal definition from the Glossary of Meteorology[3] is:

The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.

In the lower regions of the atmosphere (up to altitudes of approximately 12,000 metres (39,000 ft)), temperature decreases with altitude at a fairly uniform rate. Because the atmosphere is warmed by convection from Earth's surface, this lapse or reduction in temperature is normal with increasing distance from the conductive source.

Although the actual atmospheric lapse rate varies, under normal atmospheric conditions the average atmospheric lapse rate results in a temperature decrease of 6.4 C°/km (3.5 F° or 1.95 C° per 1,000 ft) of altitude above ground level.

The measurable lapse rate is affected by the moisture content of the air (humidity). A dry lapse rate of 10 C°/km (5.5 F° or 3.05 C° per 1,000 ft) is often used to calculate temperature changes in air not at 100% relative humidity. A wet lapse rate of 5.5 C°/km (3 F° or 1.68 C° per 1,000 ft) is used to calculate the temperature changes in air that is saturated (i.e., air at 100% relative humidity). Although actual lapse rates do not strictly follow these guidelines, they present a model sufficiently accurate to predict temperature changes associated with updrafts and downdrafts. This differential lapse rate (dependent upon both difference in conductive heating and adiabatic expansion or compression) results in the formation of warm downslope winds (e.g., Chinook winds, Santa Ana winds, etc.). The atmospheric lapse rate, combined with adiabatic cooling and heating of air related to the expansion and compression of atmospheric gases, presents a unified model explaining the cooling of air as it moves aloft and the heating of air as it descends downslope.

Atmospheric stability can be measured in terms of lapse rates (i.e., the temperature differences associated with vertical movement of air). The atmosphere is considered conditionally unstable where the environmental lapse rate causes a slower decrease in temperature with altitude than the dry adiabatic lapse rate, as long as no latent heat is released (i.e. the saturated adiabatic lapse rate applies). Unconditional instability results when the dry adiabatic lapse rate causes air to cool slower than the environmental lapse rate, so air will continue to rise until it reaches the same temperature as its surroundings. Where the saturated adiabatic lapse rate is greater than the environmental lapse rate, the air cools faster than its environment and thus returns to its original position, irrespective of its moisture content.

Although the atmospheric lapse rate (also known as the environmental lapse rate) is most often used to characterize temperature changes, many properties (e.g. atmospheric pressure) can also be profiled by lapse rates.

## Mathematical definition

In general, a lapse rate is the negative of the rate of temperature change with altitude change, thus:

$$\gamma = -\frac{dT}{dz}$$

where $\gamma$ is the lapse rate, given in units of temperature divided by units of altitude, $T$ is temperature, and $z$ is altitude.
Note: In some cases, $\Gamma$ or $\alpha$ can be used to represent the adiabatic lapse rate in order to avoid confusion with other terms symbolized by $\gamma$, such as the specific heat ratio[4] or the psychrometric constant.[5]

## Types of lapse rates

There are two types of lapse rate:

• Environmental lapse rate (ELR) – which refers to the actual change of temperature with altitude for the stationary atmosphere (i.e. the temperature gradient)
• The adiabatic lapse rates – which refer to the change in temperature of a parcel of air as it moves upwards (or downwards) without exchanging heat with its surroundings. There are two adiabatic rates:[6]
  • Dry adiabatic lapse rate (DALR)
  • Moist (or saturated) adiabatic lapse rate (SALR)

### Environmental lapse rate

The environmental lapse rate (ELR) is the rate of decrease of temperature with altitude in the stationary atmosphere at a given time and location. As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.49 K(C°)/1,000 m[citation needed] (3.56 F° or 1.98 K(C°)/1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is −56.5 °C (−69.7 °F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture. Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.

### Dry adiabatic lapse rate

[Figure: Emagram showing the variation of dry adiabats (bold lines) and moist adiabats (dashed lines) with pressure and temperature]

The dry adiabatic lapse rate (DALR) is the rate of temperature decrease with altitude for a parcel of dry or unsaturated air rising under adiabatic conditions. Unsaturated air has less than 100% relative humidity; i.e. its actual temperature is higher than its dew point. The term adiabatic means that no heat transfer occurs into or out of the parcel. Air has low thermal conductivity, and the bodies of air involved are very large, so transfer of heat by conduction is negligibly small. Under these conditions when the air rises (for instance, by convection) it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes on the air around it, doing work (thermodynamics). Since the parcel does work but gains no heat, it loses internal energy so that its temperature decreases. The rate of temperature decrease is 9.8 C°/km (3.0 C° or 5.38 F° per 1,000 ft). The reverse occurs for a sinking parcel of air.[7]

Since for an adiabatic process $P\,dV = -V\,dP/\gamma$, the first law of thermodynamics can be written as

$$m\,c_v\,dT - \frac{V\,dP}{\gamma} = 0.$$

Also, since $\alpha = V/m$ and $\gamma = c_p/c_v$, we can show that

$$c_p\,dT - \alpha\,dP = 0,$$

where $c_p$ is the specific heat at constant pressure and $\alpha$ is the specific volume.

Assuming an atmosphere in hydrostatic equilibrium:[8]

$$dP = -\rho\,g\,dz,$$

where $g$ is the standard gravity and $\rho$ is the density. Combining these two equations to eliminate the pressure, one arrives at the result for the DALR,[9]

$$\Gamma_d = -\frac{dT}{dz} = \frac{g}{c_p} = 9.8\ ^{\circ}\mathrm{C}/\mathrm{km}.$$
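As a quick numerical check of this result, here is a minimal plain-Python sketch; $g$ and $c_p$ take the values quoted for the saturated-rate formula below:

```python
g = 9.8076    # gravitational acceleration, m/s^2
c_p = 1003.5  # specific heat of dry air at constant pressure, J/(kg K)

# Dry adiabatic lapse rate Gamma_d = g / c_p, converted from K/m to K/km
print(round(g / c_p * 1000, 2))  # -> 9.77, i.e. roughly 9.8 C deg per km
```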
When the air is saturated with water vapor (at its dew point), the moist adiabatic lapse rate (MALR) or saturated adiabatic lapse rate (SALR) applies. This lapse rate varies strongly with temperature. A typical value is around 5 C°/km (1.5 C° or 2.7 F° per 1,000 ft).[citation needed]

The reason for the difference between the dry and moist adiabatic lapse rate values is that latent heat is released when water condenses, thus decreasing the rate of temperature drop as altitude increases. This heat release process is an important source of energy in the development of thunderstorms. An unsaturated parcel of air of given temperature, altitude and moisture content below that of the corresponding dewpoint cools at the dry adiabatic lapse rate as altitude increases until the dewpoint line for the given moisture content is intersected. As the water vapor then starts condensing, the air parcel subsequently cools at the slower moist adiabatic lapse rate if the altitude increases further.

The saturated adiabatic lapse rate is given approximately by this equation from the glossary of the American Meteorology Society:[10]

$$\Gamma_w = g\, \frac{1 + \dfrac{H_v\, r}{R_{sd}\, T}}{c_{pd} + \dfrac{H_v^2\, r}{R_{sw}\, T^2}} = g\, \frac{1 + \dfrac{H_v\, r}{R_{sd}\, T}}{c_{pd} + \dfrac{H_v^2\, r\, \epsilon}{R_{sd}\, T^2}}$$

where:

$\Gamma_w$ = wet adiabatic lapse rate, K/m
$g$ = Earth's gravitational acceleration = 9.8076 m/s²
$H_v$ = heat of vaporization of water = 2,501,000 J/kg
$R_{sd}$ = specific gas constant of dry air = 287 J kg⁻¹ K⁻¹
$R_{sw}$ = specific gas constant of water vapor = 461.5 J kg⁻¹ K⁻¹
$\epsilon = R_{sd}/R_{sw}$ = the dimensionless ratio of the specific gas constant of dry air to the specific gas constant for water vapor = 0.622
$e$ = the water vapor pressure of the saturated air
$p$ = the pressure of the saturated air
$r = \epsilon e/(p-e)$ = the mixing ratio of the mass of water vapor to the mass of dry air[11]
$T$ = temperature of the saturated air, K
$c_{pd}$ = the specific heat of dry air at constant pressure = 1003.5 J kg⁻¹ K⁻¹

### Thermodynamic-based lapse rate

Robert Essenhigh developed a comprehensive thermodynamic model of the lapse rate based on the Schuster–Schwarzschild (S–S) integral equations of transfer that govern radiation through the atmosphere, including absorption and radiation by greenhouse gases.[12][13] His solution "predicts, in agreement with the Standard Atmosphere experimental data, a linear decline of the fourth power of the temperature, T4, with pressure, P, and, as a first approximation, a linear decline of T with altitude, h, up to the tropopause at about 10 km (the lower atmosphere)."[13] The predicted normalized density ratio and pressure ratio differ and fit the experimental data well.[citation needed] Sreekanth Kolan extended Essenhigh's model to include the energy balance for the lower and upper atmospheres.[14][self-published source?][third-party source needed]

## Significance in meteorology

The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).

As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about −2 C° per 1,000 m.
If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present, and the convective condensation level (CCL) when mechanical lift is absent, in which case the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters. The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 8 C° per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/C°.

If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.

If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or, if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).

If the environmental lapse rate is larger than the dry adiabatic lapse rate (a superadiabatic lapse rate), the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon over many land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.

Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals).

The difference in moist adiabatic lapse rate and the dry rate is the cause of the foehn wind phenomenon (also known as "Chinook winds" in parts of North America).

## References

1. Mark Zachary Jacobson (2005). Fundamentals of Atmospheric Modeling (2nd ed.). Cambridge University Press. ISBN 0-521-83970-X.
2. C. Donald Ahrens (2006). Meteorology Today (8th ed.). Brooks/Cole Publishing. ISBN 0-495-01162-2.
3. Todd S. Glickman (June 2000). Glossary of Meteorology (2nd ed.). American Meteorological Society, Boston. ISBN 1-878220-34-9.
4. Salomons, Erik M. (2001). Computational Atmospheric Acoustics (1st ed.). Kluwer Academic Publishers. ISBN 1-4020-0390-0.
5. Stull, Roland B. (2001). An Introduction to Boundary Layer Meteorology (1st ed.). Kluwer Academic Publishers. ISBN 90-277-2769-4.
6. Adiabatic Lapse Rate, IUPAC Goldbook.
7. Danielson, Levin, and Abrams, Meteorology, McGraw Hill, 2003.
8. Landau and Lifshitz, Fluid Mechanics, Pergamon, 1979.
9. Kittel and Kroemer, Thermal Physics, Freeman, 1980; chapter 6, problem 11.
10. Glossary of Meteorology, American Meteorological Society.
11. http://glossary.ametsoc.org/wiki/Mixing_ratio
12. Robert H. Essenhigh (2003). "Prediction from an Analytical Model of: The Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere; and Potential for Profiles-Perturbation by Combustion Emissions". Paper No. 03F-44: Western States Section Combustion Institute Meeting: Fall (October) 2003.
13. Robert H. Essenhigh (2006). "Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S–S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions". Energy & Fuels 20 (3): 1057–1067.
14. Sreekanth Kolan (2009). "Study of energy balance between lower and upper atmosphere". Ohio State University. osu1259613805.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566977977752686, "perplexity": 1727.6892836937132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246638820.85/warc/CC-MAIN-20150417045718-00117-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/the-speed-boat-still-water-15-km-hr-it-can-go-30-km-upstream-return-downstream-original-point-4-hours-30-minutes-find-speed-stream-quadratic-equations_27486
# The Speed of a Boat in Still Water is 15 Km/Hr. It Can Go 30 Km Upstream and Return Downstream to the Original Point in 4 Hours 30 Minutes. Find the Speed of the Stream. - Mathematics Course

#### Question

The speed of a boat in still water is 15 km/hr. It can go 30 km upstream and return downstream to the original point in 4 hours 30 minutes. Find the speed of the stream.

#### Solution

Let the speed of the stream be x km/hr.

∴ Speed of the boat downstream = (15 + x) km/hr
Speed of the boat upstream = (15 − x) km/hr

Time taken to go 30 km upstream = 30/(15 − x) hr
Time taken to come back downstream = 30/(15 + x) hr

From the given information,

30/(15 + x) + 30/(15 − x) = 4 30/60

30/(15 + x) + 30/(15 − x) = 9/2

(450 + 30x + 450 − 30x)/((15 + x)(15 − x)) = 9/2

900/(225 − x²) = 9/2

100/(225 − x²) = 1/2

225 − x² = 200

x² = 25

x = ±5

But x cannot be negative, so x = 5.

Thus, the speed of the stream is 5 km/hr.
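The same equation can be checked mechanically; a short SymPy sketch (not part of the original solution):

```python
from sympy import symbols, solve, Rational

x = symbols('x', positive=True)  # speed of the stream, km/hr
# 30 km upstream at (15 - x) km/hr plus 30 km downstream at (15 + x) km/hr
# takes 4 hours 30 minutes = 9/2 hours in total.
eq = 30/(15 - x) + 30/(15 + x) - Rational(9, 2)
print(solve(eq, x))  # -> [5]
```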
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833080530166626, "perplexity": 1208.504076218333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00358.warc.gz"}
http://mathhelpforum.com/advanced-algebra/78867-torsion-coefficients.html
# Math Help - Torsion coefficients

1. ## Torsion coefficients

How would I go about finding the torsion coefficients of Z10 x Z36 x Z14 x Z21? I think the first stage is to write 10, 36, 14 and 21 as products of primes, but I'm not sure where to go from there.

2. Originally Posted by d_p_osters
How would I go about finding the torsion coefficients of Z10 x Z36 x Z14 x Z21? I think the first stage is to write 10, 36, 14 and 21 as products of primes, but I'm not sure where to go from there.

I am not exactly sure what you are asking, but it seems to me an exercise in the classification theorem for abelian groups. You need to bring this to standard form. Note $\mathbb{Z}_{10} \simeq \mathbb{Z}_2\times \mathbb{Z}_5$, and so on. Now replace each factor with the equivalent isomorphic form to bring this expression into a direct product of groups of the form $\mathbb{Z}_{p^k}$.
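The recombination into torsion coefficients can also be automated. Here is a small sketch (assuming SymPy's factorint; the helper name invariant_factors is ours) that collects the prime-power components and multiplies the k-th largest power of each prime into the k-th invariant factor:

```python
from sympy import factorint

def invariant_factors(orders):
    # Primary decomposition: record the exponent of every prime-power factor.
    primary = {}
    for n in orders:
        for p, e in factorint(n).items():
            primary.setdefault(p, []).append(e)
    # The k-th invariant factor takes the k-th largest power of each prime.
    width = max(len(exps) for exps in primary.values())
    factors = []
    for k in range(width):
        d = 1
        for p, exps in primary.items():
            exps = sorted(exps, reverse=True)
            if k < len(exps):
                d *= p ** exps[k]
        factors.append(d)
    return factors  # [d1, d2, ...] with d_{k+1} dividing d_k

print(invariant_factors([10, 36, 14, 21]))  # -> [1260, 42, 2]
```

So Z10 x Z36 x Z14 x Z21 is isomorphic to Z2 x Z42 x Z1260, with torsion coefficients 2 | 42 | 1260.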
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9193902015686035, "perplexity": 164.51141407744882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827791.21/warc/CC-MAIN-20160723071027-00126-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-14-calculus-of-vector-valued-functions-14-1-vector-valued-functions-exercises-page-710/19
Calculus (3rd Edition)

Circle in the xz-plane of radius $1$ centered at $(0,0,4)$

We put $$x=\sin t,\quad y=0, \quad z=4+\cos t,$$ hence we get $$x^2+(z-4)^2= \sin^2t+\cos^2t=1,$$ which is a circle in the xz-plane of radius $1$ centered at $(0,0,4)$.
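A quick symbolic confirmation (a SymPy sketch, not part of the textbook solution):

```python
from sympy import symbols, sin, cos, simplify

t = symbols('t')
x, y, z = sin(t), 0, 4 + cos(t)  # the given parametrization
print(simplify(x**2 + (z - 4)**2))  # -> 1, so x^2 + (z - 4)^2 = 1 with y = 0
```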
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992293119430542, "perplexity": 336.85991209057204}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00127.warc.gz"}
https://wusa.ca/about/your-money/funding/student-life-endowment-funding-application-form/
Please ensure that the estimate is a formal one. Include Work Requests and cost estimates from Plant Operations. If you are putting in a request for hard goods (e.g. microwaves, furniture, equipment) you MUST include at least TWO invoices. Without this documentation, the committee will not consider your request. If your file is too large, then e-mail it to [email protected] Files must be less than 64 MB. Allowed file types: gif jpg jpeg png txt pdf doc docx ppt pptx.

Please attach a timeline of activities for your project, including any prep work. The timeline should include a proposed date of completion. Please note that without the documentation, your application will not be considered. Files must be less than 64 MB. Allowed file types: pdf doc docx ppt pptx.

Please make sure of the following before submitting the form or your application will not be considered:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254453539848328, "perplexity": 3789.8889768582508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00461.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/161580-fourier-series-addition-cosine-sine-different-frequencies2.html
# Thread: Fourier series - addition of cosine and sine of different frequencies2 1. ## Fourier series - addition of cosine and sine of different frequencies2 I'm trying to find the fourier series of, f(t) = cos(4t) + sin(6t) I know the period is pi and wo = 2. The equation I have been using is an = (2 / pi) * integral(0 to pi) [ cos(4t)*cos( wo*n*t)] then I add this to the corresponding "an" of sin(6t). I keep getting zeros for all constants ao, an, and bn. My question is, what do I use for "wo?" Do I use 2 for all constants? Or do I use the particular wo for each term, e.g., 4 for cos(4t) and 6 for sin(6t). Are my limits of integration correct? I thought you went to 0 to T where T is the period. Thanks
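The thread breaks off without an answer, but the integrals themselves are easy to probe numerically. A SciPy sketch (an illustration, not from the thread) using wo = 2 for every term and integrating over one period [0, pi]:

```python
import numpy as np
from scipy.integrate import quad

w0, T = 2.0, np.pi  # fundamental frequency and period of f(t) = cos(4t) + sin(6t)
f = lambda t: np.cos(4*t) + np.sin(6*t)

for n in range(1, 5):
    an = 2/T * quad(lambda t: f(t) * np.cos(n*w0*t), 0, T)[0]
    bn = 2/T * quad(lambda t: f(t) * np.sin(n*w0*t), 0, T)[0]
    print(n, round(an, 6), round(bn, 6))
# Only a_2 (from cos 4t = cos(2*w0*t)) and b_3 (from sin 6t = sin(3*w0*t))
# come out nonzero, and both equal 1.
```

This supports using the single fundamental wo = 2 (from the common period T = pi) in every integral, rather than a different wo per term.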
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219678640365601, "perplexity": 1249.5063342888743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707188217/warc/CC-MAIN-20130516122628-00053-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/1205.6926/
# Indirect Coulomb Energy for Two-Dimensional Atoms

Rafael D. Benguria and Matěj Tušek
Departamento de Física, P. Universidad Católica de Chile

###### Abstract.

In this manuscript we provide a family of lower bounds on the indirect Coulomb energy for atomic and molecular systems in two dimensions in terms of a functional of the single particle density with gradient correction terms.

## 1. Introduction

Since the advent of quantum mechanics, the impossibility of solving exactly problems involving many particles has been clear. These problems are of interest in such areas as atomic and molecular physics, condensed matter physics, and nuclear physics. It was, therefore, necessary from the early beginnings to estimate various energy terms of a system of electrons as functionals of the single particle density $\rho_\psi$, rather than as functionals of their wave function $\psi$. The first estimates of this type were obtained by Thomas and Fermi in 1927 (see [14] for a review), and by now they have given rise to a whole discipline under the name of Density Functional Theory (see, e.g., [1] and references therein).

In Quantum Mechanics of many particle systems the main object of interest is the wavefunction $\psi\in\bigwedge^N L^2(\mathbb{R}^3)$ (the antisymmetric tensor product of $N$ copies of $L^2(\mathbb{R}^3)$). More explicitly, for a system of $N$ fermions, $\psi=\psi(x_1,\dots,x_N)$, which, in view of Pauli's Exclusion Principle, is antisymmetric under the exchange of any two of its arguments, and normalized so that $\|\psi\|_2=1$. Here, $x_i\in\mathbb{R}^3$ denote the coordinates of the $i$-th particle. From the wavefunction one can define the one–particle density (single particle density) as

$$\rho_\psi(x)=N\int_{\mathbb{R}^{3(N-1)}}|\psi(x,x_2,\dots,x_N)|^2\,dx_2\dots dx_N, \qquad(1)$$

and from here it follows that $\int_{\mathbb{R}^3}\rho_\psi\,dx=N$, the number of particles, and $\rho_\psi(x)$ is the density of particles at $x$. Notice that since $\psi$ is antisymmetric, $|\psi|^2$ is symmetric, and it is immaterial which variable is set equal to $x$ in (1).

In Atomic and Molecular Physics, given that the expectation value of the Coulomb attraction of the electrons by the nuclei can be expressed in closed form in terms of $\rho_\psi$, the interest focuses on estimating the expectation value of the kinetic energy of the system of electrons and the expectation value of the Coulomb repulsion between the electrons. Here, we will be concerned with the latter.

The most natural approximation to the expectation value of the Coulomb repulsion between the electrons is given by

$$D(\rho_\psi,\rho_\psi)=\frac12\int\rho_\psi(x)\,\frac{1}{|x-y|}\,\rho_\psi(y)\,dx\,dy, \qquad(2)$$

which is usually called the direct term. The remainder, i.e., the difference between the expectation value of the electronic repulsion and $D(\rho_\psi,\rho_\psi)$, say $E(\psi)$, is called the indirect term. In 1930, Dirac [6] gave the first approximation to the indirect Coulomb energy in terms of the single particle density. Using an argument with plane waves, he approximated $E$ by

$$E\approx-c_D\int\rho_\psi^{4/3}\,dx, \qquad(3)$$

where $c_D=(3/4)(3/\pi)^{1/3}\approx 0.7386$ (see, e.g., [20], p. 299). Here we use units in which the absolute value of the charge of the electron is one. The first rigorous lower bound for $E$ was obtained by E.H. Lieb in 1979 [13], using the Hardy–Littlewood Maximal Function [27]. There he found that $E\geq-8.52\int\rho_\psi^{4/3}\,dx$. The constant was substantially improved by E.H. Lieb and S. Oxford in 1981 [16], who proved the bound

$$E\geq-C\int\rho_\psi^{4/3}\,dx, \qquad(4)$$

with $C=1.68$. In their proof, Lieb and Oxford used Onsager's electrostatic inequality [22], and a localization argument. The best value for $C$ is unknown, but Lieb and Oxford [16] proved that it is larger or equal than $1.234$. The Lieb–Oxford value was later improved to $1.6358$ by Chan and Handy, in 1999 [5].
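As a quick numerical cross-check of the constants quoted in this introduction (a plain-Python sketch, not part of the paper):

```python
import math

# Dirac's exchange constant c_D = (3/4) * (3/pi)^(1/3)
c_D = 0.75 * (3 / math.pi) ** (1 / 3)
print(round(c_D, 4))  # -> 0.7386

# The best Lieb-Oxford constant C is bracketed by 1.234 <= C <= 1.68,
# with the lower end later raised to 1.6358 by Chan and Handy.
print(1.234 <= 1.6358 <= 1.68)  # -> True
```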
Since the work of Lieb and Oxford [16], there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradient of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [11, 24, 28] and references therein). Recently, Benguria, Bley, and Loss obtained an alternative to (4), which has a lower constant (close to $1.45$) at the expense of adding a gradient term (see Theorem 1.1 in [2]), which we state below in a slightly modified way,

###### Theorem 1.1 (Benguria, Bley, Loss [2]).

For any normalized wave function $\psi$ and any $\epsilon>0$ we have the estimate

$$E(\psi)\geq-1.4508\,(1+\epsilon)\int_{\mathbb{R}^3}\rho_\psi^{4/3}\,dx-\frac{3}{2\epsilon}\left(\sqrt{\rho_\psi},|p|\sqrt{\rho_\psi}\right) \qquad(5)$$

where

$$\left(\sqrt{\rho},|p|\sqrt{\rho}\right):=\int_{\mathbb{R}^3}\big|\widehat{\sqrt{\rho}}(k)\big|^2\,|2\pi k|\,dk=\frac{1}{2\pi^2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|\sqrt{\rho}(x)-\sqrt{\rho}(y)|^2}{|x-y|^4}\,dx\,dy\,. \qquad(6)$$

Here, $\hat f$ denotes the Fourier transform

$$\hat f(k)=\int_{\mathbb{R}^3}e^{-2\pi ik\cdot x}f(x)\,dx\,.$$

###### Remarks.

i) For many physical states the contribution of the last two terms in (5) is small compared with the contribution of the first term. See, e.g., the Appendix in [2]; ii) For the second equality in (6) see, e.g., [15], Section 7.12, equation (4), p. 184; iii) It was already noticed by Lieb and Oxford (see the remark after equation (26), p. 261 on [16]) that somehow for uniform densities the Lieb–Oxford constant should be $1.45$ instead of $1.68$; iv) In the same vein, J. P. Perdew [23], by employing results for a uniform electron gas in its low density limit, showed that in the Lieb–Oxford bound one ought to have a constant at least as large as the low-density uniform-gas value (see also [11]).

After the work of Lieb and Oxford [16] many people have considered bounds on the indirect Coulomb energy in lower dimensions (in particular see, e.g., [10] for the one-dimensional case; [18], [21], [25], and [26] for the two-dimensional case, which is important for the study of quantum dots). Recently, Benguria, Gallegos, and Tušek [4] gave an alternative to the Lieb–Solovej–Yngvason bound [18], with a constant much closer to the numerical values proposed in [26] (see also the references therein), at the expense of adding a gradient term:

###### Theorem 1.2 (Estimate on the indirect Coulomb energy for two dimensional atoms [4]).

Let $\psi\in L^2(\mathbb{R}^{2N})$ be normalized to one and symmetric (or antisymmetric) in all its variables. Define

$$\rho_\psi(x)=N\int_{\mathbb{R}^{2(N-1)}}|\psi|^2(x,x_2,\dots,x_N)\,dx_2\dots dx_N.$$

If $\int_{\mathbb{R}^2}\rho_\psi^{3/2}\,dx<\infty$ and $\int_{\mathbb{R}^2}|\nabla\rho_\psi^{1/4}|^2\,dx<\infty$, then, for all $\epsilon>0$,

$$E(\psi)\equiv\Big\langle\psi,\sum_{1\le i<j\le N}\frac{1}{|x_i-x_j|}\,\psi\Big\rangle-D(\rho_\psi,\rho_\psi)\geq-\beta(1+\epsilon)\int_{\mathbb{R}^2}\rho_\psi^{3/2}\,dx-\frac{4}{\beta\epsilon}\int_{\mathbb{R}^2}\big|\nabla\rho_\psi^{1/4}\big|^2\,dx, \qquad(7)$$

with

$$\beta=\left(\tfrac{4}{3}\right)^{3/2}\sqrt{5\pi-1}\simeq 5.9045. \qquad(8)$$

###### Remarks.

i) The constant $\beta$ in (7) is substantially lower than the constant found in [18] (see equation (5.24) of lemma 5.3 in [18]). ii) Moreover, the constant $\beta$ is close to the numerical values of [25] (and references therein), but is not sharp.

In the literature there are, so far, three approaches to prove lower bounds on the exchange energy, namely:

i) The approach introduced by E.H. Lieb in 1979 [13], which uses as the main tool the Hardy–Littlewood Maximal Function [27]. This method was used in the first bound of Lieb [13]. Later it was used in [18] to obtain a lower bound on the exchange energy of two–dimensional Coulomb systems. It has the advantage that it may be applied in a wide class of problems, but it does not yield sharp constants.

ii) The use of Onsager's electrostatic inequality [22] together with localization techniques, introduced by Lieb and Oxford [16]. This method yields very sharp constants. It was used recently in [2] to get a new type of bounds including gradient terms (for three dimensional Coulomb systems). In some sense the constant recently obtained in [2] is the best possible (see the comments after Theorem 1.1).
The only disadvantage of this approach is that it depends on the use of Onsager's electrostatic inequality (which in turn relies on the fact that the Coulomb potential is the fundamental solution of the Laplacian). Because of this, it cannot be used in the case of two–dimensional atoms, because $1/|x|$ is not the fundamental solution of the two–dimensional Laplacian.

iii) The use of the stability of matter of an auxiliary many particle system. This idea was used by Lieb and Thirring [19] to obtain lower bounds on the kinetic energy of a system of electrons in terms of the single particle density. In connection with the problem of getting lower bounds on the exchange energy it was used for the first time in [4], to get a lower bound on the exchange energy of two–dimensional Coulomb systems including gradient terms. This method provides very good, although not sharp, constants.

As we mentioned above, during the last twenty years there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradients of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [11, 24, 28] and references therein). While the form of the leading term (i.e., the dependence as an integral of $\rho^{4/3}$ in three dimensions or as an integral of $\rho^{3/2}$ in two dimensions) is dictated by Dirac's argument (using plane waves), there is no such a clear argument, nor a common agreement, concerning the structure of the gradient corrections. The reason we introduced the particular gradient term in our earlier work [4] was basically due to the fact that we already knew the stability of matter arguments for the auxiliary system. However, there is a whole one parameter family of such gradient terms that can be dealt with in the same manner. In this manuscript we obtain lower bounds including as gradient terms this one–parameter family. One interesting feature of our bounds is that the constant in front of the leading term remains the same (i.e., its value is independent of the parameter that labels the different possible gradient terms), while the constant in front of the gradient term is parameter dependent. Our main result is the following theorem.

###### Theorem 1.3 (Estimate on the indirect Coulomb energy for two dimensional atoms).

Let $1<\gamma\le 2$, and $\alpha=(3-\gamma)/(2\gamma)$. Assume $\int_{\mathbb{R}^2}\rho_\psi^{3/2}\,dx<\infty$ and $\int_{\mathbb{R}^2}|\nabla\rho_\psi^{\alpha}|^{\gamma}\,dx<\infty$. Let $C(\gamma)=2^{1-\gamma/2}$, for $1<\gamma<2$, while $C(\gamma)=1$, for $\gamma=2$. Then, for all $\epsilon>0$ we have,

$$E(\psi)\equiv\Big\langle\psi,\sum_{1\le i<j\le N}\frac{1}{|x_i-x_j|}\,\psi\Big\rangle-D(\rho_\psi,\rho_\psi)\geq-\tilde a_2\int_{\mathbb{R}^2}\big|\nabla\rho_\psi^{\alpha}(x)\big|^{\gamma}\,dx-\tilde b_2\int_{\mathbb{R}^2}\rho_\psi(x)^{3/2}\,dx. \qquad(9)$$

Here,

$$\tilde b_2=\left(\tfrac{4}{3}\right)^{3/2}\sqrt{5\pi-1}\,(1+\epsilon)=\beta(1+\epsilon) \qquad(10)$$

where $\beta$ is the same constant that appears in (8). Also,

$$\tilde a_2=\frac{2\gamma\,C(\gamma)}{3-\gamma}\left(\frac{1}{\beta\epsilon}\,\frac{\gamma-1}{3-\gamma}\,C\!\left(\frac{\gamma}{\gamma-1}\right)\right)^{\gamma-1}. \qquad(11)$$

In particular, we have (with a fixed $\epsilon$)

$$\tilde a_2\big|_{\gamma\to 1^+}=\sqrt{2}.$$

###### Remarks.

i) Our previous Theorem 1.2 is a particular case of Theorem 1.3, for the value $\gamma=2$, $\alpha=1/4$. ii) Notice that $\tilde b_2$ is independent of $\gamma$, and it is therefore the same as in [4]. iii) The constant in front of the gradient term depends on the power $\gamma$ and, of course, on $\epsilon$. However, as $\gamma\to 1^+$, this constant converges to $\sqrt 2$, independently of the value of $\epsilon$.

In the rest of the manuscript we give a sketch of the proof of this theorem, which follows closely the proof of the particular result 1.2 in [4].

## 2. Auxiliary lemmas

First we need a standard convexity result.

###### Lemma 2.1.

Let $x,y\in\mathbb{R}$, and $1\le p<\infty$. Then

$$|x|^p+|y|^p\le C(p)\,|x+iy|^p,$$

where $C(p)=1$ for $p\ge 2$, and $C(p)=2^{1-p/2}$ for $1\le p<2$. The constant $C(p)$ is sharp.

###### Proof.

If $p\ge 2$, the assertion follows, e.g., from the fact that the $\ell^p$-norm is decreasing in $p$. On the other hand, for $1\le p<2$, the assertion follows from the concavity of the mapping $t\mapsto t^{p/2}$ for $t\ge 0$.
∎ The next lemma is a generalization of the analogous result introduced in [3] and used in the proof of Theorem 1.2 above (see [4]). This lemma is later needed to prove a Coulomb Uncertainty Principle. ###### Lemma 2.2. Let stands for the disk of radius and origin . Moreover let be a smooth function such that and . Then the following uncertainty principle holds ∣∣∣∫DR[2u(|x|)+|x|u′(|x|)]f(x)1/α∣∣∣≤≤1α(C(γ)∫DR|∇f(x)|γdx)1/γ(C(δ)∫DR|x|δ|u(|x|)|δ|f(x)|3/(2α)dx)1/δ, where 1α=2γ3−γ,1γ+1δ=1. (12) ###### Proof. Set . Then we have, ∫DR[2u(|x|)+|x|u′(|x|)]f(x)1/αdx=2∑j=1∫DR[∂jgj(x)]f(x)1/αdx==∑j∫DRf(x)∂j[gj(x)f(x)1/α−1]dx−(1α−1)∑j∫DRf(x)1/α−1gj(x)∂jf(x)dx==−1α∫DR⟨∇f(x),x⟩u(|x|)f(x)1/α−1dx. In the last equality we integrated by parts and made use of the fact that vanishes on the boundary . Next, the Hölder inequality implies ∣∣∣∫DR[2u(|x|)+|x|u′(|x|)]f(x)1/α∣∣∣≤1α(∫DR2∑j=1|∂jf(x)|γdx)1/γ(∫DR2∑j=1|xj|δ|u(|x|)|δ|f(x)|(1/α−1)δdx)1/δ. The rest follows from Lemma 2.1. ∎ ## 3. A stability result for an auxiliary two-dimensional molecular system Here we follow the method introduced in [4]. That is, in order to prove our Lieb–Oxford type bound (with gradient corrections) in two dimensions we use a stability of matter type result on an auxiliary molecular system. This molecular system is an extension of the one studied in [4], which was adapted from the similar result in three dimensions discussed in [3] (this last one corresponds to the zero mass limit of the model introduced in [7, 8, 9]). We begin with a typical Coulomb Uncertainty Principle which uses the kinetic energy of the electrons in a ball to bound the Coulomb singularities. ###### Theorem 3.1. For every smooth non-negative function on the closed disk , and for any we have abα∣∣ ∣∣∫DR(1|x|−2R)ρ(x)dx∣∣ ∣∣≤aγC(γ)γ∫DR|∇ρ(x)α|γdx+bδC(δ)δ∫DRρ3/2dx, where , and and are as in (12). ###### Proof. In Lemma 2.2 we set and . The assertion of the theorem then follows from Young inequality with coefficients and . ∎ And now we introduce the auxiliary molecular system through the “energy functional” ξ(ρ)=~a2∫R2|∇ρα|γdx+~b2∫R2ρ3/2dx−∫R2V(x)ρ(x)dx+D(ρ,ρ)+U, (13) where V(x)=K∑i=1z|x−Ri|,D(ρ,ρ)=12∫R2×R2ρ(x)1|x−y|ρ(y)dxdy,U=∑1≤i with and . As above we assume , and . The choice of (in terms of ) is made in such a way that the first two terms in (13) scale as one over a length. Indeed, let us denote K(ρ)≡~a2∫R2|∇ρα|γdx+~b2∫R2ρ3/2dx. Given any trial function and setting (thus preserving the norm), it is simple to see that with our choice of we have . If we now introduce constants so that ~a2=aγC(γ)2αγ (14) ~b2=bδ2C(δ)2αδ+b21 (again with given by (12)), we may use the proof of [4, Lemma 2.5] step by step. In particular, ξ(ρ)≥b21∫R2ρ3/2dx−∫R2Vρ dx+ab2K∑j=1∫Bj(12|x−Rj|−1Dj)ρ(x)dx+D(ρ,ρ)+U, where Dj=12min{|Rk−Rj|∣∣k≠j}, and is a disk with center and of radius . Thus as in [4, Lemma 2.5] we have that, for z≤ab2/2, (15) it holds ξ(ρ)≥K∑j=11Dj[z28−427b41(2z3(π−1)+πa3b32)]. (16) Consequently we arrive at the following theorem. ###### Theorem 3.2. For all non-negative functions such that and , we have that ξ(ρ)≥0, (17) provided that z≤maxσ∈(0,1)h(σ) (18) h(σ)=min⎧⎨⎩a2(~b23−γγ−1C(γγ−1)−1(1−σ))(γ−1)/γ,2764~b45π−1σ2⎫⎬⎭, (19) with given by (14). In order to arrive at (19) we set in (16) to be the smallest possible under the condition (15), i.e., , and we introduced . ## 4. Proof of Theorem 1.3 In this Section we give the proof of the main result of this paper, namely Theorem 1.3. 
We use an idea introduced by Lieb and Thirring in 1975 in their proof of the stability of matter [19] (see also the review article [12] and the recent monograph [17]). This idea was first used in this context in [4]. ###### Proof of Theorem 1.3. Consider the inequality (17), with (where is the number of electrons in our original system), (i.e., the charge of the electrons), and (for all ). With this choice, according to (18), the inequality (17) is valid as long as and (that are now free parameters) satisfy the constraint, 1≤maxσ∈(0,1)h(σ) (20) with (which maximizes ) such that . Let us introduce and set . Then the smallest such that the assumptions of Theorem 3.2 may be in principle fulfilled reads ~b2=(43)3/2√5π−1(1+ϵ). (21) Hence has to be chosen large enough, namely such that 1=a2(~b23−γγ−1C(γγ−1)−1ϵ1+ϵ)(γ−1)/γ, which due to (14) implies ~a2=2γC(γ)3−γ((34)3/2(5π−1)−1/21ϵγ−13−γC(γγ−1))γ−1. (22) Since limγ→1+C(γ)=√2,limγ→1+(γ−13−γC(γγ−1))γ−1=1, we have (with a fixed ) ~a2|γ→1+=√2. Then take any normalized wavefunction , and multiply (17) by and integrate over all the electronic configurations, i.e., on . Moreover, take . We get at once, E(ψ)≡⟨ψ,N∑i provided and satisfy (22) and (21), respectively. ∎ ###### Remark 4.1. In general the two integral terms in (9) are not comparable. If one takes a very rugged , normalized to , the gradient term may be very large while the other term can remain small. However, if one takes a smooth , the gradient term can be very small as we illustrate in the example below. Let us denote L(ρ)=∫R2ρ(x)3/2dx and G(ρ)=∫R2(|∇ρ(x)α|)γdx. with . We will evaluate them for the normal distribution ρ(|x|)=Ce−A|x|2 where . Some straightforward integration yields L=C3/22π3A, while, G=Cαγπ2γ(Aα)(γ/2)−1Γ(1+γ2)γ−(γ/2)−1. With , ∫R2ρ(|x|)dx=N, and we have GL=3(√2γ)γ(πN)γ/2Γ(1+γ2)(3−γ)(γ/2)−1, i.e., in the “large number of particles” limit, the term becomes negligible, for all . ## Acknowledgments It is a pleasure to dedicate this manuscript to Elliott Lieb on his eightieth birthday. The scientific achievements of Elliott Lieb have inspired generations of Mathematical Physicists. This work has been supported by the Iniciativa Cient fica Milenio, ICM (CHILE) project P07–027-F. The work of RB has also been supported by FONDECYT (Chile) Project 1100679. The work of MT has also been partially supported by the grant 201/09/0811 of the Czech Science Foundation. ## References • [1] R. D. Benguria, Density Functional Theory, in Encyclopedia of Applied and Computational mathematics (B. Engquist, et al, Eds.), Springer-Verlag, Berlin, 2013. • [2] R. D. Benguria, G. A. Bley, and M. Loss, An improved estimate on the indirect Coulomb Energy, International Journal of Quantum Chemistry 112, 1579–1584 (2012). • [3] R. D. Benguria, M. Loss, and H. Siedentop, Stability of atoms and molecules in an ultrarelativistic Thomas–Fermi–Weizsäcker model, J. Math. Phys. 49, article 012302 (2008). • [4] R. D. Benguria, P. Gallegos, and M. Tušek, New Estimate on the Two-Dimensional Indirect Coulomb Energy, Annales Henri Poincaré (2012). • [5] G. K.–L. Chan and N. C. Handy, Optimized Lieb–Oxford bound for the exchange–correlation energy, Phys. Rev. A 59, 3075–3077 (1999). • [6] P. A. M. Dirac, Note on Exchange Phenomena in the Thomas Atom, Mathematical Proceedings of the Cambridge Philosophical Society, 26, 376–385 (1930). • [7] E. Engel, Zur relativischen Verallgemeinerung des TFDW modells, Ph.D. Thesis Johann Wolfgang Goethe Universität zu Frankfurt am Main, 1987. • [8] E. Engel and R. M. 
Dreizler, Field–theoretical approach to a relativistic Thomas–Fermi–Weizsäcker model, Phys. Rev. A 35, 3607–3618 (1987). • [9] E. Engel and R. M. Dreizler, Solution of the relativistic Thomas–Fermi–Dirac–Weizsäcker model for the case of neutral atoms and positive ions, Phys. Rev. A 38, 3909–3917 (1988). • [10] C. Hainzl and R. Seiringer, Bounds on One–dimensional Exchange Energies with Applications to Lowest Landau Band Quantum Mechanics, Letters in Mathematical Physics 55, 133–142 (2001). • [11] M. Levy and J. P. Perdew, Tight bound and convexity constraint on the exchange–correlation–energy functional in the low–density limit, and other formal tests of generalized–gradient approximations, Physical Review B 48, 11638–11645 (1993). • [12] E. H. Lieb, The stability of matter, Rev. Mod. Phys. 48, 553–569 (1976). • [13] E. H. Lieb, A Lower Bound for Coulomb Energies, Physics Letters 70 A, 444–446 (1979). • [14] E. H. Lieb, Thomas–Fermi and related theories of Atoms and Molecules, Rev. Mod. Phys. 53, 603–641 (1981). • [15] E. H. Lieb and M. Loss, Analysis, Second Edition, Graduate Texts in Mathematics, vol. 14, Amer. Math. Soc., RI, 2001. • [16] E. H. Lieb and S. Oxford, Improved Lower Bound on the Indirect Coulomb Energy, International Journal of Quantum Chemistry 19, 427–439 (1981). • [17] E. H. Lieb and R. Seiringer, The Stability of Matter in Quantum Mechanics, Cambridge University Press, Cambridge, UK, 2009. • [18] E. H. Lieb, J. P. Solovej, and J. Yngvason, Ground States of Large Quantum Dots in Magnetic Fields, Physical Review B 51, 10646–10666 (1995). • [19] E. H. Lieb and W. Thirring, Bound for the Kinetic Energy of Fermions which Proves the Stability of Matter, Phys. Rev. Lett. 35, 687–689 (1975); Errata 35, 1116 (1975). • [20] J. D. Morgan III, Thomas–Fermi and other density functional theories, in Springer handbook of atomic, molecular, and optical physics, vol. 1, pp. 295–306, edited by G.W.F. Drake, Springer–Verlag, NY, 2006. • [21] P.–T. Nam, F. Portmann, and J. P. Solovej, Asymptotics for two dimensional Atoms, preprint, 2011. • [22] L. Onsager, Electrostatic Interactions of Molecules, J. Phys. Chem. 43 189–196 (1939). [Reprinted in The collected works of Lars Onsager (with commentary), World Scientific Series in 20 Century Physics, vol. 17, pp. 684–691, Edited by P.C. Hemmer, H. Holden and S. Kjelstrup Ratkje, World Scientific Pub., Singapore, 1996.] • [23] J. P. Perdew, Unified Theory of Exchange and Correlation Beyond the Local Density Approximation, in Electronic Structure of Solids ’91, pp. 11–20, edited by P. Ziesche and H. Eschrig, Akademie Verlag, Berlin, 1991. • [24] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Letts. 77, 3865–3868 (1996). • [25] E. Räsänen, S. Pittalis, K. Capelle, and C. R. Proetto, Lower bounds on the Exchange–Correlation Energy in Reduced Dimensions, Phys. Rev. Letts. 102, article 206406 (2009). • [26] E. Räsänen, M. Seidl, and P. Gori–Giorgi, Strictly correlated uniform electron droplets, Phys. Rev. B 83, article 195111 (2011). • [27] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, 1971. • [28] A. Vela, V. Medel, and S. B. Trickey, Variable Lieb–Oxford bound satisfaction in a generalized gradient exchange–correlation functional, The Journal of Chemical Physics 130, 244103 (2009).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9742133617401123, "perplexity": 585.5434093228754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401585213.82/warc/CC-MAIN-20200928041630-20200928071630-00119.warc.gz"}
https://puzzling.stackexchange.com/questions/65800/hey-wake-up-look-at-this-grid-puzzle
# Hey! Wake Up! Look At This Grid Puzzle! The grid is divided into rooms, though some are wibbly-wobbly weird-shaped. Hey, aren't these normally supposed to be blocks? What should the title of this puzzle really be? B S G N I L H I A I O L F F M T R F A R I O R R C J N E S L A T B B I Q C N A T X A R O N O V I F L O W F B U N Y H J S W R S S J U I M X E I O N T R K A W P R V Q F P O D L T R U E F B Z F I Z C N T Special thanks to @Deusovi for testsolving this! Wacky Waving Step 1: Solve the grid as a Heyawake. Overlaying this with the letter grid reveals: shift first in row by sum in rpplffct Step 2: Solving the grid again, this time as a Ripple effect. Then shifting the first row by the sum of all numbers in that row reveals: Z to Z course Step 3: Plotting a course from Z to Z gives:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9339840412139893, "perplexity": 4211.587109152038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00408.warc.gz"}
http://tex.stackexchange.com/questions/76152/beamer-definition-list-overlay-uncover-definition-later-than-entry?answertab=active
# beamer definition-list overlay: uncover definition later than entry

For educational purposes, I would like to uncover the definitions of my description items only after all the items themselves have been uncovered, like so:

\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item<1->[Spam] \onslide<3->{Eggs}
\item<2->[Cheese] \onslide<4->{Tofu}
\end{description}
\end{overprint}
\end{frame}
\end{document}

However, this has the effect that both Spam and Eggs appear only on slide 3. I would like slide 1 to show Spam (but without any content), and slide 3 to show Spam and Eggs. How can I achieve this?

An alternative would be to use a tabular environment, but I'm interested to see if it can be achieved with a description.

Something like this works:

\documentclass{beamer}
\newcommand\desctext[1]{%
  \only<+(1)>{\mbox{}}%
  \onslide<+(1)->{#1}}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item[Spam1]\desctext{Eggs1}
\item[Spam2]\desctext{Eggs2}
\item[Spam3]\desctext{Eggs3}
\item[Spam4]\desctext{Eggs4}
\end{description}
\end{overprint}
\end{frame}
\end{document}

For the ordering required (all labels first, then all descriptions), something like this can be done:

\documentclass{beamer}
\newcommand\desctext[2][]{%
  \only<+(1)->{\mbox{}}%
  \onslide<#1->{#2}}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{description}
\item[Spam1]\desctext[5]{Eggs1}% add one to the number of items
\item[Spam2]\desctext[6]{Eggs2}
\item[Spam3]\desctext[7]{Eggs3}
\item[Spam4]\desctext[8]{Eggs4}
\end{description}
\end{overprint}
\end{frame}
\end{document}

- Almost; in fact, I'd like to first have all the items appear, then all the descriptions. And I might want finer-grained control in any case. But the \mbox{} does the trick to solve the problem. – gerrit Oct 10 '12 at 17:15
- @gerrit I updated my answer with a possible definition for this new ordering; perhaps it can be simplified a little to eliminate the optional argument. – Gonzalo Medina Oct 10 '12 at 17:29

Personally I use the itemize environment to achieve your goal. The description environment seems to behave differently from the itemize and enumerate environments for "complicated" constructions.

\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{overprint}
\begin{itemize}
\item<1-> Spam: \onslide<2->{Eggs}
\end{itemize}
\end{overprint}
\end{frame}
\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594213366508484, "perplexity": 1714.3372562995555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986625.58/warc/CC-MAIN-20150728002306-00045-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-1-section-1-7-scientific-notation-exercise-set-page-89/42
## Intermediate Algebra for College Students (7th Edition)

$6 \times 10^5$

Divide the corresponding parts of $\dfrac{1.2 \times 10^4}{2 \times 10^{-2}}$. Then put the expression in the form $a \times 10^n$, where $a$ is a number that is greater than or equal to $1$ but less than $10$.

$=\dfrac{1.2}{2} \times \dfrac{10^4}{10^{-2}} \\=0.6 \times 10^{4-(-2)} \\=0.6 \times 10^{4+2} \\=0.6 \times 10^{6} \\=6 \times 10^{6-1} \\=6 \times 10^5$
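The same arithmetic can be checked in a few lines (an illustrative snippet, not part of the textbook solution):

```python
# Check: (1.2 x 10^4) / (2 x 10^-2) should equal 6 x 10^5.
result = 1.2e4 / 2e-2
print(result)           # 600000.0
print(f"{result:.1e}")  # 6.0e+05, i.e. 6 x 10^5
```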
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409254193305969, "perplexity": 179.3523456486135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593302.74/warc/CC-MAIN-20180722135607-20180722155607-00055.warc.gz"}
https://dml.cz/handle/10338.dmlcz/146752
Keywords: Itô functional difference equation; stability of solutions; admissibility of spaces

Summary: The admissibility of spaces for Itô functional difference equations is investigated by the method of modeling equations. The problem of space admissibility is closely connected with the initial data stability problem of solutions for Itô delay differential equations. For these equations the $p$-stability of initial data solutions is studied as a special case of admissibility of spaces for the corresponding Itô functional difference equation. In most cases, this approach seems to be more constructive and expedient than other traditional approaches. For certain equations sufficient conditions of solution stability are given in terms of parameters of those equations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908149778842926, "perplexity": 2898.6977350869147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538431.77/warc/CC-MAIN-20210123191721-20210123221721-00469.warc.gz"}
https://rd.springer.com/article/10.1186/1687-1847-2011-63
Advances in Difference Equations 2011, 2011:63

# Stability criteria for linear Hamiltonian dynamic systems on time scales

## Abstract

In this article, we establish some stability criteria for the polar linear Hamiltonian dynamic system on time scales by using Floquet theory and Lyapunov-type inequalities.

2000 Mathematics Subject Classification: 39A10.

### Keywords

Hamiltonian dynamic system · Lyapunov-type inequality · Floquet theory · stability · time scales

## 1 Introduction

A time scale is an arbitrary nonempty closed subset of the real numbers ℝ. We assume that $\mathbb{T}$ is a time scale. For $t \in \mathbb{T}$, the forward jump operator is defined by $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$, the backward jump operator is defined by $\rho(t) = \sup\{s \in \mathbb{T} : s < t\}$, and the graininess function is defined by $\mu(t) = \sigma(t) - t$. For other related basic concepts of time scales, we refer the reader to the original studies by Hilger [1, 2, 3], and for further details, we refer the reader to the books of Bohner and Peterson [4, 5] and Kaymakcalan et al. [6].

Definition 1.1. If there exists a positive number ω ∈ ℝ such that $t + n\omega \in \mathbb{T}$ for all $t \in \mathbb{T}$ and n ∈ ℤ, then we call $\mathbb{T}$ a periodic time scale with period ω.

Suppose $\mathbb{T}$ is an ω-periodic time scale and $0 \in \mathbb{T}$. Consider the polar linear Hamiltonian dynamic system on the time scale $\mathbb{T}$ (1.1), where α(t), β(t) and γ(t) are real-valued rd-continuous functions defined on $\mathbb{T}$. Throughout this article, we always assume that (1.2) and (1.3) hold.

For the second-order linear dynamic equation (1.4), if we let $y(t) = p(t)x^{\Delta}(t)$, then we can rewrite (1.4) as an equivalent polar linear Hamiltonian dynamic system of type (1.1): (1.5), where p(t) and q(t) are real-valued rd-continuous functions defined on $\mathbb{T}$ with p(t) > 0, and the coefficients of (1.5) are determined by p and q (the standard forms of (1.1), (1.4), and (1.5) are sketched after the reference list).

Recently, Agarwal et al. [7], Jiang and Zhou [8], Wong et al. [9] and He et al. [10] established some Lyapunov-type inequalities for dynamic equations on time scales, which generalize the corresponding results on differential and difference equations. Lyapunov-type inequalities are very useful in oscillation theory, stability, disconjugacy, eigenvalue problems and numerous other applications in the theory of differential and difference equations. In particular, the stability criteria for the polar continuous and discrete Hamiltonian systems can be obtained by Lyapunov-type inequalities and Floquet theory; see [11, 12, 13, 14, 15, 16].

In 2000, Atici et al. [17] established the following stability criterion for the second-order linear dynamic equation (1.4):

Theorem 1.2 [17]. Assume p(t) > 0 for $t \in \mathbb{T}$, and that (1.6) holds. If (1.7) and (1.8) hold, then equation (1.4) is stable, where the quantities appearing in (1.7) and (1.8) are given by (1.9). Here and in the sequel, system (1.1) or equation (1.4) is said to be unstable if all nontrivial solutions are unbounded on $\mathbb{T}$; conditionally stable if there exists a nontrivial solution which is bounded on $\mathbb{T}$; and stable if all solutions are bounded on $\mathbb{T}$.

In this article, we will use the Floquet theory in [18, 19] and the Lyapunov-type inequalities in [10] to establish two stability criteria for system (1.1) and equation (1.4). Our main results are the following two theorems.

Theorem 1.3. Suppose (1.2) and (1.3) hold and (1.10) holds. Assume that there exists a non-negative rd-continuous function θ(t) defined on $\mathbb{T}$ such that (1.11), (1.12) and (1.13) hold. Then system (1.1) is stable.

Theorem 1.4. Assume that (1.6) and (1.7) hold, and that (1.14) holds. Then equation (1.4) is stable.

Remark 1.5. Clearly, condition (1.14) improves (1.8) by removing the term $p_0$. We dwell on the following three special cases:
1. If $\mathbb{T} = \mathbb{R}$, system (1.1) takes the form (1.15). In this case, the conditions (1.12) and (1.13) of Theorem 1.3 can be transformed into (1.16) and (1.17). Condition (1.17) is the same as (3.10) in [12], but (1.11) and (1.16) are better than (3.9) in [12] by taking θ(t) = |α(t)|/β(t). A better condition than (1.17) can be found in [14, 15].

2. If $\mathbb{T} = \mathbb{Z}$, system (1.1) takes the form (1.18). In this case, the conditions (1.11), (1.12), and (1.13) of Theorem 1.3 can be transformed into (1.19), (1.20) and (1.21). Conditions (1.19), (1.20), and (1.21) are the same as (1.17), (1.18) and (1.19) in [16], i.e., Theorem 1.3 coincides with Theorem 3.4 in [16]. However, when p(n) and q(n) are ω-periodic functions defined on $\mathbb{Z}$, the stability conditions (1.22) in Theorem 1.4 are better than (1.23) in [16, Corollary 3.4]. More related results on stability for discrete linear Hamiltonian systems can be found in [20, 21, 22, 23, 24].

3. Let δ > 0 and N ∈ {2, 3, 4, ...}. Set ω = δ + N and define the time scale $\mathbb{T}$ as in (1.24). Then system (1.1) takes the form given by (1.25) and (1.26). In this case, the conditions (1.11), (1.12), and (1.13) of Theorem 1.3 can be transformed into (1.27), (1.28) and (1.29).

## 2 Proofs of theorems

Let $u(t) = (x(t), y(t))$ and $u^{\sigma}(t) = (x(\sigma(t)), y(t))$. Then we can rewrite (1.1) as a standard linear Hamiltonian dynamic system (2.1).

Let $u_1(t) = (x_{10}(t), y_{10}(t))$ and $u_2(t) = (x_{20}(t), y_{20}(t))$ be two solutions of system (1.1) with $(u_1(0), u_2(0)) = I_2$. Denote by $\Phi(t) = (u_1(t), u_2(t))$. Then Φ(t) is a fundamental matrix solution for (1.1) and satisfies $\Phi(0) = I_2$. Suppose that α(t), β(t) and γ(t) are ω-periodic functions defined on $\mathbb{T}$ (i.e. (1.10) holds); then Φ(t + ω) is also a fundamental matrix solution for (1.1) (see [18]). Therefore, it follows from the uniqueness of solutions of system (1.1) with given initial condition (see [9, 18, 19]) that (2.2) holds. From (1.1), we have (2.3). It follows that det Φ(t) = det Φ(0) = 1 for all $t \in \mathbb{T}$.

Let λ₁ and λ₂ be the roots (real or complex) of the characteristic equation of Φ(ω), which is equivalent to (2.4), where $H = \operatorname{tr}\Phi(\omega)$. Hence (2.5) holds.

Let $v_1 = (c_{11}, c_{21})$ and $v_2 = (c_{12}, c_{22})$ be the characteristic vectors associated with the characteristic roots λ₁ and λ₂ of Φ(ω), respectively, i.e. (2.6). Let $v_j(t) = \Phi(t)v_j$, j = 1, 2. Then it follows from (2.2) and (2.6) that (2.7) holds. On the other hand, it follows from (2.1) that (2.8) holds. This shows that v₁(t) and v₂(t) are two solutions of system (1.1) which satisfy (2.7). Hence, we obtain the following lemma.

Lemma 2.1. Let Φ(t) be a fundamental matrix solution for (1.1) with Φ(0) = I₂, and let λ₁ and λ₂ be the roots (real or complex) of the characteristic equation (2.4) of Φ(ω). Then system (1.1) has two solutions v₁(t) and v₂(t) which satisfy (2.7).

Similar to the continuous case, we have the following lemma.

Lemma 2.2. System (1.1) is unstable if |H| > 2, and stable if |H| < 2.

Instead of the usual zero, we adopt the following concept of a generalized zero on time scales.

Definition 2.3. A function $f : \mathbb{T} \to \mathbb{R}$ is said to have a generalized zero at $t_0 \in \mathbb{T}$ provided either $f(t_0) = 0$ or $f(t_0)f(\sigma(t_0)) < 0$.

Lemma 2.4. [4] Assume $f$ is differentiable at $t \in \mathbb{T}$. If $f^{\Delta}(t)$ exists, then $f(\sigma(t)) = f(t) + \mu(t)f^{\Delta}(t)$.

Lemma 2.5. [4] (Cauchy-Schwarz inequality). Let $a, b \in \mathbb{T}$. For $f, g \in C_{\mathrm{rd}}$ we have
$$\left|\int_a^b f(t)g(t)\,\Delta t\right| \le \left(\int_a^b f^2(t)\,\Delta t\right)^{1/2}\left(\int_a^b g^2(t)\,\Delta t\right)^{1/2}.$$
The above inequality can be an equality only if there exists a constant c such that f(t) = cg(t) on $[a, b]$.

Lemma 2.6. Let $v_1(t) = (x_1(t), y_1(t))$ and $v_2(t) = (x_2(t), y_2(t))$ be two solutions of system (1.1) which satisfy (2.7).
Assume that (1.2), (1.3) and (1.10) hold, and that there exists a non-negative function θ(t) such that (1.11) and (1.12) hold. If $H^2 \ge 4$, then both x₁(t) and x₂(t) have generalized zeros in $[0, \omega]$.

Proof. Since |H| ≥ 2, λ₁ and λ₂ are real numbers, and v₁(t) and v₂(t) are also real functions. We only prove that x₁(t) must have at least one generalized zero in $[0, \omega]$. Otherwise, we assume that x₁(t) > 0 on $[0, \omega]$, and so (2.7) implies that x₁(t) > 0 for all $t \in \mathbb{T}$. Define z(t) := y₁(t)/x₁(t). Due to (2.7), one sees that z(t) is ω-periodic, i.e. z(t + ω) = z(t) for $t \in \mathbb{T}$. From (1.1), we have (2.9). From the first equation of (1.1), and using Lemma 2.4, we have (2.10). Since x₁(t) > 0 for all $t \in \mathbb{T}$, it follows from (1.2) and (2.10) that (2.11) holds, which yields (2.12). Substituting (2.12) into (2.9), we obtain (2.13). If β(t) > 0, together with (1.11), it is easy to verify that (2.14) holds. If β(t) = 0, it follows from (1.11) that α(t) = 0, hence (2.15) holds. Combining (2.14) with (2.15), we have (2.16). Substituting (2.16) into (2.13), we obtain (2.17). Integrating equation (2.17) from 0 to ω, and noticing that z(t) is ω-periodic, we obtain an inequality which contradicts (1.12). □

Lemma 2.7. Let $v_1(t) = (x_1(t), y_1(t))$ and $v_2(t) = (x_2(t), y_2(t))$ be two solutions of system (1.1) which satisfy (2.7). Assume that (2.18), (2.19) and (2.20) hold. If $H^2 \ge 4$, then both x₁(t) and x₂(t) have generalized zeros in $[0, \omega]$.

Proof. Except for (1.12), conditions (2.18) and (2.19) imply that all assumptions in Lemma 2.6 hold. In view of the proof of Lemma 2.6, it is sufficient to derive an inequality which contradicts (2.20) instead of (1.12). From (2.11), (2.13), and (2.18), we have (2.21) and (2.22). Since z(t) is ω-periodic and γ(t) ≢ 0, it follows from (2.22) that $z^2(t) \not\equiv 0$ on $[0, \omega]$. Integrating equation (2.22) from 0 to ω, we obtain an inequality which contradicts (2.20). □

Lemma 2.8. [10] Suppose that (1.2) and (1.3) hold and let $a, b \in \mathbb{T}$ with σ(a) ≤ b. Assume (1.1) has a real solution (x(t), y(t)) such that x(t) has a generalized zero at the end-point a and $(x(b), y(b)) = (\kappa_1 x(a), \kappa_2 y(a))$ for some constants $\kappa_1, \kappa_2$, with x(t) ≢ 0 on $[a, b]$. Then one has the inequality (2.23).

Lemma 2.9. Suppose that (2.18) holds and let $a, b \in \mathbb{T}$ with σ(a) ≤ b. Assume (1.1) has a real solution (x(t), y(t)) such that x(t) has a generalized zero at the end-point a and (x(b), y(b)) = (κx(a), κy(a)) with 0 < κ² ≤ 1, and x(t) is not identically zero on $[a, b]$. Then one has the inequality (2.24).

Proof. In view of the proof of [10, Theorem 3.5] (see (3.8), (3.29)-(3.34) in [10]), we have (2.25), (2.26), (2.27) and (2.28), where ξ ∈ [0, 1), and (2.29). Let $|x(\tau^*)| = \max_{\sigma(a) \le \tau \le b} |x(\tau)|$. There are three possible cases:

(1) y(t) ≡ y(a) ≠ 0 on $[a, b]$;
(2) y(t) ≢ y(a) and |y(t)| ≡ |y(a)| on $[a, b]$;
(3) |y(t)| ≢ |y(a)| on $[a, b]$.

Case (1). In this case, κ = 1, and (2.25) and (2.26) lead to a contradiction with the assumption that x(b) = κx(a) = x(a).

Case (2). In this case, we have (2.30) instead of (2.28). Applying Lemma 2.5 and using (2.27) and (2.30), we have (2.31). Dividing the latter inequality of (2.31) by |x(τ*)|, we obtain (2.32).

Case (3). In this case, applying Lemma 2.5 and using (2.27) and (2.28), we have (2.33). Dividing the latter inequality of (2.33) by |x(τ*)|, we also obtain (2.32).

It is easy to verify the required elementary estimate; substituting it into (2.32), we obtain (2.24). □

Proof of Theorem 1.3. If |H| ≥ 2, then λ₁ and λ₂ are real numbers and λ₁λ₂ = 1; it follows that $\max\{|\lambda_1|, |\lambda_2|\} \ge 1$. Suppose $|\lambda_1| \ge 1$. By Lemma 2.6, system (1.1) has a non-zero solution $v_1(t) = (x_1(t), y_1(t))$ such that (2.7) holds and x₁(t) has a generalized zero in $[0, \omega]$, say t₁. It follows from (2.7) that $(x_1(t_1 + \omega), y_1(t_1 + \omega)) = \lambda_1(x_1(t_1), y_1(t_1))$.
Applying Lemma 2.8 to the solution (x₁(t), y₁(t)) with a = t₁, b = t₁ + ω and κ₁ = κ₂ = λ₁, we get (2.34). Next, notice that for any ω-periodic function f(t) on $\mathbb{T}$, the equality $\int_t^{t+\omega} f(s)\,\Delta s = \int_0^{\omega} f(s)\,\Delta s$ holds for all $t \in \mathbb{T}$. It then follows from (2.34) that (2.35) holds, which contradicts condition (1.13). Thus |H| < 2 and hence system (1.1) is stable. □

Proof of Theorem 1.4. By using Lemmas 2.7 and 2.9 instead of Lemmas 2.6 and 2.8, respectively, we can prove Theorem 1.4 in a fashion similar to the proof of Theorem 1.3, so we omit the proof here. □

## Notes

### Acknowledgements

The authors thank the referees for valuable comments and suggestions. This project is supported by the Scientific Research Fund of Hunan Provincial Education Department (No. 11A095) and partially supported by the NNSF (No. 11171351) of China.

### References

1. Hilger S: Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten. Ph.D. Thesis, Universität Würzburg; 1988 (in German).
2. Hilger S: Analysis on measure chains - a unified approach to continuous and discrete calculus. Results Math 1990, 18: 18-56.
3. Hilger S: Differential and difference calculus - unified. Nonlinear Anal 1997, 30: 2683-2694. 10.1016/S0362-546X(96)00204-0
4. Bohner M, Peterson A: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston; 2001.
5. Bohner M, Peterson A: Advances in Dynamic Equations on Time Scales. Birkhäuser Boston, Inc., Boston, MA; 2003.
6. Kaymakcalan B, Lakshimikantham V, Sivasundaram S: Dynamic Systems on Measure Chains. Kluwer Academic Publishers, Dordrecht; 1996.
7. Agarwal R, Bohner M, Rehak P: Half-linear dynamic equations. Nonlinear Anal Appl 2003, 1: 1-56.
8. Jiang LQ, Zhou Z: Lyapunov inequality for linear Hamiltonian systems on time scales. J Math Anal Appl 2005, 310: 579-593.
9. Wong F, Yu S, Yeh C, Lian W: Lyapunov's inequality on time scales. Appl Math Lett 2006, 19: 1293-1299. 10.1016/j.aml.2005.06.006
10. He X, Zhang Q, Tang XH: On inequalities of Lyapunov for linear Hamiltonian systems on time scales. J Math Anal Appl 2011, 381: 695-705. 10.1016/j.jmaa.2011.03.036
11. Guseinov GSh, Kaymakcalan B: Lyapunov inequalities for discrete linear Hamiltonian systems. Comput Math Appl 2003, 45: 1399-1416. 10.1016/S0898-1221(03)00095-6
12. Guseinov GSh, Zafer A: Stability criteria for linear periodic impulsive Hamiltonian systems. J Math Anal Appl 2007, 335: 1195-1206. 10.1016/j.jmaa.2007.01.095
13. Krein MG: Foundations of the theory of λ-zones of stability of a canonical system of linear differential equations with periodic coefficients. In Memory of A. A. Andronov, Izdat. Akad. Nauk SSSR, Moscow; 1955:413-498.
14. Wang X: Stability criteria for linear periodic Hamiltonian systems. J Math Anal Appl 2010, 367: 329-336. 10.1016/j.jmaa.2010.01.027
15. Tang XH, Zhang M: Lyapunov inequalities and stability for linear Hamiltonian systems. J Diff Equ 2012, 252: 358-381. 10.1016/j.jde.2011.08.002
16. Zhang Q, Tang XH: Lyapunov inequalities and stability for discrete linear Hamiltonian systems. Appl Math Comput 2011, 218: 574-582. 10.1016/j.amc.2011.05.101
17. Atici FM, Guseinov GSh, Kaymakcalan B: On Lyapunov inequality in stability theory for Hill's equation on time scales. J Inequal Appl 2000, 5: 603-620.
18. Ahlbrandt CD, Ridenhour J: Floquet theory for time scales and Putzer representations of matrix logarithms. J Diff Equ Appl 2003, 9: 77-92.
19. DaCunha JJ: Lyapunov stability and Floquet theory for nonautonomous linear dynamic systems on time scales. Ph.D. dissertation, Baylor University, Waco, Tex, USA; 2004.
20. Halanay A, Răsvan Vl: Stability and boundary value problems for discrete-time linear Hamiltonian systems. In Dyn Syst Appl. Volume 8. Edited by: Agarwal RP, Bohner M. Special Issue on "Discrete and Continuous Hamiltonian Systems"; 1999:439-459.
21. Răsvan Vl: Stability zones for discrete time Hamiltonian systems. Archivum Mathematicum Tomus 2000, 36: 563-573 (CDDE 2000 issue).
22. Răsvan Vl: Krein-type results for λ-zones of stability in the discrete-time case for 2nd order Hamiltonian systems. Folia FSN Universitatis Masarykianae Brunensis, Mathematica 2002, 10: 1-12 (CDDE 2002 issue).
23. Răsvan Vl: On central λ-stability zone for linear discrete time Hamiltonian systems. Proc. Fourth Int. Conf. on Dynamical Systems and Differential Equations, Wilmington, NC; 2002.
24. Răsvan Vl: On stability zones for discrete-time periodic linear Hamiltonian systems. Adv Diff Equ 2006, Article ID 80757, pp. 1-13. doi:10.1155/ADE/2006/80757
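The numbered displays referenced above were not preserved in this record. As a hedged aid to the reader, the following LaTeX sketch records the standard forms that (1.1), (1.4), and (1.5) take in this literature (e.g., in [10] and [17]); this is an assumption based on the cited works, not a quotation of the original equations.

```latex
% Polar linear Hamiltonian dynamic system on a time scale, the assumed form of (1.1):
\begin{align}
  x^{\Delta}(t) &= \alpha(t)\, x(\sigma(t)) + \beta(t)\, y(t), \tag{1.1a}\\
  y^{\Delta}(t) &= -\gamma(t)\, x(\sigma(t)) - \alpha(t)\, y(t). \tag{1.1b}
\end{align}
% Second-order dynamic equation, the assumed form of (1.4):
\begin{equation}
  \bigl(p(t)\, x^{\Delta}(t)\bigr)^{\Delta} + q(t)\, x(\sigma(t)) = 0. \tag{1.4}
\end{equation}
% With y(t) = p(t) x^{\Delta}(t), equation (1.4) becomes a system of type (1.1) with
% \alpha(t) = 0, \quad \beta(t) = 1/p(t), \quad \gamma(t) = q(t),
% which is the assumed content of (1.5).
```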
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856456518173218, "perplexity": 2781.6222402783064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812756.57/warc/CC-MAIN-20180219151705-20180219171705-00225.warc.gz"}
http://clay6.com/qa/25199/-p-rightarrow-q-is-equivalent-to
# $\sim(p \Rightarrow q)$ is equivalent to

$\begin{array}{ll} (A)\; p \wedge q & \quad (B)\; {\sim}p \vee q \\ (C)\; {\sim}p \wedge {\sim}q & \quad (D)\; p \wedge {\sim}q \end{array}$

| $p$ | $q$ | ${\sim}q$ | $p \Rightarrow q$ | ${\sim}(p \Rightarrow q)$ | $p \wedge {\sim}q$ |
|---|---|---|---|---|---|
| T | T | F | T | F | F |
| T | F | T | F | T | T |
| F | T | F | T | F | F |
| F | F | T | T | F | F |

Ans: (D). So $\sim(p \Rightarrow q)$ is equivalent to $p \wedge {\sim}q$.
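A quick brute-force check of this equivalence (an illustrative script, not part of the original solution):

```python
from itertools import product

# Verify that ~(p => q) has the same truth table as (p and not q).
for p, q in product([True, False], repeat=2):
    implies = (not p) or q   # material implication p => q
    lhs = not implies        # ~(p => q)
    rhs = p and (not q)      # p ^ ~q
    print(p, q, lhs, rhs, lhs == rhs)  # final column is always True
```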
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9673270583152771, "perplexity": 105.09187632248124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00169.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/mc1/chapter/7/lesson/7.1.2/problem/7-21
7-21. Use a Giant One to change each of the following fractions to a number written as a fraction over $100$. Then write each portion as a percent.

1. $\frac { 3 } { 20 }$

   $20\left(5\right) = 100$, so use $\frac{5}{5}$ as the Giant One.

2. $\frac { 3 } { 40 }$

   What do you multiply by $40$ to get $100$? Use that number in the Giant One.

   $\frac{3}{40}\cdot\frac{2.5}{2.5}=\frac{7.5}{100}=7.5\%$
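For anyone checking by computer, the same scaling can be verified with a short script (illustrative only, not part of the lesson):

```python
from fractions import Fraction

for frac in (Fraction(3, 20), Fraction(3, 40)):
    # Scale the denominator to 100, mirroring the "Giant One" step.
    scale = 100 / frac.denominator                 # 5 for 3/20, 2.5 for 3/40
    print(frac, "=", frac.numerator * scale, "/ 100 =", float(frac) * 100, "%")
```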
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695615172386169, "perplexity": 1464.627116094256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00442.warc.gz"}
https://proofwiki.org/wiki/Completion_of_Normed_Division_Ring
# Completion of Normed Division Ring

## Theorem

Let $\struct {R, \norm {\, \cdot \,} }$ be a normed division ring.

Then:

$\struct {R, \norm {\, \cdot \,} }$ has a normed division ring completion $\struct {R', \norm {\, \cdot \,}' }$.

## Proof

Let $d$ be the metric induced by $\struct {R, \norm {\, \cdot \,} }$.

Let $\mathcal C$ be the ring of Cauchy sequences over $R$.

Let $\mathcal N = \set {\sequence {x_n}: \displaystyle \lim_{n \mathop \to \infty} x_n = 0_R}$.

Let $\norm {\, \cdot \,}: \mathcal C \, \big / \mathcal N \to \R_{\ge 0}$ be the norm on the quotient ring $\mathcal C \, \big / \mathcal N$ defined by:

$\displaystyle \forall \sequence {x_n} + \mathcal N: \norm {\sequence {x_n} + \mathcal N } = \lim_{n \mathop \to \infty} \norm{x_n}$

Let $d'$ be the metric induced by $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$.

By Quotient Ring of Cauchy Sequences is Normed Division Ring, $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$ is a normed division ring.

By Quotient of Cauchy Sequences is Metric Completion, $\struct {\mathcal C \, \big / \mathcal N, d' }$ is the metric completion of $\struct {R, d}$.

Let $\phi: R \to \mathcal C \, \big / \mathcal N$ be the mapping from $R$ to the quotient ring $\mathcal C \,\big / \mathcal N$ defined by:

$\forall a \in R: \map \phi a = \tuple {a, a, a, \ldots} + \mathcal N$

where $\tuple {a, a, a, \ldots} + \mathcal N$ is the left coset in $\mathcal C \, \big / \mathcal N$ that contains the constant sequence $\tuple {a, a, a, \ldots}$.

By Quotient of Cauchy Sequences is Metric Completion, $\map \phi R$ is a dense subset of $\struct {\mathcal C \, \big / \mathcal N, d' }$.

By the definition of a normed division ring completion, $\struct {\mathcal C \, \big / \mathcal N, \norm {\, \cdot \,} }$ is a normed division ring completion of $\struct {R, \norm {\, \cdot \,} }$.

$\blacksquare$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956879019737244, "perplexity": 91.69721329983615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00253.warc.gz"}
https://cs.stackexchange.com/questions/41154/can-a-probabilistic-turing-machine-compute-an-uncomputable-number
# Can a probabilistic Turing Machine compute an uncomputable number?

Can a probabilistic Turing Machine compute an uncomputable number? My question probably does not make sense, but if that is the case, is there a reasonably simple formal explanation of why? I should add that I am pretty much ignorant of probabilistic TMs and randomized algorithms. I looked at Wikipedia, but may even have misunderstood what I read.

The reason I am asking is that only the computable numbers can have their digits enumerated by a Turing Machine. But with a probabilistic Turing Machine, I can enumerate any infinite sequence of digits, hence also sequences corresponding to non-computable numbers. Actually, since there are only countably many computable numbers, while there are uncountably many reals that can have their digits enumerated, I could say that my probabilistic Turing Machine can be made to enumerate the digits of a non-computable number with probability 1. I believe this can only be fallacious, but why? Is there a specific provision in the definition of a probabilistic TM that prevents that?

Actually, I ran into this by thinking about whether various computation models can be simulated by a deterministic TM, in the question "Are nondeterministic algorithm and randomized algorithms algorithms on a deterministic Turing machine?". Another possibly related question is "Are there any practical differences between a Turing machine with a PRNG and a probabilistic Turing machine?".

• What does it mean for a probabilistic Turing machine to compute a number? If I give you a probabilistic Turing machine, can you tell me which number it computes, if any? – Yuval Filmus Apr 8 '15 at 22:51

Consider the following reasonable definition for a Turing machine computing an irrational number in $[0,1]$.

A Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, it outputs the first $n$ digits (after the decimal) of the binary representation of $r$.

One can think of many extensions of this definition for probabilistic Turing machines. Here is a very permissive one.

A probabilistic Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, (1) it outputs the first $n$ digits of $r$ with probability $p$; (2) it outputs any other string with probability less than $p$; (3) it never halts with probability less than $p$.

Under this definition, it is not immediately clear whether everything that you can compute is indeed computable (in the sense of the first definition). However, there are some modifications that do allow us to conclude that the resulting number is computable, for example:

1. We can insist that the machine always halt.
2. We can insist that $p > 1/2$.

Other modifications are not necessarily enough. For example, does it help if we assume that the non-halting probability tends to $0$ with $n$?

Summarizing, it might depend on the model.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684816360473633, "perplexity": 296.3122715989653}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00549.warc.gz"}
http://users.umiacs.umd.edu/~hal/HBC/hbc_v0_1.html
HBC: Hierarchical Bayes Compiler

Pre-release version 0.1

HBC is a toolkit for implementing hierarchical Bayesian models. HBC was created because I felt like I spent too much time writing boilerplate code for inference problems in Bayesian models. There are several goals of HBC:

1. Allow a natural implementation of hierarchical models.
2. Enable quick and dirty debugging of models for standard data types.
3. Focus on large-dimension discrete models.
4. Be more general than simple Gibbs sampling (e.g., allowing for maximizations, EM and message passing).
5. Allow for hierarchical models to be easily embedded in larger programs.
6. Automatic Rao-Blackwellization (aka collapsing).
7. Allow efficient execution via compilation to other languages (such as C, Java, Matlab, etc.).

These goals distinguish HBC from other Bayesian modeling software, such as Bugs (or WinBugs). In particular, our primary goal is that models created in HBC can be used directly, rather than only as a first-pass test. Moreover, we aim for scalability with respect to data size. Finally, since the goal of HBC is to compile hierarchical models into standard programming languages (like C), these models can easily be used as part of a larger system. This last point is in the spirit of the dynamic programming language Dyna.

Note that some of these aren't yet supported (in particular: 4 and 6) but should be coming soon!

A Quick Example

To give a flavor of what HBC is all about, here is a complete implementation of a Bayesian mixture of Gaussians model in HBC format:

```
alpha ~ Gam(10,10)
mu_{k} ~ NorMV(vec(0.0,1,dim), 1) , k \in [1,K]
si2 ~ IG(10,10)
pi ~ DirSym(alpha, K)
z_{n} ~ Mult(pi) , n \in [1,N]
x_{n} ~ NorMV(mu_{z_{n}}, si2) , n \in [1,N]
```

If you are used to reading hierarchical models, it should be quite clear what this model does. Moreover, by keeping to a very LaTeX-like style, it is quite straightforward to automatically typeset any hierarchical model. If this file were stored in mix_gauss.hier, and if we had data for x stored in a file called X, we could run this model (with two Gaussians) directly by saying:

```
hbc simulate --loadM X x N dim --define K 2 mix_gauss.hier
```

Perhaps closer to my heart would be a six-line implementation of the Latent Dirichlet Allocation model, complete with hyperparameter estimation:

```
alpha ~ Gam(0.1,1)
eta ~ Gam(0.1,1)
beta_{k} ~ DirSym(eta, V) , k \in [1,K]
theta_{d} ~ DirSym(alpha, K) , d \in [1,D]
z_{d,n} ~ Mult(theta_{d}) , d \in [1,D] , n \in [1,N_{d}]
w_{d,n} ~ Mult(beta_{z_{d,n}}) , d \in [1,D] , n \in [1,N_{d}]
```

This code can either be run directly (e.g., by a simulate call as above) or compiled to native C code for (much) faster execution.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376076459884644, "perplexity": 3318.360808015679}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370529375.49/warc/CC-MAIN-20200405053120-20200405083120-00153.warc.gz"}
https://www.semanticscholar.org/paper/The-Weight-Distribution-of-Quasi-quadratic-Residue-Boston-Hao/e3239244301b92eb8a3187c783dc979426368806
# The Weight Distribution of Quasi-quadratic Residue Codes

@article{Boston2017TheWD, title={The Weight Distribution of Quasi-quadratic Residue Codes}, author={Nigel Boston and Jing Hao}, year={2017}, volume={12}, pages={363-385} }

• Published 18 May 2017 • Computer Science

We investigate a family of codes called quasi-quadratic residue (QQR) codes. We are interested in these codes mainly for two reasons: Firstly, they have close relations with hyperelliptic curves and Goppa's Conjecture, and serve as a strong tool in studying those objects. Secondly, they are very good codes. Computational results show they have large minimum distances when $p \equiv 3 \pmod 8$. Our studies focus on the weight distributions of these codes. We will…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515348672866821, "perplexity": 1685.8725334835024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499829.29/warc/CC-MAIN-20230130201044-20230130231044-00411.warc.gz"}
https://geoenergymath.com/2014/07/
# Correlation of time series

The Southern Oscillation embedded within the ENSO behavior is what is called a dipole [1], or in other vernacular, a standing wave. Whenever the atmospheric pressure at Tahiti is high, the pressure at Darwin is low, and vice versa. Of course the standing wave is not perfect and is far from being a classic sine wave.

To characterize the quality of the dipole, we can use a measure such as a correlation coefficient applied to the two time series. Flipping the sign of Tahiti and applying a correlation coefficient to the SOI, we get Figure 1 below:

Fig 1: Anti-correlation between Tahiti and Darwin. The sign of Tahiti is reversed to show the correlation more clearly. The correlation coefficient is calculated to be 0.55, or 55/100.

Note that this correlation coefficient is "only" 0.55 when comparing the two time series, yet the two sets of data are clearly aligned. What this tells us is that other factors, such as noise in the measurements, can easily drop correlated waveforms well below unity. This is what we have to keep in mind when evaluating correlations of data with models, as we can see in the following examples.

# Sloshing Animation

The models of ENSO for SOI and proxy records apply sloshing dynamics to describe the quasi-periodic behavior; see J. B. Frandsen, "Sloshing motions in excited tanks," Journal of Computational Physics, vol. 196, no. 1, pp. 53–87, 2004.

The following GIF animations are supplementary material from S. S. Kolukula and P. Chellapandi, "Finite Element Simulation of Dynamic Stability of plane free-surface of a liquid under vertical excitation."

Detuning Effect.gif shows the animation of sloshing fluid for the fourth test case, with frequency ratio Ω₃ = 0.5 and forcing amplitude kV = 0.2: test case 4 as shown in Figure 4. This case corresponds to instability in the second sloshing mode lying in the first instability region. Figure 8(b) shows the free-surface elevation and Figure 9 shows the moving mesh generated in this case.

Dynamic Instability.gif shows the animation of sloshing fluid for the second test case, which lies in the unstable region, with frequency ratio Ω₁ = 0.5 and forcing amplitude kV = 0.3: test case 2 as shown in Figure 4. Figure 6 shows the free-surface elevation and Figure 7 shows the moving mesh generated in this case.
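As a concrete illustration of the sign-flip-and-correlate step described above, here is a minimal sketch with synthetic stand-in data (not the actual Tahiti/Darwin series):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 500)

# Synthetic anti-correlated "dipole": one common signal with opposite signs,
# plus independent measurement noise at each station.
signal = np.sin(1.3 * t) + 0.5 * np.sin(0.4 * t)
tahiti = signal + 0.8 * rng.standard_normal(t.size)
darwin = -signal + 0.8 * rng.standard_normal(t.size)

# Flip the sign of Tahiti, then compute the Pearson correlation coefficient.
r = np.corrcoef(-tahiti, darwin)[0, 1]
print(f"correlation of -Tahiti vs Darwin: {r:.2f}")  # well below 1 due to noise
```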
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592103123664856, "perplexity": 1100.7634745820835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00261.warc.gz"}
https://www.kitronik.co.uk/blog/how-a-thermistor-works
# How A Thermistor Works

## Introduction

A thermistor is a component whose resistance changes with temperature. There are two types of thermistor: those with a resistance that increases with temperature (Positive Temperature Coefficient – PTC) and those with a resistance that falls with temperature (Negative Temperature Coefficient – NTC).

## Temperature coefficient

Most thermistors have a resistance that falls as the temperature increases (NTC). The amount by which the resistance falls for a given temperature rise is not constant; it varies with temperature. A formula can be used to calculate the resistance of the thermistor at any given temperature. Normally these values are calculated for you, and the information can be found in the device's datasheet.

## Applications

There are many applications for a thermistor. Three of the most popular are listed below.

### Temperature sensing

The most obvious application for a thermistor is to measure temperature. They are used to do this in a wide range of products, such as thermostats.

### Inrush current limiting

In this application the thermistor is used to initially oppose the flow of current (by having a high resistance) into a circuit. Then, as the thermistor warms up (due to the flow of electricity through the device), its resistance drops, letting current flow more easily.

### Circuit protection

In this application the thermistor is used to protect a circuit by limiting the amount of current that can flow into it. If too much current starts to flow into a circuit through the thermistor, this causes the thermistor to warm up. This in turn increases the resistance of the thermistor, reducing the current that can flow into the circuit.

### Example

The circuit shown below is a simple way of constructing a circuit that turns on when it gets hot. As the temperature rises, the decrease in the thermistor's resistance relative to the other, fixed resistor will cause the transistor to turn on. The value of the fixed resistor will depend on the thermistor used, the transistor used, and the supply voltage.
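The datasheet formula mentioned in the Temperature coefficient section is often the simplified Beta model (the fuller Steinhart–Hart equation is also common). A small sketch of the Beta model follows; the part values (a 10 kΩ NTC with B = 3950) are illustrative assumptions, not taken from this article:

```python
import math

def ntc_resistance(temp_c, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """NTC resistance via the Beta equation:
    R(T) = R0 * exp(B * (1/T - 1/T0)), with temperatures in kelvin."""
    t = temp_c + 273.15
    t0 = t0_c + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

for temp in (0, 25, 50, 100):
    print(f"{temp:>4} °C -> {ntc_resistance(temp):>10.0f} ohms")
```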
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221988081932068, "perplexity": 741.1282911254594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131295993.24/warc/CC-MAIN-20150323172135-00155-ip-10-168-14-71.ec2.internal.warc.gz"}
https://economics.stackexchange.com/questions/25043/augmented-gravity-model
# Augmented Gravity Model

I am currently using the gravity model for my dissertation on migration flows. Do gravity models need to be augmented by dummy variables only, or can other explanatory variables (such as the unemployment rate in the destination/origin country) be included? All your feedback is greatly appreciated.

\begin{align*} M_{ij} = &\beta_0 \times \log(g) + \beta_1 \times \log(P_i) + \beta_2 \times \log(P_j) + \beta_3 \times \log(X_i) + \\ &\beta_4 \times \log(X_j) + \beta_5 \times \log(D_{ij}) + \varepsilon_{ij} \end{align*}

where $$X_i$$ is a vector of explanatory variables describing different features of the origin (i.e. push factors) and $$X_j$$ is a vector of explanatory variables describing features of the destination (i.e. pull factors). Push factors are those characteristics of the origin place that encourage (discourage) out-migration (in-migration), such as low incomes, high unemployment, high prices; in general, few opportunities for development.
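For what it's worth, augmented gravity models routinely include continuous covariates (unemployment rates, wages, price levels) alongside dummies. A minimal sketch of estimating such a specification by OLS on synthetic data; every variable name and coefficient below is illustrative, not taken from the question:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # synthetic origin-destination pairs

df = pd.DataFrame({
    "log_pop_o":   rng.normal(15, 1, n),   # log population, origin
    "log_pop_d":   rng.normal(15, 1, n),   # log population, destination
    "log_dist":    rng.normal(7, 0.5, n),  # log bilateral distance
    "unemp_o":     rng.uniform(3, 15, n),  # unemployment rate, origin (push)
    "unemp_d":     rng.uniform(3, 15, n),  # unemployment rate, destination (pull)
    "common_lang": rng.integers(0, 2, n),  # an example dummy variable
})
# Synthetic log migration flow with known coefficients plus noise.
df["log_M"] = (1.0 * df.log_pop_o + 1.0 * df.log_pop_d - 1.5 * df.log_dist
               + 0.05 * df.unemp_o - 0.05 * df.unemp_d
               + 0.3 * df.common_lang + rng.normal(0, 0.5, n))

fit = smf.ols("log_M ~ log_pop_o + log_pop_d + log_dist + unemp_o + unemp_d + common_lang",
              data=df).fit()
print(fit.params)  # estimates should be close to the coefficients used above
```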
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995996356010437, "perplexity": 3372.7843268463694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00487.warc.gz"}
https://www.physicsforums.com/threads/convolution-of-a-dirac-delta-function.222873/
# Convolution of a dirac delta function

1. Mar 19, 2008

### pka

Alright... so I've got a question about the convolution of a dirac delta function (or unit step). I know what my final answer is supposed to be, but I cannot understand how to solve the last portion of it, which involves the convolution of a dirac/unit step function. It looks like this:

10 * Inverse Laplace of [ H(s) * e^(-5s) ], where the inverse transform of H(s) is h(t) = (1/20) * (1 - e^(-20t)).

---Note: This is what I've done to lead me to the dirac/unit step. Btw, I'm calling it the dirac/unit step function because I get the dirac delta function in my answer whereas the answer has a unit step function. So, just for clarity's sake I will call the unit step function u(t - a) and the dirac delta function d(t - a). Now, let's continue.

Saying L(s) = e^(-5s), so that its inverse Laplace, l(t) = d(t - 5). Let's also say that M(s) = H(s) * L(s). Convolution time!!! And I get m(t) = (1/20) * (1 - e^(-20t)) * Integral from 0 to t of d(tau - 5) dtau. I'm sorry about my notations, I don't know how to put in an integral sign or... any other fancies.

This is where my trouble starts. I thought the integral of a dirac delta function would be just h(t). But that's not right. In order for my answer to make any sense, the integral of d(tau - 5) should be just d(t - 5), to get a fairly simple answer of d(t - 5) * h(t).

Any help in this matter would be greatly appreciated. Links too! If I've posed a really simple question in too much writing then feel free to let me know, or if I'm thinking about this way too hard then please... also let me know. But many thanks to any advice or help anyone can offer me.

2. Mar 19, 2008

### pka

Actually... I think I've solved my problem!!!! The integral of the dirac delta should be just the unit step function... giving me what I need. And so... the convolution turns out to be m(t) = h(t) * u(t - a). So... then it's just 10 * m(t). :D Can anyone tell me if my answer is correct in its thought and all that? :D
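A quick symbolic check of the time-shift property at issue (an illustrative sympy computation, not part of the original thread): the e^(-5s) factor shifts the inverse transform in time, so the result comes out as h(t - 5) * u(t - 5) rather than h(t) * u(t - 5).

```python
from sympy import symbols, exp, inverse_laplace_transform, simplify

t, s = symbols('t s', positive=True)

# H(s) = 1/(s*(s + 20)) has inverse transform h(t) = (1/20)*(1 - exp(-20*t)),
# matching the h(t) described in the thread.
H = 1 / (s * (s + 20))
m = inverse_laplace_transform(H * exp(-5 * s), s, t)
print(simplify(m))
# Equivalent to (1/20)*(1 - exp(-20*(t - 5)))*Heaviside(t - 5),
# i.e. h(t - 5)*u(t - 5); the sifting of d(tau - 5) picks out h at t - 5.
```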
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924045741558075, "perplexity": 592.3129552650915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685993.12/warc/CC-MAIN-20170919183419-20170919203419-00071.warc.gz"}
https://link.springer.com/article/10.1007%2Fs11661-011-1045-9
Metallurgical and Materials Transactions A, Volume 43, Issue 6, pp 1845–1860

# Prediction of Inhomogeneous Distribution of Microalloy Precipitates in Continuous-Cast High-Strength, Low-Alloy Steel Slab

Suparna Roy, Sudipta Patra, S. Neogy, A. Laik, S. K. Choudhary, Debalay Chakrabarti

Article · Roy, S., Patra, S., Neogy, S., et al. Metall and Mat Trans A (2012) 43: 1845. DOI: 10.1007/s11661-011-1045-9

## Abstract

The spatial distribution in size and frequency of microalloy precipitates has been characterized in two continuous-cast high-strength, low-alloy steel slabs, one containing Nb, Ti, and V and the other containing only Ti. Microsegregation during casting resulted in an inhomogeneous distribution of Nb and Ti precipitates in as-cast slabs. A model has been proposed in this study, based on the detailed characterization of cast microalloy precipitates, for predicting the spatial distribution in size and volume fraction of precipitates. The present model considers different models, which have been proposed earlier. Microsegregation during solidification has been predicted from the model proposed by Clyne and Kurz. Homogenization of alloying elements during cooling of the cast slab has been predicted following the approach suggested by Kurz and Fisher. Thermo-Calc software predicted the thermodynamic stability and volume fraction of microalloy precipitates at interdendritic and dendritic regions. Finally, classical nucleation and growth theory of precipitation has been used to predict the size distribution of microalloy precipitates at the aforementioned regions. The accurate prediction and control over the precipitate size and fractions may help in avoiding the hot-cracking problem during casting and in selecting the processing parameters for reheating and rolling of the slabs.

## 1 Introduction

Carbide, nitride, or carbonitride precipitates formed by microalloying elements such as Nb, Ti, and V provide grain refinement and precipitation strengthening in high-strength, low-alloy (HSLA) steel.[1] Microalloy precipitates in continuous-cast slab can influence the microstructural changes taking place during subsequent processing, such as reheating and rolling, and hence need to be studied. Industrial reheating of HSLA steel is aimed at dissolving the Nb precipitates to encourage the fine-scale, strain-induced Nb(C,N) precipitation during hot rolling.[1,2] Pinning of austenite (γ) grain boundaries by the microalloy precipitates also prevents excessive γ grain growth during soaking.[1,3] The choice of reheating temperature and time, therefore, should be based on the characterization of as-cast precipitates.[1,3,4]

The nature, shape, and size of the microalloy precipitates have been widely investigated in as-cast slab as well as in thermomechanically controlled rolled (TMCR) HSLA steel plates/strips.[4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] Both macro- and microsegregation during casting may result in an inhomogeneous distribution of the precipitates.[7, 8, 9, 10, 11, 16, 17, 18, 19, 20] The formation of large (>1 μm) Nb-rich dendritic precipitates[7,8,16,17] or eutectic (Nb,Ti,V)(C,N) particles[11] in the interdendritic boundaries indicated the segregation of microalloying elements, especially Nb.
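To make the first modeling step named in the abstract concrete: the Clyne–Kurz treatment modifies the Scheil equation with a back-diffusion parameter. A minimal sketch follows; all parameter values (diffusivity, solidification time, SDAS, partition coefficient) are illustrative placeholders, not the slab data from this study:

```python
import numpy as np

def clyne_kurz_liquid_conc(fs, c0, k, alpha):
    """Liquid concentration vs. solid fraction fs per the Clyne-Kurz model:
    C_L = C0 * (1 - (1 - 2*Omega*k) * fs) ** ((k - 1) / (1 - 2*Omega*k)),
    with Omega = alpha*(1 - exp(-1/alpha)) - 0.5*exp(-1/(2*alpha))."""
    omega = alpha * (1.0 - np.exp(-1.0 / alpha)) - 0.5 * np.exp(-1.0 / (2.0 * alpha))
    expo = (k - 1.0) / (1.0 - 2.0 * omega * k)
    return c0 * (1.0 - (1.0 - 2.0 * omega * k) * fs) ** expo

# Illustrative values for Nb: nominal 0.05 wt pct, partition coefficient ~0.2,
# and alpha built from solid-state diffusivity D_s, local solidification time
# t_f, and secondary dendrite arm spacing lam (all placeholders).
ds, tf, lam = 1e-13, 600.0, 150e-6          # m^2/s, s, m
alpha = 4.0 * ds * tf / lam**2
fs = np.linspace(0.0, 0.99, 100)
c_liquid = clyne_kurz_liquid_conc(fs, c0=0.05, k=0.2, alpha=alpha)
print(f"interdendritic enrichment at fs = 0.99: {c_liquid[-1]:.2f} wt pct Nb")
```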
A higher volume fraction of Nb precipitates in pearlitic regions, which coincided with the interdendritic regions, compared with the ferritic regions, which coincided with the dendrite-center regions of TMCR steels containing 0.023 to 0.057 wt pct Nb, has also been attributed to interdendritic segregation.[18] Clustering of coarse microalloy precipitates (such as dendritic (Nb,Ti)(C,N) and TiN) in the interdendritic region of as-cast slab can lead to slab-surface cracking during continuous casting.[9,21,22] Hence, prediction of and control over precipitate size and spatial distribution is crucial for maintaining cast-slab quality.

The effect of segregation on the stability of microalloy precipitates and on the precipitate size distributions at different regions (solute-rich and solute-depleted) of as-cast slab is not well understood. Hence, a model has been proposed here, based on the detailed characterization of cast microalloy precipitates, for predicting the spatial distribution in size and volume fraction of precipitates. The present model considers different models, which have been proposed earlier. Microsegregation during solidification has been predicted from the model proposed by Clyne and Kurz.[23] Homogenization of alloying elements during cooling of the cast slab has been predicted following the approach suggested by Kurz and Fisher.[24] Thermo-Calc software predicted the thermodynamic stability and volume fraction of microalloy precipitates at interdendritic and dendritic regions. Finally, classical nucleation and growth theory of precipitation has been used to predict the size distribution of microalloy precipitates at the aforementioned regions. The predictions were verified by the measurement of the local composition and characterization of precipitates from interdendritic and dendritic regions of the as-cast slabs.

## 2 Experimental Details

Two as-continuously-cast (200-mm-thick and 1200-mm-wide) low-carbon microalloyed steel slabs have been investigated. The chemical compositions of the two slabs are given in Table I.

Table I: Chemical Compositions of the Investigated Slabs (wt pct)

| Slab | C | Si | Mn | P | S | Al | Nb | Ti | V |
|---|---|---|---|---|---|---|---|---|---|
| Slab 1 | 0.09 | 0.33 | 1.42 | 0.010 | 0.003 | 0.035 | 0.050 | 0.019 | 0.05 |
| Slab 2 | 0.07 | 0.18 | 1.20 | 0.012 | 0.005 | 0.034 | – | 0.041 | – |

Slab 1 contained the microalloying elements Nb, Ti, and V, whereas Slab 2 did not contain any Nb or V, but the concentration of Ti was twice that of Slab 1. As the investigated slabs were continuously cast commercial grades, the detailed time-temperature data during solidification were not available.

Through-thickness slices (200 mm × 100 mm × 25 mm) were cut from the midwidth location of the continuously cast slabs. All microstructural specimens were collected from the top half of the slabs, at subsurface (SS, 0 to 20 mm from the top surface), quarter-thickness (QT, 40 to 60 mm from the top surface), and midthickness (MT, 90 to 110 mm from the top surface) locations. Standard techniques have been followed for metallographic sample preparation. The microstructural characterization in terms of secondary dendritic arm spacing (SDAS); second-phase fraction; and shape, size, and distribution of coarse and fine microalloy precipitates has been carried out using a LEICA DM6000M optical microscope, fitted with Leica M.W. and Leica L.A.S.
image analysis software (Leica Microsystems GmbH, Wetzlar, Germany), as well as Zeiss EVO 60 (Carl Zeiss MicroImaging, LLC, Thornwood, NY) and JEOL 7300 model scanning electron microscopes (SEMs), fitted with Oxford-Inca PENTA FETX3 software (Oxford Instruments PLC, Abingdon, Oxfordshire, United Kingdom) for energy-dispersive X-ray spectroscopy (EDS). At least 100 precipitates have been studied from each microstructural region (interdendritic/dendrite center) of every sample for calculating the average precipitate size. Fine precipitates (<100 nm) were characterized under JEOL 2000FX and JEOL JEM-2100 model transmission electron microscopes (TEMs). Local compositions from interdendritic and dendrite center regions at the QT and MT locations of the as-cast slabs have been determined using an electron probe microanalyzer (EPMA) equipped with three wavelength-dispersive spectrometers (Cameca SX 100, CAMECA SAS, Gennevilliers Cedex, France).

## 3 Microstructure and Precipitates in As-Cast Slabs

### 3.1 Microstructure of As-Cast Slabs

The microstructures from the SS, QT, and MT locations of the as-continuously-cast slabs consisted of ferrite and pearlite (~15 to 20 pct) (Figure 1). Ferrite grain sizes and SDAS increased from the SS (Figures 1(a) and (c)) to the QT (Figures 1(b) and (d)) and to the MT locations, possibly because of the decrease in slab cooling rate. The number-averaged SDAS values, measured up to the narrow equiaxed zone (~25-mm thick) at the slab centerline in both slabs, are given in Figure 2. The mean ferrite grain sizes (measured in equivalent circle diameter) were 20 to 25 μm at the SS, 35 to 40 μm at the QT, and 55 to 60 μm at the MT.

### 3.2 Precipitates in the As-Cast Slab 1

The QT location (~50 mm below the top surface) was selected for detailed precipitate quantification and detection of local compositions, as previous studies[4,20] reported a consistent segregation profile at that location. An inhomogeneous distribution of precipitates was found on the polished surface of both investigated slabs, with precipitate-rich regions (circled in Figure 3(a)) surrounded by regions of low precipitate density (Figure 3(a)). The bright precipitates in Figure 3(a) are magnified in Figure 3(b), and the corresponding EDS analysis (Figure 3(c)) revealed them to be Nb-rich carbonitrides (either Nb(C,N) or (Nb,Ti)(C,N)). Darker constituents in Figure 3(a) were either MnS inclusions or cuboidal TiN particles (Figure 3(d)). The brighter and darker appearances of the precipitates (or inclusions) in the compositional contrast of the back-scattered electron images (Figures 3(a) and (b)) are caused by their higher or lower (average) atomic numbers, respectively, compared with the Fe matrix. The fraction of Nb-rich precipitates, MnS, and TiN was much higher in interdendritic regions (on or around the pearlite and bainite) compared with the dendrite center (ferrite) regions. The separation between subsequent precipitate-rich regions (center-to-center distance of 140 to 160 μm) at the QT location was consistent with the SDAS values measured at that location (~150 μm).
This observation indicates that interdendritic segregation was responsible for the inhomogeneous distribution of the precipitates and inclusions, with precipitate-rich regions being the interdendritic regions and precipitate-lean regions being the dendrite center regions. The shapes and sizes of the various microalloy precipitates observed in Slab 1 were as follows: (1) cuboidal TiN particles (700 to 1800 nm), (2) star- or cruciform-shaped (winged) (Nb,Ti)(C,N) precipitates (40 to 1300 nm), (3) cuboidal-shaped (Nb,Ti)(C,N) (30 to 700 nm), and (4) spherical NbC and VC (3 to 50 nm). The frequency of spherical precipitates (~75 pct) was much higher compared with cuboidal (~10 pct) and star/cruciform precipitates (~15 pct). Star/cruciform-shaped particles were predominantly observed in interdendritic regions, which can be attributed to microsegregation-induced precipitation.[4,7,16, 17, 18, 19, 20] Grouping/clustering of Nb-rich precipitates is evident in Figures 4(a) and (b). Nb precipitates were also observed in rows (Figures 3(a) and 4(c)), which can be attributed to the rejection of solute atoms from the interdendritic melts as the liquid melt front advanced during solidification.[16] An EDS line scan confirmed that the distances between such "interdendritic precipitate bands" were consistent with the SDAS. Selected area diffraction (SAD) analysis was also carried out in the TEM to identify the nature of the microalloy precipitates. For example, Figure 4(d) shows the SAD pattern for VC precipitates. Spherical VC and (Nb,V)C precipitates (<30 nm), however, were uniformly distributed throughout the ferrite matrix (Figure 4(d)).

### 3.3 Precipitates in As-Cast Slab 2

Similar to Slab 1, TiN particles in Slab 2 were present at a higher density in interdendritic regions (Figure 5(a)) compared with dendrite center regions (Figure 5(b)). The wide variation in cuboidal TiN particle sizes (30 nm to 7 μm) is reflected by the presence of both large and small particles (Figures 5(a) through (d)). A Ti peak is visible in the EDS analysis collected from the large TiN particle in Figure 5(a). A dark-field (TEM) image showing a TiN particle and the corresponding SAD analysis is shown in Figure 5(d). The inhomogeneous distribution of TiN can be attributed to the microsegregation of Ti and N.[7,8,17] AlN particles have not been found in either slab, possibly because of the presence of sufficient Ti for combining with all the N present in the steels.

### 3.4 Complex Precipitates and Microalloy Segregation at MT

The heterogeneous precipitation of (Nb,Ti)(C,N) in Slab 1 on MnS inclusions (Figure 6(a)) and of TiN in Slab 2 on Al2O3 inclusions (Figure 6(b)) can be outcomes of microsegregation.[17] Such complex particles may hamper the mechanical properties (such as ductility and low-temperature impact toughness) of the slabs.[17,21,22] Microalloy segregates as large as 10 to 15 μm of Nb-rich (Nb,Ti)(C,N) (Figures 7(a) and (b)) and Ti-rich (Nb,Ti)(C,N) (Figures 7(c) and (d)) were found at the MT location of Slab 1. The segregation of Nb-rich (Nb,Ti)(C,N) was associated with MnS inclusions (Figure 7(a)). The formation of large, eutectic (Nb,Ti)(C,N) has been reported earlier in HSLA steel,[11] which can also be attributed to the macrosegregation of Nb and Ti. However, the presence of Ti-rich (Nb,Ti)(C,N) besides Nb-rich segregates has not been reported earlier. Such constituents may also hamper the mechanical properties at the centerline of the as-cast slab.
### 3.5 Fraction of Microalloy Precipitates in the As-Cast Slabs

The number density of microalloy precipitates was ~2 to 3 times higher at interdendritic (i.e., precipitate-rich) regions compared with the dendrite center (i.e., precipitate-poor) regions in each slab (Figure 8), indicating the microsegregation-induced inhomogeneous precipitate distribution. The greater Ti content in Slab 2 possibly resulted in larger TiN particles in Slab 2 (up to ~7 μm) than in Slab 1 (up to ~1.8 μm). The number densities (number/mm²) and average sizes (nm) of microalloy precipitates measured at the SS, QT, and MT locations of each slab (Table II) indicate that precipitate densities and sizes increased from the SS toward the MT, possibly because of the following reasons: (1) the increase in solute level caused by macrosegregation and (2) the slower cooling rate toward the slab center, allowing more time for precipitate growth.

Table II Number Density (per mm²) and Average Size (nm) of Microalloy Precipitates Measured in Solute-Rich and Solute-Poor Regions at SS, QT, and MT Locations of Slab 1 and Slab 2

| Location | Region | Number Density ×10⁶ per mm², Steel 1 | Number Density ×10⁶ per mm², Steel 2 | Average Size (nm), Steel 1 | Average Size (nm), Steel 2 |
|---|---|---|---|---|---|
| SS | solute-rich | 12 | 0.6 | 35 | 70 |
| SS | solute-poor | 4 | 0.4 | 20 | 50 |
| SS | average | 7 | 0.5 | 25 | 60 |
| QT | solute-rich | 15 | 1.0 | 50 | 88 |
| QT | solute-poor | 6 | 0.5 | 30 | 45 |
| QT | average | 10 | 0.7 | 40 | 60 |
| MT | solute-rich | 18 | 2 | 70 | 105 |
| MT | solute-poor | 8 | 0.4 | 30 | 45 |
| MT | average | 13 | 0.9 | 45 | 70 |

The higher precipitate density in Slab 1 compared with Slab 2 (Figure 8, Table II) can be attributed to the presence of Nb and V, which formed numerous fine NbC, VC, and (Nb,V)C precipitates in Slab 1.

### 3.6 Measurement of Local Compositions in Interdendritic and Dendrite Center Regions

The concentrations of alloying elements (in wt pct) were measured by microanalysis of the interdendritic and dendrite center regions at the QT location of Slabs 1 and 2 using EPMA (Table III).

Table III Concentrations of Various Elements (in Wt Pct) Obtained by Microanalysis of the Interdendritic and Dendrite Center Regions at Various Locations of Slab 1 and Slab 2 Using EPMA

| Elements | C | Si | Mn | P | S | Al | Nb | Ti | V | N |
|---|---|---|---|---|---|---|---|---|---|---|
| *Measured at QT of Slab 1* | | | | | | | | | | |
| Interdendritic | 0.11 | 0.40 | 1.6 | 0.020 | 0.010 | 0.04 | 0.08 | 0.040 | 0.055 | 0.010 |
| Dendrite center | 0.07 | 0.30 | 1.2 | 0.005 | 0.001 | 0.04 | 0.02 | 0.010 | 0.045 | 0.006 |
| *Measured at QT of Slab 2* | | | | | | | | | | |
| Interdendritic | 0.11 | 0.30 | 1.4 | 0.020 | 0.010 | 0.03 | – | 0.06 | – | 0.011 |
| Dendrite center | 0.08 | 0.20 | 1.1 | 0.004 | 0.001 | 0.03 | – | 0.03 | – | 0.005 |

Wavelength-dispersive X-ray spectroscopy (WDS) was preferred in this case because of its high accuracy, especially for low-atomic-number elements. Table III clearly suggests that the interdendritic regions were solute-rich and the dendrite center regions were solute-depleted. Elements such as Nb, Ti, P, S, and Mn were clearly partitioned between the aforementioned regions. C and N levels were also higher in the interdendritic regions than in the dendrite center regions (Table III). Elements such as Si, V, and Al were distributed homogeneously throughout the matrix.

## 4 Theoretical Analysis

### 4.1 Dependence of Microsegregation on the Solidification Sequence

According to Thermo-Calc software, solidification in Slabs 1 and 2 is expected to start at around the same temperature (~1798 K [1525 °C] to 1793 K [1520 °C]) with the formation of δ ferrite (Figure 9(a)). Austenite starts to form at around 1758 K (1485 °C) (Figure 9(a)). Complete solidification is predicted at 1730 K (1457 °C) in Slab 1 and at 1718 K (1445 °C) in Slab 2.
Therefore, the freezing ranges of both slabs were similar; however, the slightly greater freezing range of Slab 2 (possibly because of its lower C level) might have promoted dendrite coarsening, which resulted in a slightly higher SDAS in Slab 2 than in Slab 1 (Figure 2).[25] During thick-slab continuous casting, the metal in contact with the water-cooled copper mold (i.e., at the SS region) solidifies as solute-depleted δ ferrite. Solidification at the SS region will generally be completed as δ ferrite because of the increased cooling rate resulting in nonequilibrium solidification.[18] Considering the subsequent δ → γ transformation and the decomposition of γ into ferrite and pearlite, greater microalloy precipitate sizes and number densities are expected in and around pearlite (or bainite) compared with the ferrite grain center regions,[18] as found experimentally. At greater depths (determined by slab composition and cooling rate) below the slab surface, the solidification sequence may change to mixed δ/γ, which will result in a different segregation behavior. The change in solidification sequence and the associated spatial distribution of the microalloy precipitates have been discussed earlier in detail.[18]

### 4.2 Microsegregation Models

Partitioning of the various alloying elements between liquid and solid phases during equilibrium solidification can also be calculated from Thermo-Calc software (Thermo-Calc Software, Stockholm, Sweden). Thermo-Calc uses the following Scheil–Gulliver model,[26] which is the simplest expression for calculating the solute redistribution in the liquid CL and in the solid CS, considering the nominal composition of the steel C0 and the weight fraction of solid fS in the solidifying volume:

$$C_{\text{L}} = C_{0} \left( 1 - f_{\text{S}} \right)^{k_{p} - 1}, \quad \text{where } k_{p} = \frac{C_{\text{S}}}{C_{\text{L}}}$$

(1)

The equilibrium partition ratios (kp) of the various alloying elements in steel are listed in Table IV[11,27,28] for the two solidification routes (L → L + δ and L → L + γ).

Table IV Partition Coefficients of Solutes During Solidification via the δ-Ferrite Route ($k_p^{\delta/\text{L}}$) and the Austenite Route ($k_p^{\gamma/\text{L}}$), and Diffusivities of Solute Elements in δ Ferrite ($D_s^{\delta}$) and in Austenite ($D_s^{\gamma}$)[11,25,29]

| Element | $k_p^{\delta/\text{L}}$ | $k_p^{\gamma/\text{L}}$ | $D_s^{\delta}$ × 10⁴ (m²/s) | $D_s^{\gamma}$ × 10⁴ (m²/s) |
|---|---|---|---|---|
| C | 0.19 | 0.34 | 0.0127 exp(–81,379/RT) | 0.15 exp(–143,511/RT) |
| Si | 0.77 | 0.52 | 8.0 exp(–248,948/RT) | 0.30 exp(–251,458/RT) |
| Mn | 0.77 | 0.79 | 0.76 exp(–224,430/RT) | 0.055 exp(–249,366/RT) |
| Al | 0.60 | 0.60 | 5.9 exp(–96,441/RT) | 5.9 exp(–241,417/RT) |
| P | 0.23 | 0.13 | 2.9 exp(–230,120/RT) | 0.01 exp(–182,841/RT) |
| S | 0.05 | 0.035 | 4.56 exp(–214,639/RT) | 2.4 exp(–223,426/RT) |
| Ti | 0.38 | 0.33 | 3.15 exp(–247,693/RT) | 0.15 exp(–250,956/RT) |
| V | 0.93 | 0.63 | 4.8 exp(–239,994/RT) | 0.284 exp(–250,956/RT) |
| Nb | 0.40 | 0.22 | 50 exp(–251,960/RT) | 0.83 exp(–266,479/RT) |
| N | 0.25 | 0.48 | 1.57 exp(–243,509/RT) | 0.91 exp(–168,490/RT) |
| O | 0.03 | 0.03 | 0.0371 exp(–96,441/RT) | 5.75 exp(–168,615/RT) |

The prediction of solute partitioning indicates that the Nb level in the last solidifying liquid (CL) can reach ~6.0 times the average Nb level (C0) in the steel (Figure 9(b)). Ti, C, and N showed CL/C0 ratios of ~3.3, ~5.0, and ~2.6, respectively.
S showed the strongest partitioning, with CL/C0 of ~20, whereas elements such as V and Al showed negligible partitioning during solidification (CL/C0 of ~1 to 1.5). This finding can explain the inhomogeneous distribution of MnS inclusions as well as of the Nb and Ti precipitates in the investigated slabs, and the nearly homogeneous distribution of the V precipitates. However, the partitioning of alloying elements in the measured concentrations in Table III (Nb level in interdendritic region : average Nb level ≈ 1.6) is smaller than that predicted by Thermo-Calc. To better predict solute partitioning during solidification, compared with the lever rule and the Scheil equation, Brody and Flemings[30] proposed the following equation:

$$C_{\text{L},i} = C_{0,i} \left[ 1 - \left( 1 - 2\alpha k_{p} \right) f_{s} \right]^{\frac{k_{p} - 1}{1 - 2\alpha k_{p}}}$$

(2)

where CL,i is the liquid concentration of a given solute i at the solid–liquid interface, C0,i is the initial liquid concentration, kp is the equilibrium partition coefficient of solute i, and fs is the solid fraction. The back-diffusion coefficient α is defined as follows:

$$\alpha = \frac{D_{\text{S}} t_{f}}{(0.5\lambda_{\text{S}})^{2}}$$

(3)

where DS is the diffusion coefficient of the solute in the solid phase (either δ ferrite or γ) in cm²s⁻¹ (Table IV), λS is the SDAS in cm, and tf is the local solidification time (seconds), which is expressed as follows:

$$t_{f} = \frac{T_{\text{L}} - T_{\text{S}}}{C_{\text{R}}}$$

(4)

where TL and TS are the liquidus and solidus temperatures of the steel (predicted using Thermo-Calc software) and CR is the average cooling rate during solidification, which can be obtained from the measured SDAS (λS) at any location of the slab using the following expression[31]:

$$\lambda_{\text{S}} = \left( 169.1 - 720.9\,C_{0,\text{C}} \right) C_{\text{R}}^{-0.4935}$$

(5)

where C0,C is the nominal C content of the steel (for C < 0.15 wt pct). Clyne and Kurz[23] replaced the back-diffusion coefficient (α) in Eq. [2] with the term Ω, which is defined as follows:

$$\Omega = \alpha \left[ 1 - \exp \left( - \frac{1}{\alpha} \right) \right] - \frac{1}{2}\exp \left( - \frac{1}{2\alpha} \right)$$

(6)

The Clyne and Kurz[23] model is suitable for predicting the microsegregation in low-C steels.[29,30] Using the measured SDAS values (Figure 2), the CR and tf values can be calculated from Eqs. [4] and [5] for the following locations of the slabs: SS (i.e., 10 mm from the top surface): CR ~4 K/s and tf ~12 seconds; QT: CR ~1 K/s and tf ~50 seconds; and MT: CR ~0.2 K/s and tf ~250 seconds. The partitioning of the microalloying elements (Nb, Ti, and V) predicted from the Clyne and Kurz model[23] at the SS and MT locations (Figure 10) shows that microsegregation becomes more severe with increasing depth below the SS. Following previous studies,[21,32] the composition in the solid corresponding to a solid fraction fs ~0.05 and the composition in the liquid corresponding to fs ~0.95 are assumed to be the compositions at the middle of the solute-depleted (dendrite center) regions and the solute-rich (interdendritic) regions, respectively.
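To make Eqs. [1] through [6] concrete, the following short Python sketch (illustrative only; it evaluates the Nb diffusivity at a single representative temperature instead of integrating over the whole solidification path, which is an assumption) computes the Nb enrichment of the interdendritic liquid at the QT location of Slab 1 from both the Scheil equation and the Clyne and Kurz correction, using kp and Ds from Table IV:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def scheil_cl(c0, kp, fs):
    """Liquid concentration from the Scheil-Gulliver model, Eq. [1]."""
    return c0 * (1.0 - fs) ** (kp - 1.0)

def clyne_kurz_cl(c0, kp, d_s, t_f, sdas_cm, fs):
    """Interfacial liquid concentration from Eqs. [2], [3], and [6],
    with Omega substituted for the back-diffusion coefficient alpha."""
    alpha = d_s * t_f / (0.5 * sdas_cm) ** 2                  # Eq. [3]
    omega = (alpha * (1.0 - np.exp(-1.0 / alpha))
             - 0.5 * np.exp(-1.0 / (2.0 * alpha)))            # Eq. [6]
    a = 1.0 - 2.0 * omega * kp
    return c0 * (1.0 - a * fs) ** ((kp - 1.0) / a)            # Eq. [2]

# Nb at the QT location of Slab 1, delta-ferrite route (values from the text):
c0_nb, kp_nb = 0.050, 0.40                 # wt pct (Table I); kp (Table IV)
T = 1750.0                                 # K, representative solidification T
d_nb = 50.0 * np.exp(-251960.0 / (R * T))  # cm^2/s (Table IV expression)
t_f, sdas = 50.0, 0.015                    # s and cm (~150 um SDAS at QT)

fs = 0.95  # middle of the solute-rich (interdendritic) region
print(f"Scheil:      C_L = {scheil_cl(c0_nb, kp_nb, fs):.3f} wt pct Nb")
print(f"Clyne-Kurz:  C_L = {clyne_kurz_cl(c0_nb, kp_nb, d_nb, t_f, sdas, fs):.3f} wt pct Nb")
```

The Scheil value (~0.30 wt pct, i.e., CL/C0 ≈ 6) reproduces the enrichment quoted above from Figure 9(b), while back-diffusion in δ ferrite roughly halves it, in line with the milder partitioning actually measured (Table III).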
The concentrations of alloying elements predicted from the Clyne and Kurz model[23] at the interdendritic and dendrite center regions at the QT location of the slabs are listed in Table V.

Table V Concentration of Alloying Elements in Interdendritic and Dendrite Center Regions at the QT Location of Slabs 1 and 2 Predicted by the Clyne and Kurz Model[23] at the End of Solidification*

| | C | Si | Mn | P | S | Al | Nb | Ti | V | N |
|---|---|---|---|---|---|---|---|---|---|---|
| *Predicted from the Clyne–Kurz model[23] for the QT location of Slab 1* | | | | | | | | | | |
| Interdendritic | 0.30 | 0.43 | 1.81 | 0.042 | 0.050 | 0.055 | 0.12 | 0.050 | 0.054 | 0.020 |
| Dendrite center | 0.02 | 0.25 | 1.08 | 0.002 | 0.001 | 0.020 | 0.020 | 0.008 | 0.047 | 0.007 |
| *Predicted from the Clyne–Kurz model[23] for the QT location of Slab 2* | | | | | | | | | | |
| Interdendritic | 0.350 | 0.233 | 1.55 | 0.051 | 0.080 | 0.05 | – | 0.100 | – | 0.023 |
| Dendrite center | 0.014 | 0.140 | 0.94 | 0.003 | 0.001 | 0.02 | – | 0.016 | – | 0.002 |
| *Predicted for Slab 1 considering homogenization during cooling[24]* | | | | | | | | | | |
| Interdendritic | 0.090 | 0.38 | 1.64 | 0.025 | 0.020 | 0.040 | 0.090 | 0.040 | 0.050 | 0.007 |
| Dendrite center | 0.090 | 0.30 | 1.20 | 0.007 | 0.002 | 0.030 | 0.033 | 0.010 | 0.050 | 0.007 |
| *Predicted for Slab 2 considering homogenization during cooling[24]* | | | | | | | | | | |
| Interdendritic | 0.07 | 0.20 | 1.38 | 0.020 | 0.030 | 0.04 | – | 0.070 | – | 0.007 |
| Dendrite center | 0.07 | 0.16 | 1.12 | 0.007 | 0.002 | 0.04 | – | 0.023 | – | 0.007 |

*Concentrations have also been predicted at those regions considering the homogenization[24] of the as-cast slabs during cooling down to ambient temperature.

The difference between the predicted and measured concentrations (Tables III and V) can be caused by the fact that the Clyne and Kurz model[23] predicts the solute partitioning during solidification without considering the homogenization taking place during the subsequent cooling of the slabs from the solidus temperature to ambient temperature.

### 4.3 Homogenization During Cooling of As-Cast Slab

The change in the concentration profile resulting from microsegregation during any homogenization treatment can be represented by the one-dimensional, time-dependent diffusion equation, and its solution can be approximated as follows[24]:

$$C(x,t) = C_{0} + \Delta C\cos \left( \frac{\pi x}{\lambda_{s}} \right)\exp \left( - \frac{t}{\tau} \right)$$

(7)

where C(x,t) is the solute concentration, after homogenization for time t at temperature T, at any point corresponding to the interdendritic or dendrite center regions. C0 is the nominal composition of the steel, ΔC is the amplitude of the initial concentration profile, which is approximated as a cosine function,[24] λs is the secondary dendritic arm spacing, x is the distance along the direction perpendicular to the secondary dendritic arms, and τ is the relaxation time, which can be expressed as follows:

$$\tau = \frac{\lambda_{s}^{2}}{\pi^{2} D_{s}}$$

(8)

Starting with the predicted compositions at the end of solidification, obtained from the Clyne and Kurz model[23] in the middle of the solute-rich (interdendritic) and solute-depleted (dendrite center) regions, and assuming that the concentration profile follows a cosine function, the change in concentration at those regions during cooling has been calculated using Eq. [7]. Equations [7] and [8] are applicable to the isothermal holding condition, and the Additivity rule[33] has been used for continuous cooling. Following the predicted solidification sequence (Figure 9(a)), as the solidification reaches completion (i.e., fs > 0.95), δ ferrite dominates the microstructure over a 5 to 10 K temperature range before δ transforms to γ.
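The relaxation-time picture of Eqs. [7] and [8] can be checked with a few lines of Python; the ~10 seconds spent in the δ field is an assumed figure (5 to 10 K at roughly the QT cooling rate), not a value from the paper:

```python
import numpy as np

R = 8.314

def residual_amplitude(sdas_cm, d_s, t):
    """Fraction of the cosine concentration amplitude left after time t:
    exp(-t/tau) with tau = sdas^2 / (pi^2 * D_s), Eqs. [7] and [8]."""
    tau = sdas_cm ** 2 / (np.pi ** 2 * d_s)
    return np.exp(-t / tau)

T = 1750.0    # K, inside the delta-ferrite field
sdas = 0.015  # cm, ~150 um at the QT location

# Diffusivities in delta ferrite from Table IV (cm^2/s):
d_c = 0.0127 * np.exp(-81379.0 / (R * T))   # interstitial C
d_nb = 50.0 * np.exp(-251960.0 / (R * T))   # substitutional Nb

t_delta = 10.0  # s in the delta field (assumed: 5 to 10 K at ~1 K/s)
for name, d in (("C", d_c), ("Nb", d_nb)):
    left = residual_amplitude(sdas, d, t_delta)
    print(f"{name}: {100.0 * (1.0 - left):.0f} pct of the segregation amplitude relaxed")
```

With these numbers the interstitial C amplitude relaxes essentially completely while roughly half of the Nb amplitude survives, which is exactly the contrast developed in the next paragraph.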
Because of the higher diffusivity of solutes in δ ferrite compared with γ (Table IV), substitutional solutes (such as Nb and Ti) are predicted to homogenize partly in δ ferrite, whereas negligible homogenization takes place within γ (Figure 11(a)). Because of their high diffusivity, interstitial elements such as C and N are expected to homogenize almost completely within the δ phase field (Figure 11(b)). The predicted concentrations of alloying elements at the interdendritic and dendrite center regions at the QT of both slabs, after the slabs cool to ambient temperature (Table V), are close to the experimentally measured values listed in Table III. Solid-state homogenization was negligible during solidification via the austenitic route (L → L + γ), which might have occurred at the slab-center location,[18] resulting in strong segregation that formed large microalloy deposits (Figure 7).

### 4.4 Thermo-Calc Prediction of Precipitate Volume Fraction

To predict the precipitate volume fraction separately in the interdendritic and dendrite center regions of the microsegregated slabs, the concentrations of alloying elements at those regions, calculated from the Clyne and Kurz model[23] (Table V), were fed into the Thermo-Calc software. Precipitates are expected to form at higher temperatures and at larger mole fractions in the solute-rich (interdendritic) regions compared with the solute-depleted (dendrite center) regions (Figure 12). TiN particles are predicted to form initially in the interdendritic liquid during solidification, followed by their precipitation in the solid state (Figures 12(a) and (b)). The Thermo-Calc prediction of the internal composition of the precipitates suggests that TiN formed predominantly at higher temperatures and converted to Ti(C,N) and then to TiC with the decrease in temperature. Nb precipitates were mainly carbides, which contained some Ti at higher temperatures. V precipitates, formed at lower temperatures in γ as well as in α, were mainly VC. Ti combined with almost all of the N present in Slab 1 to form TiN, which allowed the subsequent precipitation of fine microalloy carbides (NbC and VC). VC precipitation is expected to be least affected by the microsegregation (Figure 12(a)), which agrees with the experimental observation. Similarly, the Thermo-Calc predictions for the composition of the interdendritic liquid near the MT location indicated that a large amount of Ti- and Nb-rich particles is anticipated to form, which may explain the formation of large microalloy segregates (several micrometers in size, Figure 7). MnS inclusions were also predicted to form in the interdendritic melt toward the end of solidification and were expected to show an inhomogeneous distribution. Using the densities and molar volumes of the precipitates (and the Fe matrix) as listed in Table VI,[16] the mole fractions of the precipitates predicted in Figure 12 have been converted to the corresponding volume fractions, which were close to those measured by image analysis (Figure 13).
Table VI Structural Details and Solubility Products of Several Precipitates (and Solid Phases) Observed in the Investigated Steels*

| Precipitate (crystal structure) | Lattice Parameter (nm) | Density (g/cm³) | Molar Volume (cm³/mol) | Solubility Product log₁₀[M][X] = A – B/T* |
|---|---|---|---|---|
| TiN (fcc) | 0.4233 | 5.42 | 11.44 | 6.40 – 17,040/T (L); 0.322 – 8,000/T (γ) |
| TiC (fcc) | 0.4313 | 4.89 | 12.27 | 2.75 – 7,000/T (γ); 4.4 – 9,575/T (α) |
| NbN (fcc) | 0.4387 | 8.41 | 12.72 | 4.04 – 10,230/T (γ) |
| NbC (fcc) | 0.4462 | 7.84 | 13.39 | 2.96 – 7,510/T (γ); 5.43 – 10,960/T (α) |
| Nb(C,N) (fcc) | 0.4445 | 8.10 | 12.80 | log[Nb][C + 12/14 N] = 2.06 – 6,700/T (γ) |
| VN (fcc) | 0.4118 | 6.18 | 10.52 | 3.02 – 7,840/T (γ) |
| VC (fcc) | 0.4154 | 5.83 | 10.81 | 6.72 – 9,500/T (γ); 8.05 – 12,265/T (α) |
| γ-Fe (fcc, solid phase) | 0.357 | 8.15 | 6.85 | |
| α-Fe (bcc, solid phase) | 0.286 | 7.85 | 7.11 | |

*M = microalloying element (Nb/Ti/V); X = interstitial solute (C/N)[1,6,34, 35, 36]; fcc = face-centered cubic; bcc = body-centered cubic

### 4.5 Prediction of Precipitate Size Distribution

The sizes of oxide and sulfide inclusions and of TiN particles have been predicted in earlier studies considering the microsegregation of alloying elements during solidification.[37, 38, 39] The nucleation rate and growth rate would remain constant only in a homogeneously supersaturated metal, which is not the case under microsegregation. Hence, the nucleation and growth model has to be coupled with the microsegregation model to predict the size distributions of the different precipitates in the interdendritic and dendrite center regions.

#### 4.5.1 Supersaturation and nucleation of microalloy precipitates

The time-dependent homogeneous nucleation of spherical particles can be expressed as follows[40, 41, 42, 43]:

$$I = N_{\text{V}} Z\beta^{*} \exp \left( - \frac{\Delta G^{*}}{kT} \right)\exp \left( - \frac{\tau}{t} \right)$$

(9)

where NV is the number of nucleation sites per unit volume, ΔG* is the energy required to form a nucleus of critical size (r*), k is the Boltzmann constant, T is the absolute temperature in K, and t represents time. Expressions for calculating the Zeldovich factor Z, the frequency factor β*, and the incubation time τ are given in References 40, 41, 42, 43. Ignoring the strain energy, the critical nucleus radius r* can be expressed as follows:

$$r^{*} = - \frac{2\sigma}{\Delta G_{v}}$$

(10)

where σ is the interfacial energy of the nucleus and ΔGv is the volume free energy change during nucleation. Previous studies[32,38,39] discussed in detail the modification of Eq. [9] for predicting the TiN precipitation in liquid steel. ΔGv can be obtained from the supersaturation ratio η using the following equation:

$$\eta = \frac{[\text{wt pct Ti}][\text{wt pct N}]}{L_{\text{TiN}}}$$

(11)

where LTiN is the solubility product of TiN in liquid iron as given in Table VI. An interfacial energy σ ~0.8 J/m² can be used for TiN precipitation in the liquid.[32,39,40] From the previous equations, it is evident that the different levels of supersaturation, resulting from microsegregation in different regions of the solidifying and solidified steel, can result in different chemical driving forces (ΔGv) for precipitation between those regions. In the interdendritic region, the higher ΔGv will increase the nucleation rate I and reduce the critical nucleus size (r*).
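A rough numeric sketch of Eqs. [10] and [11] for TiN in the interdendritic liquid of Slab 2 is given below; the driving force is taken in the standard dilute-solution form ΔGv = −(RT/Vm) ln η, which the paper does not spell out, so the exact numbers are indicative only:

```python
import numpy as np

R = 8.314          # J/(mol K)
SIGMA = 0.8        # J/m^2, interfacial energy of TiN in liquid steel (from text)
VM_TIN = 11.44e-6  # m^3/mol, molar volume of TiN (Table VI)

def l_tin_liquid(T):
    """Solubility product of TiN in liquid iron, Table VI: log10 L = 6.40 - 17,040/T."""
    return 10.0 ** (6.40 - 17040.0 / T)

def critical_radius(ti, n, T):
    """Critical nucleus radius, Eq. [10], with the dilute-solution driving force
    |dG_v| = (R*T/V_m) * ln(eta) (assumed form) and eta from Eq. [11]."""
    eta = ti * n / l_tin_liquid(T)
    if eta <= 1.0:
        return np.inf  # undersaturated: no nucleation possible
    dgv = R * T * np.log(eta) / VM_TIN  # magnitude, J/m^3
    return 2.0 * SIGMA / dgv

# Interdendritic liquid at QT of Slab 2 (Table V): ~0.10 wt pct Ti, ~0.023 wt pct N.
for T in (1800.0, 1770.0, 1740.0):
    eta = 0.10 * 0.023 / l_tin_liquid(T)
    r = critical_radius(0.10, 0.023, T)
    print(f"T = {T:.0f} K: eta = {eta:.1f}, r* = {1e9 * r:.2f} nm")
```

For this composition, r* falls below the ~1 nm homogeneous-nucleation threshold near 1770 K, consistent with the interdendritic TiN start temperature quoted in the next paragraph.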
According to the present study, TiN precipitation in the interdendritic liquid starts at η ≈ 5 to 6, which agrees with previous reports.[32,39,40] The continuous increase in η for TiN in the solute-rich and solute-depleted regions at the QT location of Slab 2, with decreasing temperature, is shown in Figure 14. The η values have been calculated using the compositions determined by the Clyne and Kurz model[23] for the interdendritic (TiN-Interdendritic) and dendrite center (TiN-Dendrite-Center) regions. The influence of [O] and [S] on the interfacial energy σ and, hence, on the nucleation rate I[42] has not been considered here. Nearly complete homogenization of C and N and incomplete homogenization of Ti and Nb during slab cooling (Figure 11) may reduce the local difference in η values between the interdendritic (TiN-Int. Den.-Homogesd.) and the dendrite center (TiN-Den. Cen.-Homogesd.) regions, as shown by the red line in Figure 14(a). The effect of solid-state homogenization on the precipitation has not been considered in the existing precipitation models.[32, 33, 34,37, 38, 39, 40, 41, 42, 43] Using the predicted η values, the critical nucleus size (r*) for TiN precipitation has been calculated (Figure 14(b)). According to the literature,[2,34] homogeneous nucleation of microalloy precipitates requires r* ≤ 1 nm. The TiN precipitation start temperatures obtained from the r* criterion (~1770 K [~1497 °C] in the interdendritic region and ~1720 K [~1447 °C] in the dendrite center regions), therefore, closely match those obtained from the Thermo-Calc prediction (Figure 12(a)). The minimum separation of ~2 to 4 μm between consecutive precipitates in the precipitate "rows" (Figure 3(a), (Nb,Ti)(C,N) precipitates, and Figure 6(a), TiN particles) can be explained by the diffusion field surrounding a particular nucleus,[32,39] which reduces the supersaturation and does not allow another nucleation event within that field. As the temperature dropped in the γ-phase field, η reached a high value (>100), resulting in a nearly homogeneous distribution of the fine TiN particles throughout the microstructure. Similar calculations have been carried out for the solid-state precipitation (in γ) of NbC and VC in Slab 1. The solubility products of the microalloy precipitates (Table VI) and their precipitation kinetics during continuous cooling (without deformation) have been collected from published work.[34, 35, 36] Heterogeneous precipitation on dislocations,[39] which may be generated during the bending and straightening operations, has, however, not been considered here.

#### 4.5.2 Growth of microalloy precipitates

The diffusion-controlled growth of a single (spherical) particle of radius r over an isothermal holding time t can be obtained from the following equation[43]:

$$\frac{dr}{dt} = \frac{D_{s}}{r\alpha }\left( \frac{X_{0} - X_{I}}{X_{p} - X_{I}} \right)$$

(12)

where Ds is the diffusion coefficient of the slowest diffusing solute (such as Nb and Ti), α is the ratio of matrix to precipitate atomic volumes, X0 is the initial concentration (mole fraction) of the solute (i.e., the microalloying elements), and Xp is the concentration of solute in the precipitate.
XI is the concentration of solute in the matrix at the particle–matrix interface, which can be obtained from the equilibrium concentration of the solute (Xe) in the matrix following the Gibbs–Thomson equation.[44] The concentrations of microalloying elements at the interdendritic and dendrite center regions predicted by Thermo-Calc have been considered as the initial concentrations X0 at those regions (Table V). Xp and Xe at any temperature below the precipitate dissolution temperature can be obtained from Thermo-Calc. Following the Additivity rule,[33] Eq. [12] can be used under a continuous-cooling condition, considering the average cooling rate of the slab (at any location). Hence, it is possible to predict the precipitate growth rate and the final precipitate size at different regions of the investigated slabs. The evolution of precipitate size predicted from the proposed model, with respect to the precipitation temperature in the interdendritic and dendrite center regions of the investigated slabs, is shown in Figure 15. The precipitates that nucleate at higher temperatures are expected to grow larger in size (Figure 15), as more time is available for diffusional growth and the diffusion is faster at higher temperature. The temperature scale along the abscissa in Figure 15 can also be represented as the time from the onset of precipitation, considering the average cooling rate of the slabs. Compared with the dendrite center, larger precipitates should always form in the interdendritic regions, where the precipitation starts at a higher temperature. Among the microalloy precipitates, TiN is expected to be the largest in size, and the maximum TiN particle sizes predicted to form in the interdendritic regions of Slab 1 (~1.2 μm) and Slab 2 (~8.0 μm, Figure 15) are close to the experimentally measured values (Figure 8). Rapid growth of TiN particles above 1753 K (1480 °C) (i.e., before the formation of γ, Figure 15) can be attributed to the higher diffusivity of Ti and N in liquid steel and in δ ferrite. As the diffusivity drops with decreasing temperature, the precipitates formed at lower temperatures, such as VC in Slab 1 and TiC in Slab 2, could only reach a maximum size of 10 to 20 nm (Figure 15). At precipitation temperatures above 1700 K (1427 °C), the difference in TiN particle sizes formed in the interdendritic and dendrite center regions is more than 500 nm, which drops below 20 nm at 1400 K (1127 °C) (Figure 15(a)). This behavior demonstrates the effect of solid-state homogenization on the evolution of precipitate size in a dendritic structure. Nb(C, N) precipitates in Slab 1 are predicted to reach a size of ~350 nm, which is slightly lower than the measured value (600 nm). This could be a result of complex (Nb,Ti)(C,N) precipitation or heterogeneous nucleation of Nb(C, N) on top of preexisting TiN, which have not been considered in this model.

#### 4.5.3 Prediction of the effect of microsegregation on precipitate size distribution

Combining the nucleation rate and growth rate of the precipitates, the size distributions of the precipitates have been determined for the solute-rich (i.e., interdendritic) regions and solute-depleted (i.e., dendrite center) regions at the QT location of the as-cast slabs (Figure 16). The higher density and larger sizes of the TiN and Nb(C, N) precipitates in the solute-rich regions of the slabs are evident from Figure 16. The predicted distributions closely followed the experimentally measured values.
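The following forward-Euler sketch integrates Eq. [12] under continuous cooling via the Additivity rule. The solute mole fractions x0, xp, and xe are held constant at illustrative values (in the paper they come from Thermo-Calc at each temperature), the Gibbs–Thomson correction is dropped (XI ≈ Xe), and α is set to 1, so only the trend should be read from the output:

```python
import numpy as np

R = 8.314

def d_ti_gamma(T):
    """Diffusivity of Ti in austenite (Table IV), cm^2/s."""
    return 0.15 * np.exp(-250956.0 / (R * T))

def grow(T_nucleation, T_end, cr, x0, xp, xe, alpha=1.0, r0=1e-7):
    """Integrate dr/dt = (D_s/(r*alpha)) * (x0 - xe)/(xp - xe), Eq. [12],
    treating each small time step as isothermal (Additivity rule). r in cm."""
    r, dt, T = r0, 0.05, T_nucleation
    while T > T_end:
        drdt = d_ti_gamma(T) / (r * alpha) * (x0 - xe) / (xp - xe)
        r += max(drdt, 0.0) * dt
        T -= cr * dt  # continuous cooling at rate cr (K/s)
    return r

# Ti-bearing particle growing in gamma at the QT location (CR ~1 K/s): the
# particle nucleating at the higher temperature ends up much larger.
for T_nuc in (1700.0, 1500.0):
    r = grow(T_nuc, 1200.0, cr=1.0, x0=2e-4, xp=0.5, xe=2e-5)
    print(f"nucleated at {T_nuc:.0f} K -> final radius ~ {1e7 * r:.0f} nm")
```

Particles nucleating 200 K earlier end up several times larger, reproducing the trend of Figure 15.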
The TiN particles are expected to be the largest of all the microalloy precipitates. The maximum predicted TiN particle sizes of ~1.6 μm in Slab 1 (Figure 16(a)) and of ~8 μm in Slab 2 are close to the experimentally measured values. The maximum size of Nb(C, N) is predicted to be ~600 nm in Slab 1 (Figure 16(b)), which is smaller than the measured value (~1.5 μm). Heterogeneous precipitation of NbC on top of preexisting TiN may be the cause of the deviation. Precipitates that formed at lower temperatures, such as VC in Slab 1 and Ti(C, N) in Slab 2, could only reach a maximum size of 10 to 20 nm (Figure 16(c)), as verified by the TEM study. Continued improvement of the prediction requires the consideration of factors such as stereological correction factors in precipitate quantification, precipitation kinetics of complex precipitates, the effect of segregation of [S] and [O] on microalloy precipitation, the actual cooling curves of the slabs, and the solidification mode at different locations of the slab. In this context, it is necessary to mention that the partition coefficients (kp) and diffusivities (Ds) of any individual alloying element (e.g., i) as used in the present calculations (Table IV) are valid for a binary solution of element (i) and Fe. Similarly, the binary microsegregation model proposed by Clyne and Kurz[23] has been used here for the back-diffusion calculation to predict the segregation level of individual alloying elements. However, in multicomponent systems, such as the investigated steels, the presence of other solute elements (e.g., j, k, and l) can influence the partitioning and diffusion of element (i). To avoid complex mathematical calculations, these interaction effects have not been considered here, although they may introduce a certain error in the final prediction. Future studies need to consider this aspect for a more accurate prediction. To understand the sensitivity of the prediction to the choice of microsegregation model, the maximum precipitate sizes in Slab 1 have been predicted separately, considering the Scheil equation, the Clyne and Kurz[23] back-diffusion model, and the Lever rule. Figure 17 shows the different Nb levels predicted by these models in the interdendritic liquid of Slab 1 with increasing solid fraction. Compared with the other models, the Scheil model predicts a significantly higher Nb level in the last solidifying liquid (0.79 wt pct), which is predicted to form Nb(C,N) precipitates as large as 5 μm in the liquid steel. The largest Nb(C,N) precipitate size measured in the experimental study (600 nm) is far less than the predicted value. Similarly, the maximum TiN particle size predicted in the interdendritic liquid (25 μm) considering the Scheil model is much higher than the measured size (1.8 μm). The Scheil model, therefore, seriously overpredicts the extent of microsegregation (Table III) and the corresponding precipitate sizes in the interdendritic regions. The microsegregation levels predicted by the Clyne and Kurz model lie between those of the Scheil model and the Lever rule, and the satisfactory prediction of precipitate sizes from the Clyne and Kurz model is evident in Figure 16. The maximum precipitate sizes predicted in Slab 1 from the Lever rule (260 nm for Nb(C,N) and 960 nm for TiN) were not as good as those from the Clyne and Kurz model, but were certainly better than those of the Scheil model. The prediction of precipitate size, therefore, was dependent on the microsegregation model.
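The sensitivity just described is easy to reproduce; this sketch compares the Nb level in the interdendritic liquid from the three microsegregation models at fs = 0.99 (Ω ≈ 0.36 is an illustrative value for Nb at the QT location, computed as in the earlier sketch):

```python
def scheil(c0, kp, fs):
    # No back-diffusion in the solid, Eq. [1]
    return c0 * (1 - fs) ** (kp - 1)

def lever(c0, kp, fs):
    # Equilibrium lever rule: complete back-diffusion in the solid
    return c0 / (1 - (1 - kp) * fs)

def clyne_kurz(c0, kp, omega, fs):
    # Partial back-diffusion, Eqs. [2] and [6]
    a = 1 - 2 * omega * kp
    return c0 * (1 - a * fs) ** ((kp - 1) / a)

c0_nb, kp_nb, omega, fs = 0.050, 0.40, 0.36, 0.99
print(f"Scheil:     {scheil(c0_nb, kp_nb, fs):.2f} wt pct Nb")
print(f"Clyne-Kurz: {clyne_kurz(c0_nb, kp_nb, omega, fs):.2f} wt pct Nb")
print(f"Lever:      {lever(c0_nb, kp_nb, fs):.2f} wt pct Nb")
```

The Scheil value reaches the ~0.79 wt pct quoted above, while the Clyne and Kurz prediction sits between the two limiting models, as in Figure 17.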
The present findings are in line with the observations of Won and Thomas[31] and Choudhary and Ghosh[29] regarding the prediction of microsegregation in low-carbon steels, although future studies will compare the different models in greater detail.

## 5 Summary and Concluding Remarks

The spatial distribution in size and frequency of microalloy precipitates has been characterized using high-resolution SEM and TEM in two continuous-cast HSLA steel slabs, one containing Nb, Ti, and V and the other containing only Ti. Microsegregation during casting resulted in an inhomogeneous distribution of Nb and Ti precipitates in the as-cast slabs, and the precipitate-rich regions were separated by a distance similar to the SDAS. Large networks (several microns in size) of Nb- and Ti-rich phases were found at the segregated regions in the MT location, indicating strong microalloy segregation during solidification. Such segregation can reduce the effective microalloy level of the steel available for fine-scale precipitation during and after rolling for grain refinement and precipitation strengthening. Considering the microsegregation during solidification, the homogenization of the alloying elements during slab cooling, the thermodynamics of precipitation (using Thermo-Calc software), and the kinetics of precipitation (calculating the nucleation and growth rates of the precipitates), a model has been proposed here for predicting the precipitate size distribution and the amount of precipitates in the interdendritic and dendrite center regions of the segregated slabs. A comparison of the predicted results with the experimental data for precipitate characterization showed satisfactory prediction. Accurate prediction of, and control over, the precipitate sizes and fractions may help (1) in avoiding the hot-cracking problem and, hence, improving the slab quality, (2) in selecting the soaking time and temperature and predicting the γ grain size during soaking, and (3) in designing the rolling schedule for achieving the maximum benefit from the microalloy precipitates.

Footnotes

1. JEOL is a trademark of Japan Electron Optics Ltd., Tokyo.

## Acknowledgments

The authors would like to thank the Indian Institute of Technology Kharagpur for the provision of the ISIRD project research grant and the research facilities at the Department of Metallurgical and Materials Engineering, the Steel Technology Centre, and the Central Research Facility. They would also like to acknowledge the help provided by Mr. Sukata Mandal in the characterization using SEM, and Tata Steel, Jamshedpur for providing the research materials. The authors would also like to sincerely thank Dr. G.K. Dey and Dr. D. Srivastava from the Materials Science Division of Bhabha Atomic Research Centre, Mumbai for their constant support and encouragement in this work.

© The Minerals, Metals & Materials Society and ASM International 2012

## Authors and Affiliations

• Suparna Roy (1)
• Sudipta Patra (1)
• S. Neogy (2)
• A. Laik (2)
• S. K. Choudhary (3)
• Debalay Chakrabarti (1)

1. Department of Metallurgical and Materials Engineering, Indian Institute of Technology (I.I.T.), Kharagpur, India
2. Materials Science Division, Bhabha Atomic Research Centre, Mumbai, India
3. Research and Development, Tata Steel, Jamshedpur, India
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424169421195984, "perplexity": 4814.08477252853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608676.72/warc/CC-MAIN-20170526184113-20170526204113-00349.warc.gz"}
http://lesprobabilitesdedemain.fr/edition2017/programme.html
## Programme of the 2017 edition

8h30 -- 9h00 Welcome coffee.

9h00 -- 9h20 Introduction to the day by the organizers.

9h20 -- 10h20 Dmitry Chelkak 2D Ising model: combinatorics, CFT/CLE description at criticality and beyond

10h20 -- 10h40 Coffee break.

10h40 -- 11h00 Paul Melotti Spatial recurrences, associated models and their limit shapes Slides

11h00 -- 11h20 Thomas Budzinski Flips on triangulations of the sphere: a lower bound for the mixing time Slides

11h20 -- 11h40 Gabriela Ciolek Sharp Bernstein and Hoeffding type inequalities for regenerative Markov chains Slides

11h40 -- 12h00 Simon Coste Spectral gap of Markov matrices on graphs Slides

12h00 -- 12h20 Alkéos Michaïl Perturbations of a large matrix by random matrices Slides

12h40 -- 13h40 Lunch break

13h40 -- 14h00 Léo Miolane Fundamental limits for low-rank matrix estimation Slides

14h00 -- 14h20 Perla El Kettani A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion Slides

14h20 -- 14h40 Julie Fournier Identification and isotropy characterization of deformed random fields through excursion sets Slides

14h40 -- 15h00 Henri Elad Altman Bismut-Elworthy-Li formulas for Bessel processes Slides

15h00 -- 15h20 Mohamed Ndaoud Constructing the fractional Brownian motion Slides

15h20 -- 15h40 Marion Sciauveau Cost functionals for large random trees Slides

15h40 -- 16h00 Coffee break.

16h00 -- 16h20 Raphael Forien Gene flow across a geographical barrier Slides

16h20 -- 16h40 Veronica Miro Pina Chromosome painting Slides

16h40 -- 17h40 Remco van der Hofstad Hypercube percolation

## Abstracts of the day's talks

### 9h20--10h20 Dmitry Chelkak (Russian Academy of Sciences and ENS)

#### 2D Ising model: combinatorics, CFT/CLE description at criticality and beyond

We begin this expository talk with a discussion of the combinatorics of the nearest-neighbor Ising model in 2D - an archetypical example of a statistical physics system that admits an order-disorder phase transition - and the underlying fermionic structure, which makes it accessible to rigorous mathematical analysis. We then survey recent results on the convergence of correlation functions at the critical temperature to conformally covariant scaling limits given by Conformal Field Theory, as well as the convergence of interfaces (domain walls) to the relevant Conformal Loop Ensemble. Is the case closed? Not at all: there are still many things to understand and to prove, especially for the non-critical and/or non-homogeneous model.

### 10h40--11h00 Paul Melotti (UPMC)

#### Spatial recurrences, associated models and their limit shapes

Certain polynomial relations, such as the relations satisfied by the minors of a matrix, can be interpreted as recurrence relations on Z^3. In some cases, the solutions of these recurrences exhibit an unexpected property: they are Laurent polynomials in the initial conditions. Can this fact be given a combinatorial interpretation? We will see that once a combinatorial object hidden behind these relations is identified, it exhibits limit-shape phenomena that can be computed explicitly, the best known being the "arctic circle" of tilings of the Aztec diamond. We will discuss the so-called octahedron and cube recurrences, and a recurrence due to Kashaev.
### 11h00--11h20 Thomas Budzinski (Universite Paris-Saclay and ENS)

#### Flips on triangulations of the sphere: a lower bound for the mixing time

One of the simplest ways to sample a uniform triangulation of the sphere with a fixed number n of faces is a Monte-Carlo method: we start from an arbitrary triangulation and repeatedly flip a uniformly chosen edge, i.e. we delete it and replace it with the other diagonal of the quadrilateral that appears. We will prove a lower bound of order n^{5/4} on the mixing time of this Markov chain.

### 11h20--11h40 Gabriela Ciolek (Telecom ParisTech)

#### Sharp Bernstein and Hoeffding type inequalities for regenerative Markov chains

The purpose of this talk is to present Bernstein and Hoeffding type functional inequalities for regenerative Markov chains. Furthermore, we generalize these results and show exponential bounds for suprema of empirical processes over a class of functions F whose size is controlled by its uniform entropy number. All constants involved in the bounds of the considered inequalities are given in an explicit form, which can be advantageous in practical considerations. We present the theory for regenerative Markov chains; however, the inequalities are also valid in the Harris recurrent case.

### 11h40--12h00 Simon Coste (Universite Paris-Diderot and Universite Paul Sabatier)

#### Spectral gap of Markov matrices on graphs

The Alon-Friedman theorem states that the second eigenvalue of a random d-regular graph converges to 2sqrt(d-1) as the size of the graph tends to infinity. This difficult theorem is related to essential properties of the graph G, such as its expansion constant or the speed of convergence of the simple random walk on G. In this talk, we will present these links between the second eigenvalue and the properties of regular graphs, and then generalize these results to more general graph models, in particular directed graphs.

### 12h00--12h20 Alkéos Michaïl (Universite Paris Descartes)

#### Perturbations of a large matrix by random matrices

We provide a perturbative expansion for the empirical spectral distribution of a Hermitian matrix with large size perturbed by a random matrix with small operator norm whose entries in the eigenvector basis of the first one are independent with a variance profile. We prove that, depending on the order of magnitude of the perturbation, several regimes can appear (called perturbative and semi-perturbative regimes): the leading terms of the expansion are either related to free probability theory or to the one-dimensional Gaussian free field.

### 13h40--14h00 Léo Miolane (INRIA and ENS)

#### Phase transitions in low-rank matrix estimation (joint work with Marc Lelarge)

We consider the estimation of noisy low-rank matrices. Our goal is to compute the minimal mean square error (MMSE) for this statistical problem. We will observe a phase transition: there exists a critical value of the signal-to-noise ratio above which it is possible to make a non-trivial guess about the signal, whereas this is impossible below this critical value.

### 14h00--14h20 Perla El Kettani (Universite Paris-Sud)

#### A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion

In this talk, we study a stochastic mass conserved reaction-diffusion equation with a linear or nonlinear diffusion term and an additive noise corresponding to a Q-Brownian motion.
We prove the existence and the uniqueness of the weak solution. The proof is based upon the monotonicity method. This is joint work with D. Hilhorst and K. Lee.

### 14h20--14h40 Julie Fournier (Universite Paris-Descartes & UPMC)

#### Identification and isotropy characterization of deformed random fields through excursion sets

A deterministic map θ : R² → R² deforms the plane bijectively and regularly, and allows one to build a deformed random field X ◦ θ : R² → R from a regular, stationary and isotropic random field X : R² → R. The deformed field X ◦ θ is in general not isotropic; however, we give an explicit characterization of the deformations θ that preserve isotropy. Further assuming that X is Gaussian, we introduce a weak form of isotropy of the field X ◦ θ, defined by an invariance property of the mean Euler characteristic of some of its excursion sets. Deformed fields satisfying this property are proved to be strictly isotropic. Besides, assuming that the mean Euler characteristic of excursion sets of X ◦ θ over some basic domains is known, we are able to identify θ. Reference: hal-01495157.

### 14h40 -- 15h00 Henri Elad Altman (UPMC)

#### Bismut-Elworthy-Li formulas for Bessel processes

Bessel processes are a one-parameter family of nonnegative diffusion processes with a singular drift. When the parameter (called the dimension) is smaller than one, the drift is non-dissipative, and deriving regularity properties for the transition semigroup in such a regime is a very difficult problem in general. In my talk I will show that, nevertheless, the transition semigroups of Bessel processes of dimension between 0 and 1 satisfy a Bismut-Elworthy-Li formula, with the particularity that the martingale term is only in L^{p} for some p > 1, rather than L^{2} as in the dissipative case. As a consequence some interesting strong Feller bounds can be obtained.

### 15h00--15h20 Mohamed Ndaoud (X-CREST)

#### Constructing the fractional Brownian motion

In this talk, we give a new series expansion to simulate a fractional Brownian motion B, based on harmonic analysis of the auto-covariance function. The construction proposed here reveals a link between the Karhunen-Loève theorem and harmonic analysis for Gaussian processes with stationarity conditions. We also show some results on the convergence. In our case, the convergence holds in L2 and uniformly, with a rate-optimal decay of the norm of the remainder of the series in both senses.

### 15h20--15h40 Marion Sciauveau (Ecole des Ponts)

#### Cost functionals for large random trees

Trees appear naturally in many domains, such as computer science for data storage or biology for classifying species in phylogenetic trees. In this talk, we will be interested in the limits of additive functionals of large random trees. We will study the case of binary trees under the Catalan model (random trees chosen uniformly among the complete, rooted, ordered binary trees with a given number of nodes). We obtain an invariance principle for these functionals, as well as the associated fluctuations. The proof relies on the link between binary trees and the normalized Brownian excursion.

### 16h00--16h20 Raphael Forien (Ecole Polytechnique)

#### Gene flow across a geographical barrier

Consider a species scattered along a linear habitat. Physical obstacles can locally reduce migration and genetic exchanges between different parts of space.
Tracing the position of an individual's ancestor(s) back in time allows one to compute the expected genetic composition of such a population. These ancestral lineages behave as simple random walks on the integers outside of a bounded set around the origin. We present a continuous real-valued process which is obtained as a scaling limit of these random walks, and we give several other constructions of this process.

### 16h20 -- 16h40 Veronica Miro Pina (UPMC)

#### Chromosome painting

We consider a simple population genetics model with recombination. We assume that at time 0, all individuals of a haploid population have their unique chromosome painted in a distinct color. At rare birth events, due to recombination (modeled as a single crossing-over), the chromosome of the newborn is a mosaic of its two parental chromosomes. The partitioning process is then defined as the color partition of a sampled chromosome at time t. When t is large, all individuals end up having the same chromosome. I will discuss some results on the partitioning process at stationarity, concerning the number of colours and the description of a typical color cluster.

### 16h40--17h40 Remco van der Hofstad (Technische Universiteit Eindhoven)

#### Hypercube percolation

Consider bond percolation on the hypercube {0,1}^n at the critical probability p_c defined such that the expected cluster size equals 2^{n/3}, where 2^{n/3} acts as the cube root of the number of vertices of the n-cube. Percolation on the Hamming cube was proposed by Erdös and Spencer (1979), and has proved to be substantially harder than percolation on the complete graph. In this talk, I will describe the percolation phase transition on the hypercube, and show that it shares many features with that on the complete graph. In previous work with Borgs, Chayes, Slade and Spencer, and with Heydenreich, we have identified the subcritical and critical regimes of percolation on the hypercube. In particular, we know that for p=p_c(1+O(2^{-n/3})), the largest connected component has size roughly 2^{2n/3} and that this quantity is non-concentrated. In work with Asaf Nachmias, we identify the supercritical behavior of percolation on the hypercube by showing that, for any sequence \epsilon_n tending to zero, but with \epsilon_n much larger than 2^{-n/3}, percolation at p_c(1+\epsilon_n) has, with high probability, a unique giant component of size (2+o(1))\epsilon_n 2^n. This also confirms the validity of the proposed critical value. Finally, we `unlace' the proof by identifying the scaling of component sizes in the supercritical and critical regimes without relying on the percolation lace expansion. The lace expansion is a beautiful technique that is the major technical tool for high-dimensional percolation, but it is also quite involved and can have a disheartening effect on some.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8871912956237793, "perplexity": 3939.28936962782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825512.37/warc/CC-MAIN-20181214092734-20181214114234-00117.warc.gz"}
https://classes.areteem.org/mod/forum/discuss.php?d=299&parent=642
## Online Course Discussion Forum

### MC2A Help

Re: Thanks

Hey David. The hints I put in there had the wrong numbers, but were for the problems you were asking about (it said 16 instead of 15 and 17 instead of 16, but it is fixed now).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634132385253906, "perplexity": 1588.5881218285408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999964.8/warc/CC-MAIN-20190625233231-20190626015231-00018.warc.gz"}
https://www.physicsforums.com/threads/kinetic-theory.559123/
# Kinetic Theory

1. Dec 11, 2011

### luigihs

Can someone explain this theory to me, and how to use the equation, please? I have this in my notes but I don't understand :(

Average (translational) kinetic energy per molecule is E = 3/2 kT

The same, per mole, is U = 3/2 * R * T

2. Dec 11, 2011

### sophiecentaur

Hi. The basics of deriving this involve quite a long string of steps and come under the heading of 'bookwork'. I think you should just sit down with the book and follow it through. Else you can just accept it. If you don't have 'a book' then Wiki would be a way forward. Start with the Boltzmann distribution.

3. Dec 11, 2011

### technician

Do you recognise the experimental equation for the gas laws in the form PV = nRT, where n = number of moles? So for 1 mole the experimental law is PV = RT.

The kinetic theory leads to an expression PV = (N/3) x mc^2, where N is the number of molecules and c^2 denotes the mean square speed. If this equation is written as 2(N/3) x 0.5mc^2 it makes no difference, but it does highlight the combination 0.5mc^2, which is the average KE of the molecules.

Putting the experimental equation and the theoretical equation together leads to

RT = 2(N/3) x 0.5mc^2, or 0.5mc^2 = (3/2)TR/N

so average KE = (3/2)TR/N.

R is the gas constant and N is the number of molecules in 1 mole (Avogadro's number). The combination R/N of these constants is known as Boltzmann's constant, symbol k.

Therefore average KE = (3/2)kT.

Hope this helps
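A quick numeric check of the final result, using scipy's physical constants (the temperature here is an arbitrary illustration):

```python
# Minimal check that (3/2)kT per molecule equals (3/2)RT per mole / N_A.
from scipy.constants import k, R, N_A  # Boltzmann, gas constant, Avogadro

T = 300.0  # kelvin; arbitrary example temperature
ke_per_molecule = 1.5 * k * T   # joules
u_per_mole = 1.5 * R * T        # joules per mole

print(f"KE per molecule at {T} K: {ke_per_molecule:.3e} J")
print(f"U per mole at {T} K:      {u_per_mole:.1f} J/mol")
# R = k * N_A, so the two statements above are the same thing:
assert abs(u_per_mole - ke_per_molecule * N_A) < 1e-6
```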
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9457693099975586, "perplexity": 2446.851209166358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816841.86/warc/CC-MAIN-20180225170106-20180225190106-00273.warc.gz"}
http://math.stackexchange.com/questions/390393/finding-the-base-of-a-triangle?answertab=active
# Finding the base of a triangle

In a triangle ABC, AB = AC. D is a point inside the triangle such that AD = DC. The median on AC from D meets the median on BC from A at the centroid of the triangle. If the area of the triangle ABC equals $4\sqrt 3$, find the base, i.e. BC.

The method that I have used to solve this problem works by ending with two answers for $\frac12$ of BC and then having to check which one works by trial and error. It is more or less efficient, but I am assuming there's a better one.

- If $AD=DC$, like you said, then $\Delta ADC$ is isosceles; we also have that $\Delta ABC$ is isosceles since $AB = AC$. A centroid is the intersection of the three medians of a triangle, each joining a vertex to the midpoint of the opposite side; thus if the median from $D$ to $AC$ meets the median on $BC$ at the centroid, then we can conclude that the median from $D$ on $AC$ lies along the median from $B$ on $AC$. This median forms an angle of $90°$ with side $AC$ since $\Delta ADC$ is isosceles. Now we know that triangle $\Delta ABC$ has two medians that are also perpendicular bisectors, and it is easy to see that triangle $\Delta ABC$ is equilateral with $AB = BC = CA.$ $$[\Delta ABC]= \cfrac 12 BC^2\sin 60° = \cfrac{\sqrt 3}{4}BC^2=4\sqrt 3 \implies BC=4$$

- This is my own solution. At first, we know that triangle ABC is isosceles. We also know that triangle ADC is isosceles, so the median of that triangle bisects AC at a right angle. Note that the side opposite angle B is AC, the same as for angle D. The median from angle B, like that from angle D, also passes through the centroid. So we can infer that the medians from angles B and D are actually the same straight line. But then the median from angle B bisects AC perpendicularly, so AB = BC = AC (since the median of an isosceles triangle from the apex bisects the base perpendicularly). This proves that ABC is an equilateral triangle. Now let the median on BC bisect BC at M. Then, by the congruency criterion, triangles ABM and AMC are equal, and so the area of each of them is $2\sqrt 3$. We know $2\sqrt{3} = \frac12(2\times 2\sqrt{3})$, so MC equals either $2\sqrt{3}$ or simply $2$. Now let us use the Pythagorean theorem to see which fits. It turns out that $2$ is the only solution for MC, so $BC = 4$. That is how I solved it.

- I do not get it. Why are we getting different answers? –  user77646 May 13 '13 at 14:25
- And it is high time I started telling people to use elementary geometry to solve my problems. –  user77646 May 13 '13 at 14:27
- i made a mistake, forgot the $\frac 12$ in the area of the triangle –  user31280 May 13 '13 at 15:09
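The final step in both answers is just arithmetic on the equilateral-triangle area formula; here is a two-line check (illustrative, not part of the original answers):

```python
# Area of an equilateral triangle: (sqrt(3)/4) * side**2 = 4*sqrt(3)
from math import sqrt

area = 4 * sqrt(3)
BC = sqrt(4 * area / sqrt(3))  # invert the area formula for the side
print(BC)  # 4.0
```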
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602029085159302, "perplexity": 686.3373846059917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097204.8/warc/CC-MAIN-20150627031817-00033-ip-10-179-60-89.ec2.internal.warc.gz"}
http://quant.stackexchange.com/questions/8478/close-form-for-stochastic-integral?answertab=active
# Closed form for a stochastic integral

I am new to stochastic calculus. Could someone show me how to compute the closed-form solution for $$\int_0^t \exp(\alpha s - \sigma W_s) \; ds$$ and $$\int_0^t \exp(\alpha s - \sigma W_s) \; dW_s.$$ I encountered these when trying to solve the following SDE: $$dX_t = \theta(\mu - X_t)\; dt + \sigma X_t \; dW_t$$

- If the SDE is written correctly, that is not an Ornstein-Uhlenbeck process and your integrals don't seem to match it either. An O-U process has additive noise (i.e., the diffusion function is not a function of the state variable), while the SDE as written has multiplicative noise. Also, an O-U process definitely does have a known analytical solution (see Doob, Ann. Math. 43, 1942). –  horchler Jul 16 '13 at 18:29
- @n.c. Your comment isn't accurate unfortunately. As "horchler" pointed out, the Ornstein-Uhlenbeck process does NOT have multiplicative noise, unlike the process posted in this question. To appropriately solve this SDE, consider applying Ito's Lemma on $Y_t = \ln(X_t)$ –  Mariam Aug 21 '13 at 16:23
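While the question asks for closed forms, a quick way to sanity-check any candidate solution of this SDE is simulation. Below is a minimal Euler–Maruyama sketch; all parameter values are arbitrary assumptions chosen for illustration:

```python
# Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma*X dW.
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma = 1.0, 2.0, 0.3   # illustrative parameters
T, n_steps, n_paths = 1.0, 1000, 10000
dt = T / n_steps

X = np.full(n_paths, 1.0)          # X_0 = 1
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += theta * (mu - X) * dt + sigma * X * dW

# The mean m(t) = E[X_t] obeys the ODE m' = theta*(mu - m), so
# m(T) = mu + (X_0 - mu)*exp(-theta*T); compare with the sample mean.
print(X.mean(), mu + (1.0 - mu) * np.exp(-theta * T))
```

The simulated mean should match the ODE prediction up to discretization and Monte Carlo error; Mariam's hint (Itô's lemma on ln X) is the route to the pathwise solution of the multiplicative-noise part.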
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9555674195289612, "perplexity": 599.0575129486875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445190.43/warc/CC-MAIN-20141017005725-00122-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/2x2-matrix-a-has-only-one-eigenvalue-l-with-eigenvector-v.374163/
# 2x2 matrix A has only one eigenvalue λ with eigenvector v

1. Jan 31, 2010

### nlews

This is a revision problem I have come across. I have completed the first few parts of it, but this is the last section; it seems entirely unrelated to the rest of the problem, and I can't get my head around it!

Suppose that the 2x2 matrix A has only one eigenvalue λ with eigenvector v, and that w is a non-zero vector which is not an eigenvector. Show that:

a) v and w are linearly independent

b) the matrix of A with respect to the basis {v, w} is $$\begin{pmatrix} \lambda & c \\ 0 & \lambda \end{pmatrix}$$ for some $c \neq 0$

c) for a suitable choice of w, c = 1

I am stuck. I know how to show that eigenvectors for distinct eigenvalues are linearly independent, but how do I show that these two vectors are linearly independent of each other?

Last edited: Jan 31, 2010

2. Jan 31, 2010

### tiny-tim

If v and w are linearly dependent, then w is a multiple of v, so obviously w is also an eigenvector. Get some sleep! :zzz:​

3. Jan 31, 2010

### nlews

Re: Eigenvalues/vectors

ahh ok..so I can prove by contradiction! thank you that helps massively for part a!
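A concrete instance of parts (a)–(c), with an illustrative defective matrix chosen for this sketch (not from the thread): A = [[3, 1], [-1, 1]] has the single eigenvalue λ = 2 with eigenvector v = (1, -1); taking w = (1, 0) gives exactly the triangular form, here with c = 1.

```python
# Check the change of basis P = [v | w] for a defective 2x2 matrix.
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])   # char. poly (t-2)^2: one eigenvalue, 2
v = np.array([1.0, -1.0])     # eigenvector: (A - 2I) v = 0
w = np.array([1.0, 0.0])      # any non-eigenvector works

P = np.column_stack([v, w])   # basis-change matrix; invertible iff
print(np.linalg.det(P))       # v, w are independent (det = 1 here)
print(np.linalg.inv(P) @ A @ P)  # -> [[2, 1], [0, 2]]
```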
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137975573539734, "perplexity": 714.0154865082619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00335.warc.gz"}
http://math.stackexchange.com/questions/11601/proof-that-a-combination-is-an-integer
# Proof that a Combination is an integer

From its definition, a combination $\binom{n}{k}$ is the number of distinct subsets of size k from a set of n elements. This is clearly an integer; however, I was curious as to why the equation $\frac{n!}{k!(n-k)!}$ always evaluates to an integer. So far I figured: $n!$ is clearly divisible by $k!$ and by $(n-k)!$ individually, but I could not seem to make the jump to a proof that $n!$ is divisible by their product.

- You answered it in your first sentence. One way to show that something is an integer is to show that it counts something. So I guess you want a non-counting proof. –  Jonas Meyer Nov 23 '10 at 23:03
- @Jonas the fact that $nCr$ relates to Pascal's Triangle is another answer. I wouldn't call it a proof though. –  Cole Johnson Jan 7 at 19:37

See my post here for a simple purely arithmetical proof that every binomial coefficient is an integer. The proof shows how to rewrite any binomial coefficient fraction as a product of fractions whose denominators are all coprime to any given prime $\rm\:p.\,$ This implies that no primes divide the denominator (when written in lowest terms), therefore the fraction is an integer.

The key property that lies at the heart of this proof is that, among all products of $\rm\, n\,$ consecutive integers, $\rm\ n!\$ has the least possible power of $\rm\,p\,$ dividing it - for every prime $\rm\,p.\,$ Thus $\rm\ n!\$ divides every product of $\rm\:n\:$ consecutive integers, since it has a smaller power of every prime divisor. Therefore $$\rm\displaystyle\quad\quad {m \choose n}\ =\ \frac{m!/(m-n)!}{n!}\ =\ \frac{m\:(m-1)\:\cdots\:(m-n+1)}{\!\!n\:(n-1)\ \cdots\:\phantom{m-n}1\phantom{+1}}\ \in\ \mathbb Z$$

- Thanks, and sorry for the late reply/upvote :) –  Akusete Sep 3 '12 at 7:45

Well, one noncombinatorial way is to induct on $n$ using Pascal's triangle; that is, using the fact that ${n \choose k} = {n-1 \choose k - 1} + {n-1 \choose k}$ (easy to verify directly) and that each ${n - 1 \choose 0}$ is just $1$.

As Jonas mentioned, it counts something, so it has to be a natural number. Another way is to notice that the product of $m$ consecutive natural numbers is divisible by $m!$. (Prove this!) So if we write $n! = n(n-1)(n-2) \cdots (k+1) \times (k!)$, we find that $k!$ divides $k!$, and $n(n-1)(n-2) \cdots (k+1)$ is a product of $(n-k)$ consecutive natural numbers, hence $(n-k)!$ divides it.

- I haven't thought too hard about this, but does there exist a direct proof (that does not rely on induction) of the fact that the product of $m$ consecutive numbers is divisible by $m!$? –  Vladimir Sotirov Nov 23 '10 at 23:38
- @Vladimir: You could prove that each prime factor of the numerator appears to a higher power than in the denominator. You can look at Bill's proof, which he has posted in the next post. –  user17762 Nov 24 '10 at 0:07
- @Vladimir: Generally, any proof (in Peano arithmetic) that some property is true for all integers must use induction. It may not explicitly invoke induction, e.g. the induction might be hidden way down some chain of lemmas. So it's not clear what it means for such a proof to "not rely on induction". –  Bill Dubuque Nov 24 '10 at 0:16
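Pascal's rule from the induction answer also gives a way to *compute* binomial coefficients using only integer additions, which makes integrality visible directly (a sketch; `math.comb` is used only as a cross-check):

```python
# Build C(n, k) by Pascal's rule C(n,k) = C(n-1,k-1) + C(n-1,k):
# only integer additions occur, so the result is visibly an integer.
import math

def binom(n, k):
    row = [1]  # row 0 of Pascal's triangle
    for _ in range(n):
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row[k]

assert binom(10, 4) == math.comb(10, 4) == 210
print(binom(10, 4))
```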
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467492699623108, "perplexity": 218.01432604879443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997901589.57/warc/CC-MAIN-20140722025821-00074-ip-10-33-131-23.ec2.internal.warc.gz"}
http://cms.math.ca/cmb/kw/Polarized%20manifold
On the nonemptiness of the adjoint linear system of polarized manifold

Let $(X,L)$ be a polarized manifold over the complex number field with $\dim X=n$. In this paper, we consider a conjecture of M. C. Beltrametti and A. J. Sommese and we obtain that this conjecture is true if $n=3$ and $h^{0}(L)\geq 2$, or $\dim \Bs |L|\leq 0$ for any $n\geq 3$. Moreover we can generalize the result of Sommese.

Keywords: polarized manifold, adjoint bundle
Categories: 14C20, 14J99
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892733693122864, "perplexity": 209.3208640481377}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021542591/warc/CC-MAIN-20140305121222-00056-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.clutchprep.com/chemistry/practice-problems/103744/carbon-disulfide-a-poisonous-flammable-liquid-is-an-excellent-solvent-for-phosph-1
# Problem: Carbon disulfide, a poisonous, flammable liquid, is an excellent solvent for phosphorus, sulfur, and some other nonmetals. A kinetic study of its gaseous decomposition gave these data: [data table not captured in this copy]. Calculate the average value of the rate constant.
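The data table did not survive extraction, so the numbers below are synthetic, generated from an assumed rate constant purely to illustrate the averaging procedure; treating the decomposition as first order is an assumption about the intended problem, not a fact taken from it.

```python
# Sketch: average a first-order rate constant from (t, [A]) pairs.
# The data are SYNTHETIC (generated from k_true below), since the
# original table was lost; only the averaging procedure matters.
import math

k_true = 2.8e-7          # s^-1, made-up value for the demo
A0 = 1.00                # initial concentration (arbitrary units)
times = [3600, 7200, 10800, 14400]                    # s
concs = [A0 * math.exp(-k_true * t) for t in times]

# First-order integrated rate law: k = ln(A0/[A]) / t
ks = [math.log(A0 / a) / t for t, a in zip(times, concs)]
k_avg = sum(ks) / len(ks)
print("k values:", ["%.3e" % k for k in ks])
print(f"average k = {k_avg:.3e} s^-1")   # recovers k_true here
```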
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9393371343612671, "perplexity": 4875.477317293462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906214.53/warc/CC-MAIN-20200710050953-20200710080953-00326.warc.gz"}
https://chem.libretexts.org/Under_Construction/Purgatory/Book%3A_Analytical_Chemistry_2.0_(Harvey)/12_Chromatographic_and_Electrophoretic_Methods/12.7%3A_Electrophoresis
# 12.7: Electrophoresis Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of larger charge and of smaller size—migrate at a faster rate than larger cations with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary. There are several forms of electrophoresis. In slab gel electrophoresis the conducting buffer is retained within a porous gel of agarose or polyacrylamide. Slabs are formed by pouring the gel between two glass plates separated by spacers. Typical thicknesses are 0.25–1 mm. Gel electrophoresis is an important technique in biochemistry where it is frequently used for separating DNA fragments and proteins. Although it is a powerful tool for the qualitative analysis of complex mixtures, it is less useful for quantitative work. In capillary electrophoresis, the conducting buffer is retained within a capillary tube whose inner diameter is typically 25–75 μm. Samples are injected into one end of the capillary tube. As the sample migrates through the capillary its components separate and elute from the column at different times. The resulting electropherogram looks similar to a GC or an HPLC chromatogram, providing both qualitative and quantitative information. Only capillary electrophoretic methods receive further consideration in this section. As we will see shortly, under normal conditions even neutral species and anions migrate toward the cathode. ## 12.7.1 Theory of Capillary Electrophoresis In capillary electrophoresis we inject the sample into a buffered solution retained within a capillary tube. When an electric field is applied across the capillary tube, the sample’s components migrate as the result of two types of action: electrophoretic mobility and electroosmotic mobility. Electrophoretic mobility is the solute’s response to the applied electrical field. As described earlier, cations move toward the negatively charged cathode, anions move toward the positively charged anode, and neutral species remain stationary. The other contribution to a solute’s migration is electroosmotic flow, which occurs when the buffer moves through the capillary in response to the applied electrical field. Under normal conditions the buffer moves toward the cathode, sweeping most solutes, including the anions and neutral species, toward the negatively charged cathode. ### Electrophoretic Mobility The velocity with which a solute moves in response to the applied electric field is called its electrophoretic velocity, νep; it is defined as $ν_\ce{ep}= \mu_\ce{ep}E \label{12.34}$ where μep is the solute’s electrophoretic mobility, and E is the magnitude of the applied electrical field. A solute’s electrophoretic mobility is defined as $\mu_\ce{ep} = \dfrac{q}{6\pi ηr } \label{12.35}$ where • q is the solute’s charge, • η is the buffer viscosity, and • r is the solute’s radius. Using Equation \ref{12.34} and Equation \ref{12.35} we can make several important conclusions about a solute’s electrophoretic velocity. Electrophoretic mobility and, therefore, electrophoretic velocity, increases for more highly charged solutes and for solutes of smaller size. 
Because q is positive for a cation and negative for an anion, these species migrate in opposite directions. Neutral species, for which q is zero, have an electrophoretic velocity of zero. ### Electroosmotic Mobility When an electrical field is applied to a capillary filled with an aqueous buffer we expect the buffer’s ions to migrate in response to their electrophoretic mobility. Because the solvent, H2O, is neutral we might reasonably expect it to remain stationary. What we observe under normal conditions, however, is that the buffer solution moves towards the cathode. This phenomenon is called the electroosmotic flow. Electroosmotic flow occurs because the walls of the capillary tubing are electrically charged. The surface of a silica capillary contains large numbers of silanol groups (–SiOH). At pH levels greater than approximately 2 or 3, the silanol groups ionize to form negatively charged silanate ions (–SiO). Cations from the buffer are attracted to the silanate ions. As shown in Figure 12.56, some of these cations bind tightly to the silanate ions, forming a fixed layer. Because the cations in the fixed layer only partially neutralize the negative charge on the capillary walls, the solution adjacent to the fixed layer—what we call the diffuse layer—contains more cations than anions. Together these two layers are known as the double layer. Cations in the diffuse layer migrate toward the cathode. Because these cations are solvated, the solution is also pulled along, producing the electroosmotic flow. The anions in the diffuse layer, which also are solvated, try to move toward the anode. Because there are more cations than anions, however, the cations win out and the electroosmotic flow moves in the direction of the cathode. Figure 12.56: Schematic diagram showing the origin of the double layer within a capillary tube. Although the net charge within the capillary is zero, the distribution of charge is not. The walls of the capillary have an excess of negative charge, which decreases across the fixed layer and the diffuse layer, reaching a value of zero in bulk solution. The rate at which the buffer moves through the capillary, what we call its electroosmotic flow velocity, νeof, is a function of the applied electric field, E, and the buffer’s electroosmotic mobility, μeof. $\nu_\ce{eof} = \mu_\ce{eof}E \label{12.36}$ Electroosmotic mobility is defined as $\mu_\ce{eof} = \dfrac{εζ}{4πη} \label{12.37}$ where • ε is the buffer dielectric constant, • ζ is the zeta potential, and • η is the buffer viscosity. The zeta potential—the potential of the diffuse layer at a finite distance from the capillary wall—plays an important role in determining the electroosmotic flow velocity. Two factors determine the zeta potential’s value. First, the zeta potential is directly proportional to the charge on the capillary walls, with a greater density of silanate ions corresponding to a larger zeta potential. Below a pH of 2 there are few silanate ions, and the zeta potential and electroosmotic flow velocity are zero. As the pH increases, both the zeta potential and the electroosmotic flow velocity increase. Second, the zeta potential is directly proportional to the thickness of the double layer. Increasing the buffer’s ionic strength provides a higher concentration of cations, decreasing the thickness of the double layer and decreasing the electroosmotic flow. Zeta Potential The definition of zeta potential given here is admittedly a bit fuzzy. For a much more technical explanation see Delgado, A. 
V.; González-Caballero, F.; Hunter, R. J.; Koopal, L. K.; Lyklema, J. “Measurement and Interpretation of Electrokinetic Phenomena,” Pure. Appl. Chem. 2005, 77, 1753–1805. Although this is a very technical report, Sections 1.3–1.5 provide a good introduction to the difficulty of defining the zeta potential and measuring its value.

The electroosmotic flow profile is very different from that of a fluid moving under forced pressure. Figure 12.57 compares the electroosmotic flow profile with the hydrodynamic flow profile in gas chromatography and liquid chromatography. The uniform, flat profile for electroosmosis helps minimize band broadening in capillary electrophoresis, improving separation efficiency.

Figure 12.57: Comparison of hydrodynamic flow and electroosmotic flow. The nearly uniform electroosmotic flow profile means that the electroosmotic flow velocity is nearly constant across the capillary.

### Total Mobility

A solute’s total velocity, $$v_{tot}$$, as it moves through the capillary is the sum of its electrophoretic velocity and the electroosmotic flow velocity.

$ν_\ce{tot} =ν_\ce{ep} + ν_\ce{eof}$

As shown in Figure 12.58, under normal conditions the following general relationships hold true.

$(ν_\ce{tot})_\ce{cations} > ν_\ce{eof}$

$(ν_\ce{tot})_\ce{neutrals} = ν_\ce{eof}$

$(ν_\ce{tot})_\ce{anions} < ν_\ce{eof}$

Cations elute first in an order corresponding to their electrophoretic mobilities, with small, highly charged cations eluting before larger cations of lower charge. Neutral species elute as a single band with an elution rate equal to the electroosmotic flow velocity. Finally, anions are the last components to elute, with smaller, highly charged anions having the longest elution time.

Figure 12.58: Visual explanation for the general elution order in capillary electrophoresis. Each species has the same electroosmotic flow, νeof. Cations elute first because they have a positive electrophoretic velocity, νep. Anions elute last because their negative electrophoretic velocity partially offsets the electroosmotic flow velocity. Neutrals elute with a velocity equal to the electroosmotic flow.

### Migration Time

Another way to express a solute’s velocity is to divide the distance it travels by the elapsed time

$\nu_{tot}=\frac{l}{t_{m}} \label{12.38}$

where l is the distance between the point of injection and the detector, and tm is the solute’s migration time. To understand the experimental variables affecting migration time, we begin by noting that

$ν_\ce{tot} = \mu_\ce{tot}E= (\mu_\ce{ep} + \mu_\ce{eof})E\label{12.39}$

Combining Equations \ref{12.38} and \ref{12.39} and solving for tm leaves us with

$t_\ce{m} = \dfrac{l}{(\mu_\ce{ep} + \mu_\ce{eof})E}\label{12.40}$

The magnitude of the electrical field is

$E = \dfrac{V}{L}\label{12.41}$

where V is the applied potential and L is the length of the capillary tube. Finally, substituting Equation \ref{12.41} into Equation \ref{12.40} leaves us with the following equation for a solute’s migration time.

$t_\ce{m}= \dfrac{lL}{(\mu_\ce{ep} + \mu_\ce{eof})V}\label{12.42}$

To decrease a solute’s migration time—and shorten the analysis time—we can apply a higher voltage or use a shorter capillary tube. We can also shorten the migration time by increasing the electroosmotic flow, although this decreases resolution.
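Equation 12.42 is easy to explore numerically. The mobilities and dimensions below are illustrative placeholders, not values taken from the text:

```python
# Migration time t_m = l*L / ((mu_ep + mu_eof) * V)  (Equation 12.42)
mu_ep  = 2.0e-8   # m^2 V^-1 s^-1, assumed solute mobility
mu_eof = 5.0e-8   # m^2 V^-1 s^-1, assumed electroosmotic mobility
V      = 20e3     # applied potential, volts
L      = 0.75     # total capillary length, m
l      = 0.50     # injection-to-detector length, m

t_m = (l * L) / ((mu_ep + mu_eof) * V)
print(f"t_m = {t_m:.0f} s")   # doubling V halves the migration time
```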
### Efficiency

As we learned in Section 12.2.4, the efficiency of a separation is given by the number of theoretical plates, N. In capillary electrophoresis the number of theoretical plates is

$N = \dfrac{l^2}{2Dt_\ce{m}} = \dfrac{(\mu_\ce{ep} + \mu_\ce{eof})Vl}{2DL} \label{12.43}$

where $D$ is the solute’s diffusion coefficient. From Equations \ref{12.10} and \ref{12.11}, we know that the number of theoretical plates for a solute is

$N = \dfrac{l^2}{\sigma ^2}$

where l is the distance the solute travels and σ is the standard deviation for the solute’s band broadening. For capillary electrophoresis band broadening is due to longitudinal diffusion and is equivalent to 2Dtm, where tm is the migration time.

From Equation \ref{12.43}, the efficiency of a capillary electrophoretic separation increases with higher voltages. Increasing the electroosmotic flow velocity improves efficiency, but at the expense of resolution. Two additional observations deserve comment. First, solutes with larger electrophoretic mobilities—in the same direction as the electroosmotic flow—have greater efficiencies; thus, smaller, more highly charged cations are not only the first solutes to elute, but do so with greater efficiency. Second, efficiency in capillary electrophoresis is independent of the capillary’s length. Theoretical plate counts of approximately 100,000–200,000 are not unusual.

It is possible to design an electrophoretic experiment so that anions elute before cations—more about this later—in which case smaller, more highly charged anions elute with greater efficiencies.

### Selectivity

In chromatography we defined the selectivity between two solutes as the ratio of their retention factors (Equation \ref{12.9}). In capillary electrophoresis the analogous expression for selectivity is

$α = \dfrac{\mu_\textrm{ep,1}}{\mu_\textrm{ep,2}}$

where μep,1 and μep,2 are the electrophoretic mobilities for the two solutes, chosen such that α ≥ 1. We can often improve selectivity by adjusting the pH of the buffer solution. For example, NH4+ is a weak acid with a pKa of 9.75. At a pH of 9.75 the concentrations of NH4+ and NH3 are equal. Decreasing the pH below 9.75 increases its electrophoretic mobility because a greater fraction of the solute is present as the cation NH4+. On the other hand, raising the pH above 9.75 increases the proportion of the neutral NH3, decreasing its electrophoretic mobility.

### Resolution

The resolution between two solutes is

$R = \dfrac{0.177(\mu_\textrm{ep,1} - \mu_\textrm{ep,2})\sqrt{V}}{\sqrt{D(\mu_\textrm{avg} + \mu_\textrm{eof})}}\label{12.44}$

where μavg is the average electrophoretic mobility for the two solutes. Increasing the applied voltage and decreasing the electroosmotic flow velocity improves resolution. The latter effect is particularly important. Although increasing electroosmotic flow improves analysis time and efficiency, it decreases resolution.

## 12.7.2 Instrumentation

The basic instrumentation for capillary electrophoresis is shown in Figure 12.59 and includes a power supply for applying the electric field, anode and cathode compartments containing reservoirs of the buffer solution, a sample vial containing the sample, the capillary tube, and a detector. Each part of the instrument receives further consideration in this section.

Figure 12.59: Schematic diagram of the basic instrumentation for capillary electrophoresis. The sample and the source reservoir are switched when making injections.

### Capillary Tubes

Figure 12.60 shows a cross-section of a typical capillary tube. Most capillary tubes are made from fused silica coated with a 15–35 μm layer of polyimide to give it mechanical strength.
The inner diameter is typically 25–75 μm—smaller than the internal diameter of a capillary GC column—with an outer diameter of 200–375 μm.

Figure 12.60: Cross section of a capillary column for capillary electrophoresis. The dimensions shown here are typical and are scaled proportionally.

The capillary column’s narrow opening and the thickness of its walls are important. When an electric field is applied to the buffer solution within the capillary, current flows through the capillary. This current leads to the release of heat—what we call Joule heating. The amount of heat released is proportional to the capillary’s radius and the magnitude of the electrical field. Joule heating is a problem because it changes the buffer solution’s viscosity, with the solution at the center of the capillary being less viscous than that near the capillary walls. Because a solute’s electrophoretic mobility depends on viscosity (Equation \ref{12.35}), solute species in the center of the capillary migrate at a faster rate than those near the capillary walls. The result is an additional source of band broadening that degrades the separation. Capillaries with smaller inner diameters generate less Joule heating, and capillaries with larger outer diameters are more effective at dissipating the heat. Placing the capillary tube inside a thermostated jacket is another method for minimizing the effect of Joule heating; in this case a smaller outer diameter allows for a more rapid dissipation of thermal energy.

### Injecting the Sample

There are two commonly used methods for injecting a sample into a capillary electrophoresis column: hydrodynamic injection and electrokinetic injection. In both methods the capillary tube is filled with the buffer solution. One end of the capillary tube is placed in the destination reservoir and the other end is placed in the sample vial.

Hydrodynamic injection uses pressure to force a small portion of sample into the capillary tubing. A difference in pressure is applied across the capillary by either pressurizing the sample vial or by applying a vacuum to the destination reservoir. The volume of sample injected, in liters, is given by the following equation

$V_\ce{inj}= \dfrac{\Delta P d^4πt}{128ηL} \times 10^3\label{12.45}$

where ∆P is the difference in pressure across the capillary in pascals, d is the capillary’s inner diameter in meters, t is the amount of time that the pressure is applied in seconds, η is the buffer’s viscosity in kg m^–1 s^–1, and L is the length of the capillary tubing in meters. The factor of 10^3 changes the units from cubic meters to liters.

For a hydrodynamic injection we move the capillary from the source reservoir to the sample. The anode remains in the source reservoir. A hydrodynamic injection is also possible by raising the sample vial above the destination reservoir and briefly inserting the filled capillary.

If you want to verify the units in Equation \ref{12.45}, recall from Table 2.2 that 1 Pa is equivalent to 1 kg m^–1 s^–2.

Example 12.9

In a hydrodynamic injection we apply a pressure difference of 2.5 × 10^3 Pa (a ∆P ≈ 0.02 atm) for 2 s to a 75-cm long capillary tube with an internal diameter of 50 μm. Assuming that the buffer’s viscosity is 10^–3 kg m^–1 s^–1, what volume and length of sample did we inject?
Solution

Making appropriate substitutions into Equation \ref{12.45} gives the sample’s volume as

\begin{align} V_\ce{inj} &= \mathrm{\dfrac{(2.5×10^3\: kg\: m^{−1}\: s^{−2})(50×10^{−6}\: m)^4(3.14)(2\: s)}{(128)(0.001\: kg\: m^{−1}\: s^{−1})(0.75\:m)} × 10^3\: L/m^3}\\ V_\ce{inj} &= \mathrm{1×10^{−9}\: L = 1\: nL} \end{align}

Because the interior of the capillary is cylindrical, the length of the sample, l, is easy to calculate using the equation for the volume of a cylinder; thus

$l = \dfrac{V_\ce{inj}}{πr^2} = \mathrm{\dfrac{(1.0×10^{−9}\: L)(10^{−3}\: m^3/L)}{(3.14)(25×10^{−6}\: m)^2} = 5×10^{−4}\: m = 0.5\: mm}$

Exercise 12.9

Suppose that you need to limit your injection to less than 0.20% of the capillary’s length. Using the information from Example 12.9, what is the maximum injection time for a hydrodynamic injection?

In an electrokinetic injection we place both the capillary and the anode into the sample and briefly apply a potential. The volume of injected sample is the product of the capillary’s cross sectional area and the length of the capillary occupied by the sample. In turn, this length is the product of the solute’s velocity (see Equation \ref{12.39}) and time; thus

$V_\ce{inj} = πr^2 (\mu_\ce{ep} + \mu_\ce{eof})E′t \times 10^3 \label{12.46}$

where

• $r$ is the capillary’s radius,
• $t$ is the time the potential is applied, and
• $E′$ is the effective electric field in the sample,

and the factor of 10^3 again converts cubic meters to liters.

An important consequence of Equation \ref{12.46} is that an electrokinetic injection is inherently biased toward solutes with larger electrophoretic mobilities. If two solutes have equal concentrations in a sample, we inject a larger volume—and thus more moles—of the solute with the larger μep.

The electric field in the sample is different than the electric field in the rest of the capillary because the sample and the buffer have different ionic compositions. In general, the sample’s ionic strength is smaller, which makes its conductivity smaller. The effective electric field is

$E′ = E \times \dfrac{κ_\ce{buf}}{κ_\ce{sam}}$

where κbuf and κsam are the conductivities of the buffer and the sample, respectively.

When an analyte’s concentration is too small to detect reliably, it may be possible to inject it in a manner that increases its concentration in the capillary tube. This method of injection is called stacking. Stacking is accomplished by placing the sample in a solution whose ionic strength is significantly less than that of the buffer in the capillary tube. Because the sample plug has a lower concentration of buffer ions, the effective field strength across the sample plug, E′, is larger than that in the rest of the capillary. We know from Equation \ref{12.34} that electrophoretic velocity is directly proportional to the electrical field. As a result, the cations in the sample plug migrate toward the cathode with a greater velocity, and the anions migrate more slowly—neutral species are unaffected and move with the electroosmotic flow. When the ions reach their respective boundaries between the sample plug and the buffering solution, the electrical field decreases, and the electrophoretic velocity of cations decreases while that for anions increases. As shown in Figure 12.61, the result is a stacking of cations and anions into separate, smaller sampling zones. Over time, the buffer within the capillary becomes more homogeneous and the separation proceeds without additional stacking.

Figure 12.61: The stacking of cations and anions. The top diagram shows the initial sample plug and the bottom diagram shows how the cations and anions become concentrated at opposite sides of the sample plug.
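Before leaving injection, Equation 12.45 and Example 12.9 are easy to reproduce numerically (a sketch using the example's exact numbers):

```python
# Reproduce Example 12.9: hydrodynamic injection volume and plug length.
from math import pi

dP  = 2.5e3    # pressure difference, Pa
d   = 50e-6    # inner diameter, m
t   = 2.0      # injection time, s
eta = 1e-3     # buffer viscosity, kg m^-1 s^-1
L   = 0.75     # capillary length, m

V_inj = dP * d**4 * pi * t / (128 * eta * L) * 1e3   # liters (Eq. 12.45)
l = (V_inj * 1e-3) / (pi * (d / 2)**2)               # plug length, m
print(f"V_inj = {V_inj:.1e} L, plug length = {l*1e3:.2f} mm")
# -> roughly 1e-9 L (1 nL) and 0.5 mm, matching the worked example
```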
### Applying the Electrical Field

Migration in electrophoresis occurs in response to an applied electrical field. The ability to apply a large electrical field is important because higher voltages lead to shorter analysis times (see Equation \ref{12.42}), more efficient separations (Equation \ref{12.43}), and better resolution (Equation \ref{12.44}). Because narrow bored capillary tubes dissipate Joule heating so efficiently, voltages of up to 40 kV are possible. Because of the high voltages, be sure to follow your instrument’s safety guidelines.

### Detectors

Most of the detectors used in HPLC also find use in capillary electrophoresis. Among the more common detectors are those based on the absorption of UV/Vis radiation, fluorescence, conductivity, amperometry, and mass spectrometry. Whenever possible, detection is done “on-column” before the solutes elute from the capillary tube and additional band broadening occurs.

UV/Vis detectors are among the most popular. Because absorbance is directly proportional to path length, the capillary tubing’s small diameter leads to signals that are smaller than those obtained in HPLC. Several approaches have been used to increase the pathlength, including a Z-shaped sample cell and multiple reflections (see Figure 12.62). Detection limits are about 10^–7 M.

Figure 12.62: Two approaches to on-column detection in capillary electrophoresis using a UV/Vis diode array spectrometer: (a) Z-shaped bend in capillary, and (b) multiple reflections.

Better detection limits are obtained using fluorescence, particularly when using a laser as an excitation source. When using fluorescence detection a small portion of the capillary’s protective coating is removed and the laser beam is focused on the inner portion of the capillary tubing. Emission is measured at an angle of 90° to the laser. Because the laser provides an intense source of radiation that can be focused to a narrow spot, detection limits are as low as 10^–16 M.

Solutes that do not absorb UV/Vis radiation or that do not undergo fluorescence can be detected by other detectors. Table 12.10 provides a list of detectors for capillary electrophoresis along with some of their important characteristics.

Table 12.10: Characteristics of Detectors for Capillary Electrophoresis

| detector | universal or analyte must... | detection limit (moles injected) | detection limit (molarity) | on-column detection? |
|---|---|---|---|---|
| UV/Vis absorbance | have a UV/Vis chromophore | 10^–13–10^–16 | 10^–5–10^–7 | yes |
| indirect absorbance | universal | 10^–12–10^–15 | 10^–4–10^–6 | yes |
| fluorescence | have a favorable quantum yield | 10^–15–10^–17 | 10^–7–10^–9 | yes |
| laser fluorescence | have a favorable quantum yield | 10^–18–10^–20 | 10^–13–10^–16 | yes |
| mass spectrometer | universal (total ion); selective (single ion) | 10^–16–10^–17 | 10^–8–10^–10 | no |
| amperometry | undergo oxidation or reduction | 10^–18–10^–19 | 10^–7–10^–10 | no |
| conductivity | universal | 10^–15–10^–16 | 10^–7–10^–9 | no |

Source: Baker, D. R. Capillary Electrophoresis, Wiley-Interscience: New York, 1995.

## 12.7.3 Capillary Electrophoresis Methods

There are several different forms of capillary electrophoresis, each of which has its particular advantages. Four of these methods are briefly described in this section.

### Capillary Zone Electrophoresis (CZE)

The simplest form of capillary electrophoresis is capillary zone electrophoresis.
In CZE we fill the capillary tube with a buffer solution and, after loading the sample, place the ends of the capillary tube in reservoirs containing additional buffer solution. Usually the end of the capillary containing the sample is the anode and solutes migrate toward the cathode at a velocity determined by their electrophoretic mobility and the electroosmotic flow. Cations elute first, with smaller, more highly charged cations eluting before larger cations with smaller charges. Neutral species elute as a single band. Anions are the last species to elute, with smaller, more negatively charged anions being the last to elute.

We can reverse the direction of electroosmotic flow by adding an alkylammonium salt to the buffer solution. As shown in Figure 12.63, the positively charged end of the alkylammonium ions binds to the negatively charged silanate ions on the capillary’s walls. The tail of the alkylammonium ion is hydrophobic and associates with the tail of another alkylammonium ion. The result is a layer of positive charges that attract anions in the buffer solution. The migration of these solvated anions toward the anode reverses the electroosmotic flow’s direction. The order of elution is exactly opposite of that observed under normal conditions.

Figure 12.63: Two modes of capillary zone electrophoresis showing (a) normal migration with electroosmotic flow toward the cathode and (b) reversed migration in which the electroosmotic flow is toward the anode.

Coating the capillary’s walls with a nonionic reagent eliminates the electroosmotic flow. In this form of CZE the cations migrate from the anode to the cathode. Anions elute into the source reservoir and neutral species remain stationary.

Capillary zone electrophoresis provides effective separations of charged species, including inorganic anions and cations, organic acids and amines, and large biomolecules such as proteins. For example, CZE has been used to separate a mixture of 36 inorganic and organic ions in less than three minutes [15]. A mixture of neutral species, of course, cannot be resolved.

### Micellar Electrokinetic Capillary Chromatography (MEKC)

One limitation to CZE is its inability to separate neutral species. Micellar electrokinetic capillary chromatography overcomes this limitation by adding a surfactant, such as sodium dodecylsulfate (Figure 12.64a), to the buffer solution. Sodium dodecylsulfate, or SDS, has a long-chain hydrophobic tail and a negatively charged ionic functional group at its head. When the concentration of SDS is sufficiently large a micelle forms. A micelle consists of a spherical agglomeration of 40–100 surfactant molecules in which the hydrocarbon tails point inward and the negatively charged heads point outward (Figure 12.64b).

Figure 12.64: (a) Structure of sodium dodecylsulfate and its representation, and (b) cross section through a micelle showing its hydrophobic interior and its hydrophilic exterior.

Because micelles have a negative charge, they migrate toward the cathode with a velocity less than the electroosmotic flow velocity. Neutral species partition themselves between the micelles and the buffer solution in a manner similar to the partitioning of solutes between the two liquid phases in HPLC. Because there is a partitioning between two phases, we include the descriptive term chromatography in the technique’s name. Note that in MEKC both phases are mobile.

The elution order for neutral species in MEKC depends on the extent to which each partitions into the micelles.
Hydrophilic neutrals are insoluble in the micelle’s hydrophobic inner environment and elute as a single band, as they would in CZE. Neutral solutes that are extremely hydrophobic are completely soluble in the micelle, eluting with the micelles as a single band. Those neutral species that exist in a partition equilibrium between the buffer solution and the micelles elute between the completely hydrophilic and completely hydrophobic neutral species. Those neutral species favoring the buffer solution elute before those favoring the micelles. Micellar electrokinetic chromatography has been used to separate a wide variety of samples, including mixtures of pharmaceutical compounds, vitamins, and explosives.

### Capillary Gel Electrophoresis (CGE)

In capillary gel electrophoresis the capillary tubing is filled with a polymeric gel. Because the gel is porous, a solute migrates through the gel with a velocity determined both by its electrophoretic mobility and by its size. The ability to effect a separation using size is helpful when the solutes have similar electrophoretic mobilities. For example, fragments of DNA of varying length have similar charge-to-size ratios, making their separation by CZE difficult. Because the DNA fragments are of different size, a CGE separation is possible.

The capillary used for CGE is usually treated to eliminate electroosmotic flow, preventing the gel’s extrusion from the capillary tubing. Samples are injected electrokinetically because the gel provides too much resistance for hydrodynamic sampling. The primary application of CGE is the separation of large biomolecules, including DNA fragments, proteins, and oligonucleotides.

### Capillary Electrochromatography (CEC)

Another approach to separating neutral species is capillary electrochromatography. In CEC the capillary tubing is packed with 1.5–3 μm particles coated with a bonded stationary phase. Neutral species separate based on their ability to partition between the stationary phase and the buffer, which is moving as a result of the electroosmotic flow; Figure 12.65 provides a representative example for the separation of a mixture of hydrocarbons. A CEC separation is similar to the analogous HPLC separation, but without the need for high pressure pumps. Efficiency in CEC is better than in HPLC, and analysis times are shorter.

Figure 12.65: Capillary electrochromatographic separation of a mixture of hydrocarbons in DMSO. The column contains a porous polymer of butyl methacrylate and lauryl acrylate (25%:75% mol:mol) with butanediol diacrylate as a crosslinker. Data provided by Zoe LaPier and Michelle Bushey, Department of Chemistry, Trinity University.

The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of a vitamin B complex by capillary zone electrophoresis or by micellar electrokinetic capillary chromatography provides an instructive example of a typical procedure. The description here is based on Smyth, W. F. Analytical Chemistry of Complex Matrices, Wiley Teubner: Chichester, England, 1996, pp. 154–156.
Representative Method 12.3: Determination of a Vitamin B Complex by CZE or MEKC

Description of Method

The water soluble vitamins B1 (thiamine hydrochloride), B2 (riboflavin), B3 (niacinamide), and B6 (pyridoxine hydrochloride) are determined by CZE using a pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer, or by MEKC using the same buffer with the addition of sodium dodecyl sulfate. Detection is by UV absorption at 200 nm. An internal standard of o-ethoxybenzamide is used to standardize the method.

Procedure

Crush a vitamin B complex tablet and place it in a beaker with 20.00 mL of a 50% v/v methanol solution that is 20 mM in sodium tetraborate and 100.0 ppm in o-ethoxybenzamide. After mixing for 2 min to ensure that the B vitamins are dissolved, pass a 5.00-mL portion through a 0.45-μm filter to remove insoluble binders. Load an approximately 4 nL sample into a capillary column with an inner diameter of 50 μm. For CZE the capillary column contains a 20 mM pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer. For MEKC the buffer is also 150 mM in sodium dodecyl sulfate. Apply a 40 kV/m electrical field to effect both the CZE and MEKC separations.

Questions

1. Methanol, which elutes at 4.69 min, is included as a neutral species to indicate the electroosmotic flow. When using standard solutions of each vitamin, CZE peaks are found at 3.41 min, 4.69 min, 6.31 min, and 8.31 min. Examine the structures and pKa information in Figure 12.66 and identify the order in which the four B vitamins elute.

Vitamin B1 is a cation and elutes before the neutral species methanol; thus it is the compound that elutes at 3.41 min. Vitamin B3 is a neutral species and elutes with methanol at 4.69 min. The remaining two B vitamins are weak acids that partially ionize to weak base anions in the pH 9 buffer. Of the two, vitamin B6 is the stronger acid (a pKa of 9.0 versus a pKa of 9.7) and is present to a greater extent in its anionic form. Vitamin B6, therefore, is the last of the vitamins to elute.

2. The order of elution when using MEKC is vitamin B3 (5.58 min), vitamin B6 (6.59 min), vitamin B2 (8.81 min), and vitamin B1 (11.21 min). What conclusions can you make about the solubility of the B vitamins in the sodium dodecylsulfate micelles?

The micelles elute at 17.7 min. The elution time for vitamin B1 shows the greatest change, increasing from 3.41 min to 11.21 min. Clearly vitamin B1 has the greatest solubility in the micelles. Vitamin B2 and vitamin B3 have a more limited solubility in the micelles, showing only slightly longer elution times in the presence of the micelles. Interestingly, the elution time for vitamin B6 decreases in the presence of the micelles.

3. For quantitative work an internal standard of o-ethoxybenzamide is added to all samples and standards. Why is an internal standard necessary?

Although the method of injection is not specified, neither a hydrodynamic injection nor an electrokinetic injection is particularly reproducible. The use of an internal standard compensates for this limitation. (You can read more about the use of internal standards in capillary electrophoresis in the following paper: Altria, K. D. “Improved Performance in Capillary Electrophoresis Using Internal Standards,” LC-GC Europe, September 2002.)

Figure 12.66: Structures of the four water soluble B vitamins in their predominate forms at a pH of 9; pKa values are shown in red.
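The reasoning in question 1 about vitamins B2 and B6 is a Henderson–Hasselbalch calculation; a quick check (pKa values as given in the text, pH 9 buffer):

```python
# Fraction of a weak acid present as its anion at a given pH:
# f = 1 / (1 + 10**(pKa - pH))  (from Henderson-Hasselbalch)
def anion_fraction(pKa, pH=9.0):
    return 1.0 / (1.0 + 10 ** (pKa - pH))

print(f"vitamin B6 (pKa 9.0): {anion_fraction(9.0):.2f}")  # 0.50
print(f"vitamin B2 (pKa 9.7): {anion_fraction(9.7):.2f}")  # ~0.17
# B6 carries more negative charge at pH 9, so it elutes last in CZE.
```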
## 12.7.4 Evaluation

When compared to GC and HPLC, capillary electrophoresis provides similar levels of accuracy, precision, and sensitivity, and a comparable degree of selectivity. The amount of material injected into a capillary electrophoretic column is significantly smaller than that for GC and HPLC—typically 1 nL versus 0.1 μL for capillary GC and 1–100 μL for HPLC. Detection limits for capillary electrophoresis, however, are 100–1000 times poorer than those for GC and HPLC.

The most significant advantages of capillary electrophoresis are improvements in separation efficiency, time, and cost. Capillary electrophoretic columns contain substantially more theoretical plates (≈10^6 plates/m) than are found in HPLC (≈10^5 plates/m) and capillary GC columns (≈10^3 plates/m), providing unparalleled resolution and peak capacity. Separations in capillary electrophoresis are fast and efficient. Furthermore, the capillary column’s small volume means that a capillary electrophoresis separation requires only a few microliters of buffer solution, compared to 20–30 mL of mobile phase for a typical HPLC separation.

Note

See Section 12.4.8 for an evaluation of gas chromatography, and Section 12.5.6 for an evaluation of high-performance liquid chromatography.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8237160444259644, "perplexity": 3464.7260757506037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00153.warc.gz"}
https://www.dsprelated.com/freebooks/sasp/Gaussian_Window_Transform.html
## Gaussian Window and Transform

The Gaussian "bell curve" is possibly the only smooth, nonzero function, known in closed form, that transforms to itself:

$$e^{-\pi t^2} \;\longleftrightarrow\; e^{-\pi f^2} \qquad (4.55)$$

It also achieves the minimum time-bandwidth product,

$$\sigma_t\,\sigma_\omega = \frac{1}{2}, \qquad (4.56)$$

when "width" is defined as the square root of the second central moment. For even functions $w(t)$, the first moment vanishes, so the second central moment reduces to

$$\sigma_t^2 = \frac{\int_{-\infty}^{\infty} t^2\,|w(t)|^2\,dt}{\int_{-\infty}^{\infty} |w(t)|^2\,dt} \qquad (4.57)$$

Since the true Gaussian function has infinite duration, in practice we must window it with some usual finite window, or truncate it. Depalle [58] suggests using a triangular window raised to some power for this purpose, which preserves the absence of side lobes for a sufficiently large power. It also preserves non-negativity of the transform.

### Matlab for the Gaussian Window

In Matlab, `w = gausswin(M,alpha)` returns a length-`M` window with parameter `alpha`, where `alpha` is defined, as in Harris [101], so that the window shape is invariant with respect to the window length `M`:

```matlab
function [w] = gausswin(M,alpha)
n = -(M-1)/2 : (M-1)/2;
w = exp((-1/2) * (alpha * n/((M-1)/2)) .^ 2)';
```

An implementation in terms of the unnormalized standard deviation (`sigma` in samples) is as follows:

```matlab
function [w] = gaussianwin(M,sigma)
n = -(M-1)/2 : (M-1)/2;
w = exp(-n .* n / (2 * sigma * sigma))';
```

In this case, `sigma` would normally be specified as a fraction of the window length (`sigma = M/8` in the example below).

Note that, on a dB scale, Gaussians are quadratic. This means that parabolic interpolation of a sampled Gaussian transform is exact. This can be a useful fact to remember when estimating sinusoidal peak frequencies in spectra. For example, one suggested implication is that, for typical windows, quadratic interpolation of spectral peaks may be more accurate on a log-magnitude scale (e.g., dB) than on a linear magnitude scale (this has been observed empirically for a variety of cases).

### Gaussian Window and Transform

Figure 3.36 shows an example finite-length Gaussian window and its transform. The sigma parameter was set so that simple truncation of the Gaussian still yields a low side-lobe level. Also overlaid on the window transform is a parabola; we see that the main lobe is well fit by the parabola until the side lobes begin. Since the transform of a Gaussian is a Gaussian (exactly), the side lobes are entirely caused by truncating the window.

More properties and applications of the Gaussian function can be found in Appendix D.

### Exact Discrete Gaussian Window

It can be shown [44] that a particular sampled Gaussian pulse, given in closed form in [44] in terms of the time index $n$ and the frequency index $k$ for a length-$M$ (even) normalized DFT (the DFT divided by $\sqrt{M}$), is transformed by the normalized DFT into exactly the complex conjugate of the same Gaussian pulse. (The proof is nontrivial.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808462619781494, "perplexity": 1843.439941133138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259316.74/warc/CC-MAIN-20190526145334-20190526171334-00544.warc.gz"}
http://mathoverflow.net/questions/47807/on-existence-of-matrices-x-y-s-t-xay-is-diagonal-over-non-commutative-ring
# On existence of matrices X, Y s.t. XAY is diagonal over a non-commutative ring

Given $A\in Mat_{n\times n}(R)$, where $R$ is a non-commutative associative ring, do there exist any (non-zero) matrices $X, Y\in Mat_{n\times n}(R)$ such that $XAY=diag(a_1, \ldots , a_n)$ for some $a_i$?

The interesting case for me is $A=(x_{i,j})$ and $R=\mathbb Z [x_{i,j}]$ (the free associative non-commutative algebra on the $x_{i,j}$ over $\mathbb Z$).

For example, if $R$ is commutative then we put $X=Id$, $Y=(det(A_{i,j}))$ and get $XAY=det A\cdot Id$. What about non-commutative polynomials?

Upd1: I want to have a non-commutative polynomial equality. Also I want $X$ and $Y$ to be, in general, invertible.

Upd2: OK, I've understood that Update 1 wasn't correct. I'm interested in having such matrices over $R=Mat_{m\times m}(A)$ where $A$ is a commutative ring with $1$.

- If the ring has a unity, then it is easy: just choose $a_{ij} \neq 0$; then $$e_{1i}Ae_{j1}=a_{ij}e_{11}$$ where $e_{kl}$ is the matrix with $1$ in the $(k,l)$ entry and zero elsewhere. If the ring does not have $1$, the statement is not generally true, since you can define $xy=0$ for all $x,y \in R$. – Keivan Karai Nov 30 '10 at 17:11
- You want $X$ and $Y$ invertible, or at least non-zero-divisors. Otherwise the answer is no for obvious reasons. – Andreas Thom Nov 30 '10 at 19:12
- @Andreas: I want to have a generalization of the equality $A A^{V}=det A\cdot Id$. – zroslav Nov 30 '10 at 19:31

Take $X=Y=E_{1,1}$ (the matrix unit). Then $XAY=x_{1,1}E_{1,1}$, a diagonal matrix. If $R$ does not have 1, take $X=Y=aE_{1,1}$ for any $a\ne 0\in R$. Then $XAY=ax_{1,1}aE_{1,1}$ (it may be a zero matrix, but the zero matrix is diagonal).

Update. Since you now want to find invertible $X,Y$, I would recommend starting with $2\times 2$-matrices and reading the book by Cohn, "Free rings and their relations", especially Chapter 2, Section 2.6.

I've understood that if $R=Mat_n(K)$ then for every $A\in Mat_m(R)$ there exists $B\in Mat_m(R)$ such that $AB=\lambda Id$. Here $A=(a_{ij,kl})$ is an $mn\times mn$-matrix over $K$ and $B=A^{V}$ is an $m\times m$-matrix over $R$.
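The matrix-unit trick from the first comment is easy to sanity-check symbolically. A small SymPy sketch (my own illustration, using 2×2 matrices with non-commutative symbolic entries to model the free algebra):

```python
import sympy as sp

# Entries x_{ij} are non-commutative symbols, modelling the free algebra.
a = [[sp.Symbol(f'x{i}{j}', commutative=False) for j in range(2)]
     for i in range(2)]
A = sp.Matrix(a)

E11 = sp.Matrix([[1, 0], [0, 0]])  # matrix unit e_{11}
print(E11 * A * E11)               # Matrix([[x00, 0], [0, 0]]) -- diagonal
```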
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9279824495315552, "perplexity": 211.22107209069767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161718.0/warc/CC-MAIN-20160205193921-00161-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.vistrails.org/index.php?title=User:Tohline/SSC/Structure/PowerLawDensity&diff=5163&oldid=5162
# Power-Law Density Distributions

Here we begin with the same second-order, one-dimensional ODE that governs the structure of polytropic spheres, namely, the Lane-Emden Equation

$\frac{1}{\xi^2} \frac{d}{d\xi}\biggl( \xi^2 \frac{d\Theta_H}{d\xi} \biggr) = - \Theta_H^n ,$

and examine whether or not this governing relation can be satisfied by a power-law enthalpy distribution of the form,

$\Theta_H = A \xi^{-\alpha} ,$

where $A$ and $\alpha$ are assumed to be constants. We note, up front, that such a solution will not satisfy the boundary conditions that are imposed on polytropic spheres. But the simplistic form of a power-law solution can nevertheless sometimes be instructive.

## Derivation

Plugging the power-law expression for the dimensionless enthalpy into both sides of the Lane-Emden equation gives,

$-\alpha (1 -\alpha) A \xi^{-(2 +\alpha)} = - A^n \xi^{-\alpha n} .$

Hence, the power-law enthalpy distribution works as long as,

$\alpha = \frac{2}{n-1} ~~~~~~\mathrm{and}~~~~~~ A = [\alpha (1 -\alpha)]^{1/(n-1)} = \biggl[ \frac{2(n-3)}{(n-1)^2} \biggr]^{1/(n-1)}.$

This means that hydrostatic balance can be established at all radial positions within a spherically symmetric configuration for power-law density distributions of the form,

$\frac{\rho}{\rho_c} = \biggl[ \frac{2(n-3)}{(n-1)^2} \biggr]^{n/(n-1)} \xi^{- 2n/(n-1)}.$

(Note that, in this case, the subscript c should not represent the central conditions but, rather, conditions at some characteristic radial position within the configuration.)

## Examples

It looks like the derived solution makes physical sense only for polytropic indices $n > 3$. For $n=4$, the relevant power-law density distribution is,

$\frac{\rho}{\rho_c} = \biggl[ \frac{2}{9} \biggr]^{4/3} \xi^{- 8/3}.$

For $n=(3+\epsilon)$ and $\epsilon \ll 1$,

$\frac{\rho}{\rho_c} \approx \biggl[ \frac{\epsilon}{2} \biggr]^{3/2} \xi^{- 3}.$

For $n \gg 1$,

$\frac{\rho}{\rho_c} \approx \biggl[ \frac{2}{n} \biggr] \xi^{- 2}.$

Hence, for polytropic indices in the range $\infty > n > 3$, the relevant power-law density distribution lies between $\rho \propto \xi^{-2}$ and $\rho \propto \xi^{-3}$.

## Isothermal Equation of State

Suppose the gas is isothermal, so that the relevant equation of state is,

$P = c_s^2 \rho ,$

where $c_s$ is the sound speed. To determine what power-law density distribution will satisfy hydrostatic equilibrium in this case, it is better to return to the original statement of hydrostatic balance for spherically symmetric configurations,

$\frac{1}{\rho} \frac{dP}{dr} = -\frac{d\Phi}{dr} .$

Plugging in the isothermal equation of state and assuming a radial density distribution of the form,

$\rho(r) = \rho_0 \biggl( \frac{r}{r_0} \biggr)^{-\beta} ,$

we obtain,

$\frac{d\Phi}{dr} = \beta \biggl(\frac{c_s^2}{r}\biggr) .$

Therefore, the Poisson equation gives,

$\frac{1}{r^2}\frac{d}{dr}\biggl[r^2 \frac{d\Phi}{dr}\biggr] = \beta \biggl(\frac{c_s^2}{r^2}\biggr)= 4\pi G \rho_0 \biggl( \frac{r}{r_0} \biggr)^{-\beta} .$

This relation can be satisfied only if,

$\beta = 2 ~~~~~\mathrm{and}~~~~~ \rho_0 = \frac{c_s^2}{2\pi G r_0^2} .$

Hence, hydrostatic balance can be achieved for an isothermal gas with a power-law density distribution of the form $\rho \propto r^{-2}$.
Because an isothermal $P(\rho)$ equation of state is obtained by setting $n = \infty$ in the more general polytropic equation of state, the result just derived is consistent with the above, more general analysis, which showed that, for values of the polytropic index $n \gg 1$, the equilibrium power-law density distribution tends toward a $\rho \propto r^{-2}$ distribution.
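As a sanity check of the derivation above, the following SymPy sketch (my own, using the concrete index n = 4) plugs $\Theta_H = A\xi^{-\alpha}$ with $\alpha = 2/(n-1)$ and the value of $A$ derived above into the Lane-Emden equation and confirms that the residual vanishes:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
n = 4
alpha = sp.Rational(2, n - 1)                                        # 2/3
A = sp.Rational(2 * (n - 3), (n - 1) ** 2) ** sp.Rational(1, n - 1)  # (2/9)^(1/3)
Theta = A * xi**(-alpha)

lhs = sp.diff(xi**2 * sp.diff(Theta, xi), xi) / xi**2  # Lane-Emden left side
rhs = -Theta**n                                        # Lane-Emden right side
print(sp.simplify(lhs - rhs))                          # -> 0
```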
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328851103782654, "perplexity": 1505.8026608288926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00077.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3890020
# Law Of Restitution And Momentum

by ZxcvbnM2000

Tags: momentum, restitution

P: 64

Let's assume that there are two balls of identical mass, one of the two is stationary, and the collision is NOT head on. I know that the law of restitution always applies ONLY along the line of impact of the two bodies. What about momentum: is it conserved both along the line of impact AND along the line perpendicular to it? I am asking because the law of restitution feels similar to momentum conservation, so I feel like I am applying the same idea "twice". Is momentum conserved along all axes regardless of the value of $e$ ($0\le e\le 1$)?

Thank you!
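A small numerical illustration of the question (my own sketch, assuming smooth identical balls and the standard equal-mass restitution formulas): momentum comes out conserved along both axes for any $e$, while restitution constrains only the impact-line components.

```python
import numpy as np

e = 0.6                                # coefficient of restitution
u1 = np.array([3.0, 1.0])              # incoming velocity of ball 1 (ball 2 at rest)
n = np.array([1.0, 0.0])               # unit vector along the line of impact

u1n = (u1 @ n) * n                     # component along the line of impact
u1t = u1 - u1n                         # tangential component (unchanged if smooth)

v1 = u1t + 0.5 * (1 - e) * u1n         # ball 1 after impact (equal masses)
v2 = 0.5 * (1 + e) * u1n               # ball 2 after impact

print(v1 + v2, u1)                     # equal: momentum conserved on BOTH axes
print(v2 @ n - v1 @ n, e * (u1 @ n))   # equal: restitution acts on the impact line
```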
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142135381698608, "perplexity": 461.34758873607313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/176835
## High Time-Resolved Cardiac Functional Imaging Using Temporal Regularization for Small Animal on a Clinical 3T Scanner

Accurate assessment of mouse cardiac function with magnetic resonance imaging is essential for longitudinal studies and for drug development related to cardiovascular diseases. Since dedicated small-animal MR scanners are not readily available, it would be a great advantage to be able to perform cardiac assessment on clinical systems, in particular in the context of translational research. However, mouse imaging remains challenging, since it requires both high spatial and temporal resolutions, while the gradient performance of clinical scanners often limits the reachable parameters. In this study, we propose a new cine sequence, named "interleaved cine," which combines two repetitions of a standard cine sequence shifted in time in order to reach resolution parameters compatible with mouse imaging. More precisely, this sequence allows the temporal resolution to be reduced to 6.8 ms instead of the 13.5 ms initially imposed by the system's hardware. We also propose a two-step denoising algorithm to suppress some artifacts inherent to the new interleaved cine, allowing an efficient enhancement of the image quality. In particular, we model and suppress the periodic intensity pattern and further denoise the sequence by soft thresholding of the temporal Fourier coefficients. This sequence was successfully validated with mass and function measurements on relevant mouse models of cardiovascular diseases.

Published in: IEEE Transactions on Biomedical Engineering, 59, 929-935

Year: 2012
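The paper's two-step algorithm is not spelled out in this abstract, so the following is only a generic NumPy sketch of the second step it names, soft thresholding of the temporal Fourier coefficients; the function name, threshold value, and array shapes are my own assumptions, not the authors'.

```python
import numpy as np

def soft_threshold_temporal(frames, tau):
    """Denoise a cine stack by soft-thresholding its temporal Fourier coefficients.

    frames: array of shape (T, H, W), one image per cardiac phase.
    tau: threshold; larger values suppress more temporal noise energy.
    """
    coeffs = np.fft.fft(frames, axis=0)       # FFT along the time axis only
    mag = np.abs(coeffs)
    # Shrink magnitudes toward zero, keep phases (complex soft thresholding).
    shrunk = np.maximum(mag - tau, 0.0) * np.exp(1j * np.angle(coeffs))
    return np.fft.ifft(shrunk, axis=0).real

# Toy usage on random data standing in for a 16-phase cine stack.
denoised = soft_threshold_temporal(np.random.randn(16, 8, 8), tau=2.0)
print(denoised.shape)  # (16, 8, 8)
```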
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8479310274124146, "perplexity": 1789.012060736037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00115.warc.gz"}
http://math.stackexchange.com/questions/370566/simple-question-on-factoring-derivatives-with-e
# Simple question on factoring derivatives with "e"

I have a very simple factoring question; I'm doing a calculus problem in which part of the question requires me to factor a derivative. The derivative in question is $e^{-x}tx^{t-1}-e^{-x}x^t$ (the derivative of $\frac{x^t}{e^x}$). I have no problem with finding the derivative, and once the derivative is factored I can easily solve the problem, but I embarrassingly can't figure out how to factor the derivative by hand into the form $-e^{-x}x^{t-1}(x-t)$. I suspect my problem is that I'm running on rote muscle memory of factoring polynomials. I would appreciate a quick walk-through of the hand computations.

- The formatting is a mess: what is $x(t)$, for example? – Thomas Andrews Apr 23 '13 at 17:15
- Presumably, you mean $-e^{-x}(x^{t-1})(x-t)$ in the third formula. – Thomas Andrews Apr 23 '13 at 17:17

Starting with $e^{-x}tx^{t-1}-e^{-x}x^t$, you probably proceeded to $e^{-x}(tx^{t-1}-x^t)$, and then maybe you are overlooking that $x^t=x\cdot x^{t-1}$, so that you recognize both terms hold a factor of $x^{t-1}$. I've seen this kind of blindness before when students struggle to factor things like $x^{1/2}+x^{3/2}$. They sometimes don't immediately see that $x^{1/2}$ is a common factor, since $x^{3/2}=x\cdot x^{1/2}$. It's a good thing to be aware of!

- Thank you, this was exactly what I was overlooking. Thanks also to @CameronWilliams for correcting the formatting. – DeusExCinema Apr 24 '13 at 1:16

I prefer using the product rule, and so I'd rewrite it as $x^te^{-x}$. The derivative of this is $tx^{t-1}e^{-x}-x^te^{-x}$. Both terms have a common factor of $x^{t-1}e^{-x}$, so we can factor that out to get $x^{t-1}e^{-x}(t-x)$. This is as far as it can be factored without further complicating the expression. What seems to be giving you trouble later in the problem?
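A quick symbolic confirmation of the factorisation (my own SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
deriv = sp.diff(x**t * sp.exp(-x), x)            # derivative of x^t / e^x
factored = -sp.exp(-x) * x**(t - 1) * (x - t)    # the target factored form
print(sp.simplify(deriv - factored))             # -> 0, so the two agree
```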
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887275755405426, "perplexity": 308.9863790798912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999650773/warc/CC-MAIN-20140305060730-00045-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/basic-electricity-question.189042/
Basic Electricity question

1. Oct 4, 2007

JeeebeZ

1. The problem statement, all variables and given/known data

Two 120-volt bulbs (one 40 watt, one 100 watt) are connected in series to a variable power supply. Which bulb will light first?

2. The attempt at a solution

So, I'm pretty sure it's the 40 watt bulb because it uses less current. B(40) = 0.3333 A, B(100) = 0.8333 A. Is this correct or... am I just guessing too much?

2. Oct 4, 2007

pooface

It seems logically correct. Yes. Bulbs can light before their maximum current is applied to them, though, meaning the bulbs can be dim and do not need to reach 0.333 A for the first one to start.

3. Oct 5, 2007

Red_Baron

My initial response is that the current will always be the same everywhere at every instant in a non-reactive series circuit. Therefore you can expect a higher voltage drop across the higher-resistance load, which in this case is the lower-wattage lamp. I expect the lower-wattage lamp to glow well before the higher one shows any incandescence. Also, when the voltage has been dialed up all the way, I expect the lower-wattage lamp to still (hot-cathode resistance near its design value) be way ahead of the higher-wattage lamp, at near full brightness, with a comparatively dim glow from the other. REM: the lower-wattage, thus higher-resistance, lamp is the dominating, thus current-controlling, load, and it gets the lion's share of the voltage. Also, the hotter lamp's resistance will have increased more than the colder one's, intensifying the difference between the individual voltage drops even more.

Last edited: Oct 5, 2007
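Red_Baron's point can be made quantitative. A small script (my own; it uses the nominal ratings and ignores the change of filament resistance with temperature) shows the 40 W bulb dissipating 2.5 times the power of the 100 W bulb when they share a series current:

```python
V_supply = 120.0
R40 = 120.0**2 / 40.0     # 360 ohm  (R = V^2 / P at rated operation)
R100 = 120.0**2 / 100.0   # 144 ohm

I = V_supply / (R40 + R100)            # same current everywhere in series
P40, P100 = I**2 * R40, I**2 * R100    # power dissipated in each bulb
print(f"I = {I:.3f} A, P40 = {P40:.1f} W, P100 = {P100:.1f} W")
# ~0.238 A, ~20.4 W vs ~8.2 W: the 40 W (higher-resistance) bulb glows first.
```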
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390184640884399, "perplexity": 2222.7493671739194}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109682.23/warc/CC-MAIN-20170821232346-20170822012346-00198.warc.gz"}
https://www.physicsforums.com/threads/multi-angular-momenta.240194/
# Multi-angular momenta

1. Jun 13, 2008

### RodB

I have a question re angular momentum (L) that's causing some heated discussion. A bullet is fired off-center toward a rotatable target with an axis from, say, 100 m (like a ballistic pendulum). The bullet has a fixed linear momentum and an L that remains constant as it travels toward the target: $|\mathbf{p}\times\mathbf{r}|$ stays constant as $r$ and the angle change in sync. Now say the shooter's aim is off and the bullet will miss the target a bit. In this case the bullet has the same constant numerical value of linear momentum (with a small change in the vector due to the aim) and a similarly constant L, but this L is numerically different from the first. This means that the bullet can have an angular momentum about a different target, separated by distance and angle from the first target (but all in the same fixed, unmoving coordinate system), and that this L is numerically different from the first. The same bullet now has TWO different numerical angular momenta simultaneously. It then follows that the bullet has simultaneous angular momenta with respect to MANY (theoretically infinitely many) "targets". The angular momentum, in this scenario, is not a specific, single, explicit, inherent quality like linear momentum; rather, it is multiple, with each value relative to some other object (axis) within the frame. Does anyone agree or disagree? (A rotating body's L is different and seems specific and inherent, but that's not the debate.)

2. Jun 13, 2008

### rohanprabhu

What you want to say is that the 'moment arm' stays constant, i.e. the value $r\sin(\theta)$ stays constant. Yes, you are right, but the way you state it is wrong. When we talk about angular momentum, or any moment, be it torque or moment of inertia, we need to specify about which axis we are calculating that moment. This is the reason the moment of inertia of, say, a cylinder is different about different axes. About a definite axis, the angular momentum is always unique and an inherent quality. However, you should also note that since a sphere is symmetric about its center, for the sphere the moment of inertia is the same about any axis passing through the center. Take the case of linear momentum. An object having mass 'm' and velocity 'v' has momentum $p = mv$. Now, say you are moving at velocity 'v'; then for you as an observer the velocity of the object is 0 and hence the linear momentum is $p = 0$. Similarly, depending on your frame of reference, even the linear momentum has different values. There is no such thing as 'absolute momentum'. It is always relative to the frame of reference you are viewing it from, and every moment is relative to the axis you choose.

3. Jun 14, 2008

### RodB

Thanks, rohanprabhu. So, in the same frame, linear momentum is fixed (sans any force) and constant no matter how one looks at it. Angular momentum (again in the same frame) of any particular mass is relative to whatever axis one chooses to relate it to. It shouldn't have been that hard! Slap! Slap! As you point out (the obvious), even singular rotating masses have different Ls depending on which axis one chooses.
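A small numerical illustration of the resolution (my own sketch; the mass, velocity, and axis positions are made up): the same bullet has a different L about each choice of axis, computed as $\mathbf{L} = (\mathbf{r} - \mathbf{r}_{axis}) \times \mathbf{p}$.

```python
import numpy as np

m = 0.01                                  # 10 g bullet
v = np.array([400.0, 0.0, 0.0])           # moving along x
r = np.array([0.0, 0.3, 0.0])             # bullet's position at some instant
p = m * v

for axis in (np.array([0.0, 0.0, 0.0]),   # axis of target 1
             np.array([0.0, 2.0, 0.0])):  # axis of a hypothetical target 2
    L = np.cross(r - axis, p)
    print(axis, L)                        # a different L about each axis
```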
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939930260181427, "perplexity": 640.3535350215681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104565.76/warc/CC-MAIN-20170818043915-20170818063915-00261.warc.gz"}
http://mathhelpforum.com/calculus/176186-arc-length-spiral.html
# Math Help - Arc length of a spiral

1. ## Arc length of a spiral

Hi, I need to find the length of the spiral given by r(t) = pi*(t*cos(pi*t) i - t*sin(pi*t) j + t k), from t=0 to t=3.

I have been given the formula: integral sqrt(a^2 + b^2*t^2) dt = (1/2)*t*sqrt(a^2 + b^2*t^2) + (a^2/(2b))*ln(b*t + sqrt(a^2 + b^2*t^2)).

I proceeded to find the dot product of the tangent vectors: r'(t) . r'(t) = (pi*cos(pi*t) - pi^2*t*sin(pi*t))^2 + pi^2*(pi*t*cos(pi*t) + sin(pi*t)) + pi^2.

However, now I have come unstuck; where do I go from here? I was thinking about letting a^2 = (pi*cos(pi*t) - pi^2*t*sin(pi*t))^2, b^2 = (pi*t*cos(pi*t) + sin(pi*t)) + 1, t^2 = pi^2, then substituting into the formula and evaluating at t=3 and t=0. But I really have no idea where to get the parameters a, b, t to use in the formula.

Any help would be greatly appreciated.

2. Originally Posted by olski1

I think you need to check your derivative. I get that

$\mathbf{r}'(t)=\pi[(\cos(\pi t)-t\pi \sin(\pi t))\mathbf{i}+(-\sin(\pi t)-t\pi \cos(\pi t))\mathbf{j}+\mathbf{k}]$

After dotting this with itself and simplifying, I get

$\mathbf{r}'(t) \cdot \mathbf{r}'(t)=\pi^2(\pi^2t^2+2)$

So the integral should be

$\displaystyle \pi \int_{0}^{3}\sqrt{\pi^2t^2+2}\,dt$

Now use the substitution $\displaystyle t= \frac{\sqrt{2}}{\pi}\sinh(x)$
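As a numerical cross-check of the reply above (my own sketch, using SciPy): integrating $\pi\sqrt{\pi^2 t^2+2}$ from 0 to 3, and integrating $|\mathbf{r}'(t)|$ built directly from the components, give the same arc length, about 47.5.

```python
import numpy as np
from scipy.integrate import quad

# Arc length of r(t) = pi*(t cos(pi t), -t sin(pi t), t) on 0 <= t <= 3.
integrand = lambda t: np.pi * np.sqrt(np.pi**2 * t**2 + 2.0)
L, _ = quad(integrand, 0.0, 3.0)

def speed(t):  # |r'(t)| from the component derivatives
    dx = np.pi * (np.cos(np.pi*t) - np.pi*t*np.sin(np.pi*t))
    dy = np.pi * (-np.sin(np.pi*t) - np.pi*t*np.cos(np.pi*t))
    dz = np.pi
    return np.sqrt(dx*dx + dy*dy + dz*dz)

L2, _ = quad(speed, 0.0, 3.0)
print(L, L2)   # both ~47.5, confirming the simplified integrand
```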
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331898093223572, "perplexity": 2086.425086014053}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430451452451.90/warc/CC-MAIN-20150501033732-00030-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/4024
## Description

Title: A Nonparametric Analysis of the Forward Rate Volatilities
Author(s): Pearson, Neil D.; Zhou, Anjun
Subject(s): univariate model; bivariate model
Abstract: Heath, Jarrow, and Morton (1992) present a general framework for modeling the term structure of interest rates which nests most other models as special cases. In their framework, the dynamics of the term structure and the prices of derivative instruments depend only upon the initial term structure and the forward rate volatility functions. Despite their importance, there has been little empirical work studying the forward rate volatility functions. This paper begins to fill this gap by estimating some nonparametric models of the forward rate volatilities. In a univariate model, the form of the forward rate volatility function differs for different maturities, and for some maturities appears not to be a monotonic function of the level of the forward rate. In a bivariate model, a measure of the "slope" of the term structure seems to have an important impact on the volatility. These results differ from the simple models that have been proposed and used in the literature.
Issue Date: 1999-10
Publisher: Office for Futures and Options Research, Department of Agricultural Economics, College of Agricultural, Consumer, and Environmental Sciences at the University of Illinois at Urbana-Champaign
Series/Report: OFOR Working Paper Series, no. 99-05
Genre: Working / Discussion Paper
Type: Text
Language: English
URI: http://hdl.handle.net/2142/4024
Publication Status: published or submitted for publication
Peer Reviewed: not peer reviewed
Date Available in IDEALS: 2008-03-17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115874171257019, "perplexity": 1765.9594563641406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718311.12/warc/CC-MAIN-20161020183838-00360-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/frequency-of-light-question.35619/
# Frequency Of Light Question

1. Jul 18, 2004

### zoobyshoe

I am starting to slowly read QED, by Richard Feynman. He is quite adamant that light is particles, and not waves, and he cites the evidence of the action of the device called the photomultiplier to explain his stance. I find this argument completely convincing. However, looking at the photon as a particle causes me some confusion whenever he, or anyone, mentions the frequency of light. Viewed as a particle, I have no idea whatever what is occurring at a given frequency to say that it even has a frequency. Sometimes I wonder if this means that so many particles per unit time pass a given point. At other times I wonder if each individual photon possesses its own frequency in some regard: is it spinning at a certain number of spins per unit time, or is it, maybe, undulating somehow a certain number of times per unit time? So, my question is: "Does an individual photon have a frequency, and if so, what is it doing at that frequency?" -Zooby

2. Jul 18, 2004

### jamie

If you believe in string theory, then a single photon will have frequency. But if you stick to classical physics you will encounter the wave/particle duality. This says that light has both wave and particle properties, and many experiments have reinforced this, especially Young's double slit. Regards, jamie

3. Jul 18, 2004

### Tyger

The frequency is defined as the change of quantum mechanical phase per unit time. That is to say, the quantum mechanical phase associated with the amplitude for the state: photon, electron, etc. Analogously, the wavenumber is the QM phase change per unit length. They are associated with the classical quantities energy and momentum, which combine to make a four-vector, the four-momentum or four-wavenumber. The factor of conversion is h, Planck's constant. It will get a good deal more interesting and complicated before you really begin to understand what is happening. The waves interfere with each other constructively and destructively in space and time, e.g. in the two-slit experiment, producing effects which are not describable by classical mechanics.

4. Jul 18, 2004

### speeding electron

In 'traditional' quantum theory, if you like, i.e. before De Broglie, Schroedinger and wavefunctions etc., each photon has a frequency, which manifests itself in the energy of each photon. If the light has a higher frequency, it means that each photon is more energetic, -not- that more photons pass per unit time. This (more photons passing) means that the light has greater intensity.

5. Jul 18, 2004

### rayjohn01

Limited wave

In my book on early QM they used the classical description of a wave bunch to get the idea over. If you take several sine waves and add them together (especially ones whose frequencies have harmonic relations), the effect is to produce a waveform with a modulated amplitude which looks like bunches of waves (the bunches are repeated). They then go on to imagine an infinite set of sines with a limited bandwidth; the result is that you end up with one wave bunch of a given length and shape. This bunch has an average frequency related to the group chosen. The maths is based on the Fourier transform -- standard maths. This bunch, however, exhibits properties similar to the quantum mechanical concept of property pairs, momentum versus position etc., and in fact can be used to deduce an approximation for Planck's constant h.
Although limited, the model lets you ask questions such as: how long is a photon? It lets you picture more energy by showing how higher frequencies result in more waves in a bunch, and it allows for self-interference over some length range, which lets you picture the double-slit experiment (however inaccurately). I am not sure if this analogy is still used; QM has come a long way, but it does help envisage a particle (localised) with wave properties, and the maths is real.

6. Jul 19, 2004

### zoobyshoe

Thanks everyone. These answers have been pretty helpful. I am clear now about the fact that the frequency of light has nothing to do with the number of photons passing a given point per unit time, which is a good notion to be rid of, and that it does have to do with something the photon itself is doing at a certain frequency: changing its quantum mechanical phase. The rest will take me some time to mull over and sort out. Thanks, -Zooby

7. Jul 19, 2004

Staff Emeritus

Not phase but energy and momentum. For a massless particle in relativity they are numerically equal. The frequency is the momentum divided by Planck's constant.

8. Jul 19, 2004

### dlgoff

From your knowledge of bosonic string theory, does the string representing the photon operate in the curled-up dimensions? It seems like maybe it only has to be in 4-d spacetime. Does the photon frequency (or momentum) directly relate to any string parameter? Don

9. Jul 20, 2004

Staff Emeritus

The oscillations of the string are what (are supposed to) generate the particles we experience. The string oscillates in all the transverse dimensions (that is, the dimensions not internal to its world-sheet). This includes the compacted ones. There are exceptions where the string has its endpoints on two different branes and the branes are in various configurations.

10. Jul 21, 2004

### stewarta

Yes, but what if it is all wrong... I know, I know... but what if all photons from a standard light source weren't identical... what if they had a range of speeds, and the result of viewing them all was white light? I think that if we measured light for specific speeds we would find slow light and fast light. I don't have much of a reference for this idea but my own, but it fits, and explains a lot, especially when put to the test by the "red shift-blue shift" of a black hole.

11. Jul 21, 2004

### zoobyshoe

I would imagine that someone has done this. Since the invention of the laser it is possible to have light all of one "color", i.e. frequency, to test. I haven't heard of it, but I would be surprised if someone hadn't already checked to see if there is a different speed noticeable for different specific colors. It seems, too, pretty obvious from what has been pointed out in this thread, that all photons are different from each other in many regards. (The single regard in which they aren't suspected of differing is the speed at which they all propagate.)

Last edited: Jul 21, 2004

12. Jul 21, 2004

### ZapperZ

Staff Emeritus

http://www.aip.org/enews/physnews/1999/split/pnu432-2.htm [Broken]

[Exact citation: B. Schaefer, PRL v.82, p.4964 (1999).]

To date, that is the most accurate measurement of light of different frequencies. This means that we have no experimental evidence, using the best technique we have so far, validating your "what ifs".

Zz.

Addendum: This just came out: a new experimental
measurement of the speed of light at extremely low frequencies, in the range of 5 to 50 Hz! [1] Again, no detectable deviation, and based on the accuracy and limitations of the measurement, this puts the upper limit on any possible photon rest mass at less than 4 x 10^-52 kg (!!). This number keeps getting smaller and smaller with each subsequent refinement and improved measurement.

[1] M. Fullekrug, PRL v.93, p.043901 (2004).

Last edited by a moderator: Apr 21, 2017 at 7:34 AM

13. Aug 16, 2004

### zoobyshoe

How, in practical terms, did Feynman figure out the beginning and ending direction of the stopwatch pointer? The concept is clear to me, but I don't see how anyone could accurately time a photon over such a short distance at such an enormously fast frequency. What is the trick to this?

14. Aug 16, 2004

### zoobyshoe

I am familiar with Fizeau and MM, but it occurred to me that some different means of measuring the speed of light must be used for the very faint light from stars and events in far space. How do they gather such light, and measure its speed these days?

15. Aug 16, 2004

### kawikdx225

I also started reading QED a few days ago and I think you misunderstand what he was doing with the arrows. He wasn't measuring the speed of light; he already knew that. And since he also knew the distance, he could calculate the time. That's how he knew how far the clock hand would move. Please correct me if I'm wrong, I'm just learning this stuff too.

16. Aug 16, 2004

### kawikdx225

OOPS, I meant to quote post #13. I don't think the starting position of the clock is important as long as it's used for all arrows; it's the difference between arrows that gives the result.

17. Aug 16, 2004

### zoobyshoe

Please. I KNOW he's not measuring the speed of light. This is what I'm asking: how could he know the distance when a photon might take any possible path? Any photon that is reflected and picked up by the photomultiplier may have reflected off the front or the rear surface. He implies that he is able to time the photons and derive the length of the path from the time.

18. Aug 16, 2004

### kawikdx225

Oh sorry, we must be reading different books.

19. Aug 16, 2004

### NEOclassic

All photons of length <15 nm to >9000 nm are particles

The range noted in the title dwarfs the visible range of ~400 nm to 900 nm. It should be remembered that, for hundreds of years and long before the velocity of light was measured, spectral emissions were known only by their lengths; only after "c" was known was it possible to compute a pseudo-frequency: f = c/photon length. On the contrary, Faraday/Maxwell AC radio transmissions rightly have a reciprocal relation because they are truly electromagnetic waves. Incidentally, although RPF's "QED" includes a 40-page Chap 2 (Photons: Particles of Light), the second chapter of Vol I of his lectures, page 2-5, shows that light is listed with the acknowledged wave phenomena of radio, radar, etc. -- of course it should have been listed with his particles group that included UV-, x- and gamma-particles (i.e., not rays). Cheers, Jim

20. Aug 17, 2004

### zoobyshoe

What do you mean by the term "spectral emission"? What do you mean by "pseudo-frequency"? Are you saying the frequencies we ascribe to light are guesses or estimates, or are you saying the whole concept of light having a frequency is "pseudo"? What do you mean by "Faraday/Maxwell AC radio transmissions"? (It's the "radio" part I'm wondering about. Obviously those two didn't work with radio.) And a reciprocal relation to what?
Yes, the whole issue of EM waves vs photons (light) is one I was going to ask about specifically at some point. Maxwell, still aether-bound, asserted they were two related disturbances in the same medium, but not that they were exactly the same thing. Einstein, if I recall correctly, believed there was a kind of threshold beyond which EM became photons, but I don't know the details. They both arise in the electric field, but I'm wondering how clear anyone is about the cutoff, or threshold, between the two. I have been under the impression that as low as infra-red we already have photons.
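For reference on the relations quoted earlier in the thread (frequency tied to energy and momentum through Planck's constant), here is a small worked example (my own, for a 532 nm green photon):

```python
# f = c / lambda, E = h f, p = h / lambda -- standard photon relations.
h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s

lam = 532e-9       # wavelength of a green laser photon, m
f = c / lam        # ~5.6e14 Hz
E = h * f          # ~3.7e-19 J
p = h / lam        # ~1.2e-27 kg m/s

print(f"f = {f:.3e} Hz, E = {E:.3e} J ({E/1.602e-19:.2f} eV), p = {p:.3e}")
```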
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8400064706802368, "perplexity": 877.8870791853542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00429-ip-10-145-167-34.ec2.internal.warc.gz"}
https://onionesquereality.wordpress.com/tag/manifold-learning/
## The Jacobian Inner Product

This post may be considered an extension of the previous post. The setup and notation are the same as in the previous post (linked above). But to summarize: earlier we had an unknown smooth regression function $f: \mathbb{R}^d \to \mathbb{R}$. The idea was to estimate, at each training point, the gradient of this unknown function $f$, and then take the sample expectation of the outer product of the gradient. This quantity has some interesting properties and applications. However, it has its limitations; for one, the mapping $f: \mathbb{R}^d \to \mathbb{R}$ restricts the gradient outer product to being helpful only for regression and binary classification (since for binary classification the problem can be thought of as regression). It is not clear if a similar operator can be constructed when one is dealing with multiclass classification, that is, when the unknown smooth function is a vector-valued function $f: \mathbb{R}^d \to \mathbb{R}^c$, where $c$ is the number of classes (let us say, for the purpose of this discussion, that for each data point we have a probability distribution over the classes, a $c$-dimensional vector).

In the case of the gradient outer product, since we were working with a real-valued function, it was possible to define the gradient at each point, which is simply:

$\displaystyle \Bigg[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_d} \Bigg]$

For a vector-valued function $f: \mathbb{R}^d \to \mathbb{R}^c$, we can't have the gradient, but can instead define the Jacobian at each point:

$\displaystyle \mathbf{J} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \dots & \frac{\partial f_1}{\partial x_d} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_c}{\partial x_1} & \frac{\partial f_c}{\partial x_2} & \dots & \frac{\partial f_c}{\partial x_d}\end{bmatrix}$

Note that $\mathbf{J}$ may be estimated in a similar manner as estimating gradients, as in the previous posts. This leads us to define the quantity $\mathbb{E}_X G(X) = \mathbb{E}_X ( \mathbf{J}^T \mathbf{J})$. The first thing to note is that $\mathbb{E}_X G(X) = \mathbb{E}_X ( \nabla f(X)\nabla f(X)^T)$, defined in the previous post, is simply this quantity for the special case $f: \mathbb{R}^d \to \mathbb{R}$. Another note is also in order: the reason why we suffixed that quantity with "outer product" (as opposed to "inner product" here) is simply because we considered the gradient to be a column vector; otherwise they are similar in spirit.

Another thing to note is that the quantity $\mathbb{E}_X G(X) = \mathbb{E}_X ( \mathbf{J}^T \mathbf{J})$ is easily seen to be a positive semi-definite matrix and hence is a Riemannian metric, which is defined below:

Definition: A Riemannian metric $G$ on a manifold $\mathcal{M}$ is a symmetric and positive semi-definite matrix, which defines a smoothly varying inner product in the tangent space $\mathbf{T}_x \mathcal{M}$, for each point $x \in \mathcal{M}$ and $a, b \in \mathbf{T}_x \mathcal{M}$. This associated p.s.d. matrix is called the metric tensor.

In the above case, since $\mathbb{E}_X G(X) = \mathbb{E}_X ( \mathbf{J}^T \mathbf{J})$ is p.s.d., it defines a Riemannian metric:

$\langle a, b \rangle_x = a^T \mathbb{E}_X ( \mathbf{J}^T \mathbf{J}) b$

Thus, $\mathbb{E}_X ( \mathbf{J}^T \mathbf{J})$ is a specific metric (more general metrics are dealt with in areas such as metric learning).
Properties: We saw some properties of $\mathbb{E}_X G(X) = \mathbb{E}_X ( \nabla f(X)\nabla f(X)^T)$ in the previous post. In the same vein, does $\mathbb{E}_X G(X) = \mathbb{E}_X ( \mathbf{J}^T \mathbf{J})$ have similar properties? I.e., does the first eigenvector also correspond to the direction of highest average variation? What about the $k$-dimensional subspace? What difference does it make that we are looking at a vector-valued function? Also, what about the cases when $d > c$ and otherwise? These are questions that I need to think about, and they should be the topic of a future post, to be made soon, hopefully.

Recently, in the course of a project that I had some involvement in, I came across an interesting quadratic form. It is called in the literature the Gradient Outer Product. This operator, which has applications in supervised dimensionality reduction, inverse regression, and metric learning, can be motivated in two (related) ways, but before doing so, the following is the setup:

Setup: Suppose we have the usual setup for nonparametric regression and (binary) classification, i.e. let $Y \approx f(X)$ for some unknown smooth $f$, where the input $X$ is $d$-dimensional, $X = (X^i)_{i=1}^d$.

1. Supervised Dimensionality Reduction: It is often the case that $f$ varies most along only some relevant coordinates. This is the main motivation behind variable selection. The idea in variable selection is the following: $f(X)$ may be written as $f(PX)$ where $P \in \{0,1\}^{k \times d}$. $P$ projects the data down to only $k$ relevant coordinates (i.e. some features are selected by $P$ while others are discarded). This idea is generalized in Multi-Index Regression, where the goal is to recover a subspace most relevant to prediction. That is, now suppose the data varies significantly along all coordinates but still depends on some subspace of smaller dimensionality. This might be achieved by letting $P$ from the above be $P \in \mathbb{R}^{k \times d}$. It is important to note that $P$ is not just any subspace, but rather the $k$-dimensional subspace to which, if the data is projected, the regression error would be the least. This idea might be further generalized by mapping $X$ to some $P$ non-linearly, but for now we only stick to the relevant subspace. How can we recover such a subspace?

________________

2. Average Variation of $f$: Another way to motivate this quantity is the following: suppose we want to find the direction in which $f$ varies the most on average, or the direction in which $f$ varies the second fastest on average, and so on. Or, more generally, given any direction, we want to find the variation of $f$ along it. How can we recover these?

________________

The Expected Gradient Outer Product: The expected gradient outer product of the unknown classification or regression function is the quantity:

$\mathbb{E}_X G(X) = \mathbb{E}_X ( \nabla f(X)\nabla f(X)^T)$

The expected gradient outer product recovers the average variation of $f$ in all directions. This can be seen as follows: the directional derivative at $x$ along $v \in \mathbb{R}^d$ is given by $\displaystyle {f'}_v(x) = \nabla f(x)^T v$, so $\mathbb{E}_X |{f'}_v(X)|^2 = \mathbb{E}_X (v^T G(X) v) = v^T (\mathbb{E}_X G(X))v$.

From the above it follows that if $f$ does not vary along $v$, then $v$ must be in the null space of $\mathbb{E}_X (G(X))$. In fact, it is not hard to show that the relevant subspace $P$ as defined earlier can also be recovered from $\mathbb{E}_X (G(X))$. This fact is given in the following lemma.
Lemma: Under the assumed model, i.e. $Y \approx f(PX)$, the gradient outer product matrix $\mathbb{E}_X (G(X))$ is of rank at most $k$. Let $\{v_1, v_2, \dots, v_k \}$ be the eigenvectors of $\mathbb{E}_X (G(X))$ corresponding to its top $k$ eigenvalues. Then the following is true:

$span(P) = span(v_1, v_2, \dots, v_k)$

This means that a spectral decomposition of $\mathbb{E}_X (G(X))$ recovers the relevant subspace. Also note that the gradient outer product corresponds to a kind of supervised version of Principal Component Analysis.

________________

Estimation: Of course, in real settings the function is unknown and we are only given points sampled from it. There are various estimators for $\mathbb{E}_X (G(X))$, which usually involve estimation of the derivatives. In one of them, the idea is to estimate, at each point $x$, a linear approximation to $f$. The slope of this approximation approximates the gradient at that point. Repeating this at the $n$ sample points gives a sample gradient outer product. There is some work that shows that some of these estimators are statistically consistent.

________________

Related: Gradient-Based Diffusion Maps: The gradient outer product cannot isolate local information or geometry, and its spectral decomposition, as seen above, gives only a linear embedding. One way to obtain a non-linear dimensionality reduction would be to borrow from and extend the idea of diffusion maps, which are well-established tools in semi-supervised learning. The central quantity of interest for diffusion maps is the graph Laplacian $L = I - D^{-\frac{1}{2}} W D^{-\frac{1}{2}}$, where $D$ is the degree matrix and $W$ the adjacency matrix of the nearest-neighbor graph constructed on the data points. The non-linear embedding is obtained by a spectral decomposition of the operator $L$ or its powers $L^t$. As above, a similar diffusion operator may be constructed by using local gradient information. One such possible operator could be:

$\displaystyle W_{ij} = W_{f}(x_i, x_j) = exp \Big( - \frac{ \| x_i - x_j \| ^2}{\sigma_1} - \frac{ | \frac{1}{2} (\nabla f(x_i) + \nabla f(x_j)) (x_i - x_j) |^2 }{\sigma_2}\Big)$

Note that the first term is the same as that used in unsupervised dimension reduction techniques such as Laplacian eigenmaps and diffusion maps. The second term can be interpreted as a diffusion on function values. This operator gives a way to do non-linear supervised dimension reduction using gradient information. The above operator was defined here; however, no consistency results for it are provided.

Also see: The Jacobian Inner Product.

________________
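The local-linear estimator sketched above is easy to prototype. The following NumPy sketch (my own; the neighbourhood size, the toy function, and the nearest-neighbour rule are arbitrary choices, and this is not the exact consistent estimator from the literature) recovers a nearly rank-one EGOP when $f$ depends only on $x_1 + x_2$:

```python
import numpy as np

def egop(X, y, k=20):
    """Estimate E[grad f grad f^T] via local linear fits at each sample point."""
    n, d = X.shape
    G = np.zeros((d, d))
    for i in range(n):
        idx = np.argsort(np.linalg.norm(X - X[i], axis=1))[:k]  # k nearest points
        Z = np.hstack([np.ones((k, 1)), X[idx] - X[i]])         # local affine design
        beta, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)
        g = beta[1:]                                            # local gradient estimate
        G += np.outer(g, g)
    return G / n

# Toy check: f depends only on x0 + x1, so the EGOP should be ~rank 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0] + X[:, 1])
print(np.round(np.linalg.eigvalsh(egop(X, y)), 3))  # one dominant eigenvalue
```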
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 79, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9721553325653076, "perplexity": 181.50280147839638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00164.warc.gz"}
https://tex.stackexchange.com/questions/287387/siunitx-the-detect-all-option-puts-celsius-in-math-mode
# siunitx: the detect-all option puts \celsius in math mode? [closed]

Something is wrong with the Celsius unit and the detect-all option (see below). I finally removed the detect-all option to fix the problem, but still, there may be some bug here.

MWE:

    \documentclass[11pt,a4paper,openright,twoside]{book}
    \usepackage[utf8]{inputenc}
    \usepackage[T1]{fontenc}
    \usepackage{siunitx}
    \begin{document}
    \chapter{What is happening here?}
    Good: \SI{50}{\celsius}, \SI{30}{\watt\per\meter}.\\
    \sisetup{detect-all}
    But after \verb=\sisetup{detect-all}=, the celsius unit (and only this one)
    is in math mode: \SI{50}{\celsius}, \SI{30}{\watt\per\meter}.
    \end{document}

## closed as off-topic by LaRiFaRi, Zarko, moewe, Svend Tveskæg, Mensch Mar 15 '16 at 14:53

- This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center. If this question can be reworded to fit the rules in the help center, please edit the question.

- @LaRiFaRi I think I know what's going on here: trying to satisfy all requirements is hard! – Joseph Wright Jan 13 '16 at 9:06
- Bibi, no worries, Joseph did not say that you are asking for something special. I guess he was just a bit resigned, as he was introducing some fix for some other issue just two weeks ago. As you have the most recent TL version, it seems as if you have found some more work for him... We'll see. – LaRiFaRi Jan 13 '16 at 9:17
- I'll need to think a bit to find a solution that works for all of the issues: may take a day or so. – Joseph Wright Jan 13 '16 at 9:30
- @JosephWright: I just noticed the problem is gone! Thanks! – Bibi Feb 2 '16 at 17:32
- I'm voting to close this question as off-topic because it is about a problem which has been fixed in the meantime. – LaRiFaRi Mar 15 '16 at 14:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313774466514587, "perplexity": 1521.7720590234608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00039.warc.gz"}
https://meridian.allenpress.com/radiation-research/article-abstract/124/3/326/39009/Physical-Basis-for-Detection-of-DNA-Double-Strand?redirectedFrom=fulltext
Results using neutral filter elution are difficult to explain if this method detects only DNA double-strand breaks (DSBs). In an attempt to understand neutral filter elution, the size of DNA pieces eluted from filters was measured using pulsed-field gel electrophoresis. Contrary to expectation, the size of the pieces was independent of radiation dose and time of elution, and much smaller (∼460 kb) than anticipated based on the expected number of DSBs induced. Shearing of the DNA molecule, the presence of nonspecific nucleases, and the influence of DNA-associated proteins were examined but could not explain our results. Consequently, we propose that cell lysis causes swelling of the DNA gel, and the exposed fraction of DNA on the surface of the gel is then sheared as the elution solution flows through the filter. We suggest that the rate of DNA elution measured using neutral filter elution is dependent upon the number of DSBs present, the composition of the eluting solution, especially with regard to the presence of molecules which can influence chromatin swelling on the filter, and the conformation or "packaging" of DNA before lysis.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062755107879639, "perplexity": 2691.6009015077625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00540.warc.gz"}
https://www.physicsforums.com/threads/mesons-baryons-and-leptons.43665/
# Mesons, Baryons, and Leptons

1. Sep 19, 2004

### KaneOris

Does anyone know how many there are, and also how many are just theoretical? We know of the proton, neutron, and electron, but do we know that taus and muons exist? Also, does anyone think we'll keep finding more?

2. Sep 19, 2004

### misogynisticfeminist

All particles in the standard model have been confirmed experimentally: muons, taus, etc. The only things left are the Higgs and sparticles, which are entirely theoretical.

3. Sep 19, 2004

### marlon

4. Sep 19, 2004

### mathman

Most of these elementary particles were discovered before the quark model was created. I remember seeing a discussion on PBS with Oppenheimer about all the various mesons, speculating about the existence of some underlying theory to explain what was then considered a mess.

5. Sep 30, 2004

### Mk

Yes, the muon and tau were both discovered before any quark theories.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9709926247596741, "perplexity": 2030.1118545681695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541696.67/warc/CC-MAIN-20161202170901-00073-ip-10-31-129-80.ec2.internal.warc.gz"}
https://groupprops.subwiki.org/wiki/Sylow_not_implies_CDIN
# Sylow not implies CDIN

This article gives the statement, and possibly proof, of a non-implication relation between two subgroup properties. That is, it states that every subgroup satisfying the first subgroup property (i.e., Sylow subgroup) need not satisfy the second subgroup property (i.e., CDIN-subgroup).

## Statement

We can have a group $G$ with a Sylow subgroup $P$ and elements $x,y \in P$ such that $x$ and $y$ are conjugate in $G$ but not in $N_G(P)$. In other words, $P$ is not a CDIN-subgroup.

## Related facts

• Sylow and TI implies CDIN: If we assume the additional condition that the Sylow subgroup intersects all its distinct conjugates trivially, then any two elements of it that are conjugate in the whole group are, in fact, conjugate in its normalizer.

## Proof

### Example of the symmetric group of degree four

Further information: symmetric group:S4, dihedral group:D8

Let $G$ be the symmetric group of degree four, say, on the set $\{ 1,2,3,4 \}$. Let $P$ be a $2$-Sylow subgroup of $G$, say:

$P = \{ (), (1,2,3,4), (1,3)(2,4), (1,4,3,2), (1,3), (2,4), (1,2)(3,4), (1,4)(2,3) \}$.

Note that $P$ is a $2$-Sylow subgroup of $G$, and is a dihedral group of order eight.

Consider the elements of $P$ given by:

$x = (1,3)(2,4), \qquad y = (1,2)(3,4)$.

• $x$ and $y$ are conjugate in $G$: The element $(2,3)$, for instance, conjugates $x$ to $y$.
• $P = N_G(P)$: Since $P$ has index three in $G$, either $N_G(P) = P$ or $N_G(P) = G$. But the element $(1,2)$, for instance, does not normalize $P$. This forces $N_G(P) = P$.
• $x$ and $y$ are not conjugate in $P = N_G(P)$: In fact, $x$ is in the center of $P$, and $y$ is not in the center of $P$.
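The example is small enough to verify by machine. Below is a sketch using sympy (an assumption; the article itself contains no code). sympy permutations act on {0,1,2,3}, so the points {1,2,3,4} above are shifted down by one, and the brute-force normalizer computation is an illustrative helper, not a sympy API.

```python
# Sanity check of the S4 example with sympy (assumed available).
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)
r = Permutation([[0, 1, 2, 3]], size=4)        # (1,2,3,4) in the page's notation
s = Permutation([[0, 2]], size=4)              # (1,3)
P = PermutationGroup([r, s])                   # the dihedral 2-Sylow subgroup above
assert P.order() == 8

x = Permutation([[0, 2], [1, 3]], size=4)      # (1,3)(2,4)
y = Permutation([[0, 1], [2, 3]], size=4)      # (1,2)(3,4)

# x and y are conjugate in G ...
assert any(g * x * g**-1 == y for g in G.elements)

# ... the normalizer of P is P itself (brute force is fine for |G| = 24) ...
P_elems = set(P.elements)
normalizer = {g for g in G.elements
              if all(g * p * g**-1 in P_elems for p in P_elems)}
assert normalizer == P_elems

# ... but no element of N_G(P) = P conjugates x to y.
assert all(g * x * g**-1 != y for g in P_elems)
```

All three assertions pass, matching the three bullet points of the proof.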
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963106632232666, "perplexity": 294.2140143327337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370503664.38/warc/CC-MAIN-20200331181930-20200331211930-00356.warc.gz"}
http://codingforums.com/php/76447-internal-links.html?pda=1
Will internal links work in PHP? Like for an include? E.g.:

PHP Code:
```php
include("http://192.168.1.200/something.txt");
```

I have like 4 servers on my network and they are all paths after a domain, but they have names (server1, serv...), and it would be a lot easier to type include("http://server3/file.daf"); than some long path.

Also, while writing this I came up with another question. Say I have 2 servers (server1 and server2). Server 1 is on my network but server 2 is remotely hosted. I have an include on server2 that points to a file on server 1. That file is include.php. In that file are includes to a dir on server 1. My question is: what is processed first? Will the dir be assumed to be on server1 or server2? Sorry if I lost ya in that last one, and it's not that important anyway, I was just wondering...

Thanks, ILLINI

• include("http://192.168.1.200/something.txt"); would work assuming the computer at .200 has a webserver running. Note that you can't get the file source (of, say, PHP/Perl etc.) this way, since the webserver will already have parsed it; to include PHP files for later processing you would want to access the network path \\network-server1\\files\\include.php. Includes will be relative to the calling script, so if blah.php is on server4 then the script will look for relative paths on server4, not 2 or 1 or 3 etc.
• OK, thank you. I was just wondering, because I did it but have no way of testing it, since the addresses would work for me anyway because they are on my network.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8119862675666809, "perplexity": 2553.179328249876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869404.51/warc/CC-MAIN-20150124161109-00017-ip-10-180-212-252.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/275855/explanation-of-two-integral-equations-and-implementation
# Explanation of two integral equations and implementation

I have a problem with the two equations shown in the pictures.

Equation 1: (image not available)

Equation 2: (image not available)

1. I have two vectors representing the C(m) and S(m) in the two equations. I am trying to implement these equations in Matlab. Instead of doing a continuous integral operation, I think I should do a summation. For example, for the first equation: A1 = sqrt(sum(C.^2)); Am I right? Also, I am not sure how to implement equation two, which contains a ||dM||. Please help.

2. What is the mathematical meaning of these two equations? I think the first one may be related to the 'sum of squares'; if C(m) is a vector, then this equation would measure the total variance of the random variable in vector C, or some kind of average of vector C? What about the second one?

Thanks very much for your help! A.
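Since the two equations are only available here as images, the following is a hedged sketch under the assumption that equation 1 has the form A1 = sqrt( integral of C(m)^2 dm ). The grid `m` and the data `C` are placeholders. The point it illustrates: a discretized integral needs the spacing dm as a quadrature weight, so sqrt(sum(C.^2)) is only correct if the samples happen to be spaced with dm = 1.

```python
import numpy as np

m = np.linspace(0.0, 1.0, 201)      # assumed sample grid for the parameter m
C = np.sin(2 * np.pi * m)           # placeholder data standing in for C(m)

A1_sum  = np.sqrt(np.sum(C**2))          # plain "sum of squares" (ignores dm)
A1_quad = np.sqrt(np.trapz(C**2, m))     # trapezoidal rule: includes dm

print(A1_sum, A1_quad)   # ~10.0 vs ~0.707; only the second approximates the integral
```

For equation 2, if ||dM|| denotes an arc-length element along a sampled curve M, the analogous discretization would use the segment lengths, e.g. np.linalg.norm(np.diff(M, axis=0), axis=1), as the weights; again an assumption, since the equation itself is not visible.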
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.959368884563446, "perplexity": 416.111089030614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117783.16/warc/CC-MAIN-20160428161517-00023-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/68170-order-group.html
# Math Help - order of the group

1. ## order of the group

Let G be a group and x belong to G. Define ord(x) = min{r >= 1 : x^r = 1}. If f: G → H is an injective group homomorphism, show that, for each x in G, ord(f(x)) = ord(x).

The approach I came up with is that I need to show that ord(f(x)) divides r, and hence the result is proved. Or: I can somehow show that ord(f(x)) = min{ r >= 1 : (f(x))^r = 1 }, which is just the definition.

Can you tell me whether the above approaches are right or wrong? If they are wrong, please show me how to solve this question.

2. Originally Posted by knguyen2005
Let G be a group and x belong to G. Define ord(x) = min{r >= 1 : x^r = 1}. If f: G → H is an injective group homomorphism, show that, for each x in G, ord(f(x)) = ord(x).

Let $\phi: G\to H$ be an injective homomorphism, and let $x\in G$ (assuming it has finite order) have order $k$. Thus $x^k = e$, but then $\phi(x)^k = \phi(x^k) = e'$. Say there is $j<k$ with $\phi(x)^j = e'$; but then $\phi(x^j) = e' \implies x^j = e$. This is a contradiction because $j<k$, and so $k$ is the order of $\phi(x)$.

3. "Thus $x^k = e$, but then $\phi(x)^k = \phi(x^k) = e'$. Say there is $j<k$ with $\phi(x)^j = e'$; but then $\phi(x^j) = e' \implies x^j = e$."

I am not sure I understand how you got $\phi(x)^k = \phi(x^k) = e'$. And finally, when you concluded that $\phi(x^j) = e' \implies x^j = e$, how can you imply this result? Thank you again for your precious time.

4. "I am not sure I understand how you got $\phi(x)^k = \phi(x^k) = e'$."

$\phi(x^k) = \phi(x)^k$ because $\phi$ is a homomorphism. Since $x^k = e$ and $\phi$ is a group homomorphism, it maps identity to identity. Thus $\phi(x)^k = \phi(x^k) = e'$.

"And finally, when you concluded that $\phi(x^j) = e' \implies x^j = e$, how can you imply this result?"

He assumed that $\phi$ is injective (or one-one). And $\phi(x^j) = e' = \phi(e)$, thus injectivity forces $\phi(x^j) = \phi(e) \implies x^j = e$.

5. X ∈ G, ord(X) = n
X^n = e
f(X^n) = f(e)
f(X·X·X·…·X) = e'
f(X)·f(X)·…·f(X) = e'
(f(X))^n = e'
(f(X))^n = f(X^n) = e'
ord(f(X)) = n = ord(X)

6. Thanks very much, you guys. Now I understand it. It makes sense to me due to your clear explanation. Cheers
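As a quick concrete illustration of the statement (not from the thread itself), one can check a particular injective homomorphism numerically. Here f is conjugation by a fixed element of S4, which is an injective homomorphism (indeed an inner automorphism); sympy is an assumed dependency:

```python
# Check ord(f(x)) = ord(x) for an injective homomorphism f, here conjugation.
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)
t = Permutation([[0, 1, 2]], size=4)            # a fixed element of G

f = lambda x: t * x * t**-1                     # injective homomorphism G -> G
assert all(f(x).order() == x.order() for x in G.elements)
```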
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950710654258728, "perplexity": 1944.1174318770857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654872/warc/CC-MAIN-20140305060734-00023-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-7th-edition/chapter-3-section-3-2-radians-and-degrees-3-2-problem-set-page-134/94
## Trigonometry 7th Edition

$({\frac{\pi}{6},5}),({{\frac{\pi}{3}},{\frac{5}{2}})},({\frac{2\pi}{3},-5}),({{\pi},{\frac{5}{2}}}),({\frac{7\pi}{6},5})$

Given $y=5 \cos(2x-\frac{\pi}{3})$:

when $x={\frac{\pi}{6}}$ then $y=5\cos{0}=5$
when $x={\frac{\pi}{3}}$ then $y=5\cos{\frac{\pi}{3}}={\frac{5}{2}}$
when $x={\frac{2\pi}{3}}$ then $y=5\cos{\pi}=-5$
when $x={{\pi}}$ then $y=5\cos{\frac{5\pi}{3}}={\frac{5}{2}}$
when $x={\frac{7\pi}{6}}$ then $y=5\cos{2\pi}=5$

Then the final answer is: $({\frac{\pi}{6},5}),({{\frac{\pi}{3}},{\frac{5}{2}})},({\frac{2\pi}{3},-5}),({{\pi},{\frac{5}{2}}}),({\frac{7\pi}{6},5})$
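A quick numerical cross-check of the five evaluations (numpy assumed):

```python
import numpy as np

x = np.array([np.pi / 6, np.pi / 3, 2 * np.pi / 3, np.pi, 7 * np.pi / 6])
print(5 * np.cos(2 * x - np.pi / 3))   # -> [ 5.   2.5 -5.   2.5  5. ]
```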
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999678373336792, "perplexity": 135.1046008612994}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669847.1/warc/CC-MAIN-20191118205402-20191118233402-00187.warc.gz"}
https://dealii.org/developer/doxygen/deal.II/step_23.html
# The step-23 tutorial program

This tutorial depends on step-4.

1. Introduction
2. The commented program
3. Results
4. The plain program

# Introduction

Note: The material presented here is also discussed in video lecture 28. (All video lectures are also available here.)

This is the first of a number of tutorial programs that will finally cover "real" time-dependent problems, not the slightly odd form of time dependence found in step-18 or the DAE model of step-21. In particular, this program introduces the wave equation in a bounded domain. Later, step-24 will consider an example of absorbing boundary conditions, and step-25 a kind of nonlinear wave equation producing solutions called solitons.

The wave equation in its prototypical form reads as follows: find $$u(x,t), x\in\Omega, t\in[0,T]$$ that satisfies \begin{eqnarray*} \frac{\partial^2 u}{\partial t^2} - \Delta u &=& f \qquad \textrm{in}\ \Omega\times [0,T], \\ u(x,t) &=& g \qquad \textrm{on}\ \partial\Omega\times [0,T], \\ u(x,0) &=& u_0(x) \qquad \textrm{in}\ \Omega, \\ \frac{\partial u(x,0)}{\partial t} &=& u_1(x) \qquad \textrm{in}\ \Omega. \end{eqnarray*}

Note that since this is an equation with second-order time derivatives, we need to pose two initial conditions, one for the value and one for the time derivative of the solution.

Physically, the equation describes the motion of an elastic medium. In 2-d, one can think of how a membrane moves if subjected to a force. The Dirichlet boundary conditions above indicate that the membrane is clamped at the boundary at a height $$g(x,t)$$ (this height might be moving as well — think of people holding a blanket and shaking it up and down). The first initial condition equals the initial deflection of the membrane, whereas the second one gives its velocity. For example, one could think of pushing the membrane down with a finger and then letting it go at $$t=0$$ (nonzero deflection but zero initial velocity), or hitting it with a hammer at $$t=0$$ (zero deflection but nonzero velocity). Both cases would induce motion in the membrane.

### Time discretization

#### Method of lines or Rothe's method?

There is a long-standing debate in the numerical analysis community over whether a discretization of time dependent equations should involve first discretizing the time variable leading to a stationary PDE at each time step that is then solved using standard finite element techniques (this is called the Rothe method), or whether one should first discretize the spatial variables, leading to a large system of ordinary differential equations that can then be handled by one of the usual ODE solvers (this is called the method of lines).

Both of these methods have advantages and disadvantages. Traditionally, people have preferred the method of lines, since it allows one to use the very well developed machinery of high-order ODE solvers available for the rather stiff ODEs resulting from this approach, including step length control and estimation of the temporal error. On the other hand, Rothe's method becomes awkward when using higher-order time stepping methods, since one then has to write down a PDE that couples the solution of the present time step not only with that at the previous time step, but possibly also even earlier solutions, leading to a significant number of terms. For these reasons, the method of lines was the method of choice for a long time.
However, it has one big drawback: if we discretize the spatial variable first, leading to a large ODE system, we have to choose a mesh once and for all. If we are willing to do this, then this is a legitimate and probably superior approach. If, on the other hand, we are looking at the wave equation and many other time dependent problems, we find that the character of a solution changes as time progresses. For example, for the wave equation, we may have a single wave travelling through the domain, where the solution is smooth or even constant in front of and behind the wave — adaptivity would be really useful for such cases, but the key is that the area where we need to refine the mesh changes from time step to time step! If we intend to go that way, i.e. choose a different mesh for each time step (or set of time steps), then the method of lines is not appropriate any more: instead of getting one ODE system with a number of variables equal to the number of unknowns in the finite element mesh, our number of unknowns now changes all the time, a fact that standard ODE solvers are certainly not prepared to deal with at all. On the other hand, for the Rothe method, we just get a PDE for each time step that we may choose to discretize independently of the mesh used for the previous time step; this approach is not without perils and difficulties, but at least is a sensible and well-defined procedure. For all these reasons, for the present program, we choose to use the Rothe method for discretization, i.e. we first discretize in time and then in space. We will not actually use adaptive meshes at all, since this involves a large amount of additional code, but we will comment on this some more in the results section below. #### Rothe's method! Given these considerations, here is how we will proceed: let us first define a simple time stepping method for this second order problem, and then in a second step do the spatial discretization, i.e. we will follow Rothe's approach. For the first step, let us take a little detour first: in order to discretize a second time derivative, we can either discretize it directly, or we can introduce an additional variable and transform the system into a first order system. In many cases, this turns out to be equivalent, but dealing with first order systems is often simpler. To this end, let us introduce $v = \frac{\partial u}{\partial t},$ and call this variable the velocity for obvious reasons. We can then reformulate the original wave equation as follows: \begin{eqnarray*} \frac{\partial u}{\partial t} - v &=& 0 \qquad \textrm{in}\ \Omega\times [0,T], \\ \frac{\partial v}{\partial t} - \Delta u &=& f \qquad \textrm{in}\ \Omega\times [0,T], \\ u(x,t) &=& g \qquad \textrm{on}\ \partial\Omega\times [0,T], \\ u(x,0) &=& u_0(x) \qquad \textrm{in}\ \Omega, \\ v(x,0) &=& u_1(x) \qquad \textrm{in}\ \Omega. \end{eqnarray*} The advantage of this formulation is that it now only contains first time derivatives for both variables, for which it is simple to write down time stepping schemes. Note that we do not have boundary conditions for $$v$$ at first. However, we could enforce $$v=\frac{\partial g}{\partial t}$$ on the boundary. It turns out in numerical examples that this is actually necessary: without doing so the solution doesn't look particularly wrong, but the Crank-Nicolson scheme does not conserve energy if one doesn't enforce these boundary conditions. 
With this formulation, let us introduce the following time discretization where a superscript $$n$$ indicates the number of a time step and $$k=t_n-t_{n-1}$$ is the length of the present time step: \begin{eqnarray*} \frac{u^n - u^{n-1}}{k} - \left[\theta v^n + (1-\theta) v^{n-1}\right] &=& 0, \\ \frac{v^n - v^{n-1}}{k} - \Delta\left[\theta u^n + (1-\theta) u^{n-1}\right] &=& \theta f^n + (1-\theta) f^{n-1}. \end{eqnarray*} Note how we introduced a parameter $$\theta$$ here. If we chose $$\theta=0$$, for example, the first equation would reduce to $$\frac{u^n - u^{n-1}}{k} - v^{n-1} = 0$$, which is well-known as the forward or explicit Euler method. On the other hand, if we set $$\theta=1$$, then we would get $$\frac{u^n - u^{n-1}}{k} - v^n = 0$$, which corresponds to the backward or implicit Euler method. Both these methods are first order accurate methods. They are simple to implement, but they are not really very accurate. The third case would be to choose $$\theta=\frac 12$$. The first of the equations above would then read $$\frac{u^n - u^{n-1}}{k} - \frac 12 \left[v^n + v^{n-1}\right] = 0$$. This method is known as the Crank-Nicolson method and has the advantage that it is second order accurate. In addition, it has the nice property that it preserves the energy in the solution (physically, the energy is the sum of the kinetic energy of the particles in the membrane plus the potential energy present due to the fact that it is locally stretched; this quantity is a conserved one in the continuous equation, but most time stepping schemes do not conserve it after time discretization). Since $$v^n$$ also appears in the equation for $$u^n$$, the Crank-Nicolson scheme is also implicit. In the program, we will leave $$\theta$$ as a parameter, so that it will be easy to play with it. The results section will show some numerical evidence comparing the different schemes. The equations above (called the semidiscretized equations because we have only discretized the time, but not space), can be simplified a bit by eliminating $$v^n$$ from the first equation and rearranging terms. We then get \begin{eqnarray*} \left[ 1-k^2\theta^2\Delta \right] u^n &=& \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} + k v^{n-1} + k^2\theta\left[\theta f^n + (1-\theta) f^{n-1}\right],\\ v^n &=& v^{n-1} + k\Delta\left[ \theta u^n + (1-\theta) u^{n-1}\right] + k\left[\theta f^n + (1-\theta) f^{n-1}\right]. \end{eqnarray*} In this form, we see that if we are given the solution $$u^{n-1},v^{n-1}$$ of the previous timestep, that we can then solve for the variables $$u^n,v^n$$ separately, i.e. one at a time. This is convenient. In addition, we recognize that the operator in the first equation is positive definite, and the second equation looks particularly simple. ### Space discretization We have now derived equations that relate the approximate (semi-discrete) solution $$u^n(x)$$ and its time derivative $$v^n(x)$$ at time $$t_n$$ with the solutions $$u^{n-1}(x),v^{n-1}(x)$$ of the previous time step at $$t_{n-1}$$. The next step is to also discretize the spatial variable using the usual finite element methodology. To this end, we multiply each equation with a test function, integrate over the entire domain, and integrate by parts where necessary. 
This leads to \begin{eqnarray*} (u^n,\varphi) + k^2\theta^2(\nabla u^n,\nabla \varphi) &=& (u^{n-1},\varphi) - k^2\theta(1-\theta)(\nabla u^{n-1},\nabla \varphi) + k(v^{n-1},\varphi) + k^2\theta \left[ \theta (f^n,\varphi) + (1-\theta) (f^{n-1},\varphi) \right], \\ (v^n,\varphi) &=& (v^{n-1},\varphi) - k\left[ \theta (\nabla u^n,\nabla\varphi) + (1-\theta) (\nabla u^{n-1},\nabla \varphi)\right] + k \left[ \theta (f^n,\varphi) + (1-\theta) (f^{n-1},\varphi) \right]. \end{eqnarray*} It is then customary to approximate $$u^n(x) \approx u^n_h(x) = \sum_i U_i^n\phi_i^n(x)$$, where $$\phi_i^n(x)$$ are the shape functions used for the discretization of the $$n$$-th time step and $$U_i^n$$ are the unknown nodal values of the solution. Similarly, $$v^n(x) \approx v^n_h(x) = \sum_i V_i^n\phi_i^n(x)$$. Finally, we have the solutions of the previous time step, $$u^{n-1}(x) \approx u^{n-1}_h(x) = \sum_i U_i^{n-1}\phi_i^{n-1}(x)$$ and $$v^{n-1}(x) \approx v^{n-1}_h(x) = \sum_i V_i^{n-1}\phi_i^{n-1}(x)$$. Note that since the solution of the previous time step has already been computed by the time we get to time step $$n$$, $$U^{n-1},V^{n-1}$$ are known. Furthermore, note that the solutions of the previous step may have been computed on a different mesh, so we have to use shape functions $$\phi^{n-1}_i(x)$$. If we plug these expansions into above equations and test with the test functions from the present mesh, we get the following linear system: \begin{eqnarray*} (M^n + k^2\theta^2 A^n)U^n &=& M^{n,n-1}U^{n-1} - k^2\theta(1-\theta) A^{n,n-1}U^{n-1} + kM^{n,n-1}V^{n-1} + k^2\theta \left[ \theta F^n + (1-\theta) F^{n-1} \right], \\ M^nV^n &=& M^{n,n-1}V^{n-1} - k\left[ \theta A^n U^n + (1-\theta) A^{n,n-1} U^{n-1}\right] + k \left[ \theta F^n + (1-\theta) F^{n-1} \right], \end{eqnarray*} where \begin{eqnarray*} M^n_{ij} &=& (\phi_i^n, \phi_j^n), \\ A^n_{ij} &=& (\nabla\phi_i^n, \nabla\phi_j^n), \\ M^{n,n-1}_{ij} &=& (\phi_i^n, \phi_j^{n-1}), \\ A^{n,n-1}_{ij} &=& (\nabla\phi_i^n, \nabla\phi_j^{n-1}), \\ F^n_{i} &=& (f^n,\phi_i^n), \\ F^{n-1}_{i} &=& (f^{n-1},\phi_i^n). \end{eqnarray*} If we solve these two equations, we can move the solution one step forward and go on to the next time step. It is worth noting that if we choose the same mesh on each time step (as we will in fact do in the program below), then we have the same shape functions on time step $$n$$ and $$n-1$$, i.e. $$\phi^n_i=\phi_i^{n-1}=\phi_i$$. Consequently, we get $$M^n=M^{n,n-1}=M$$ and $$A^n=A^{n,n-1}=A$$. On the other hand, if we had used different shape functions, then we would have to compute integrals that contain shape functions defined on two meshes. This is a somewhat messy process that we omit here, but that is treated in some detail in step-28. Under these conditions (i.e. a mesh that doesn't change), one can optimize the solution procedure a bit by basically eliminating the solution of the second linear system. We will discuss this in the introduction of the step-25 program. ### Energy conservation One way to compare the quality of a time stepping scheme is to see whether the numerical approximation preserves conservation properties of the continuous equation. For the wave equation, the natural quantity to look at is the energy. 
By multiplying the wave equation by $$u_t$$, integrating over $$\Omega$$, and integrating by parts where necessary, we find that $\frac{d}{d t} \left[\frac 12 \int_\Omega \left(\frac{\partial u}{\partial t}\right)^2 + (\nabla u)^2 \; dx\right] = \int_\Omega f \frac{\partial u}{\partial t} \; dx + \int_{\partial\Omega} n\cdot\nabla u \frac{\partial g}{\partial t} \; dx.$ By consequence, in absence of body forces and constant boundary values, we get that $E(t) = \frac 12 \int_\Omega \left(\frac{\partial u}{\partial t}\right)^2 + (\nabla u)^2 \; dx$ is a conserved quantity, i.e. one that doesn't change with time. We will compute this quantity after each time step. It is straightforward to see that if we replace $$u$$ by its finite element approximation, and $$\frac{\partial u}{\partial t}$$ by the finite element approximation of the velocity $$v$$, then $E(t_n) = \frac 12 \left<V^n, M^n V^n\right> + \frac 12 \left<U^n, A^n U^n\right>.$ As we will see in the results section, the Crank-Nicolson scheme does indeed conserve the energy, whereas neither the forward nor the backward Euler scheme do. ### Who are Courant, Friedrichs, and Lewy? One of the reasons why the wave equation is not easy to solve numerically is that explicit time discretizations are only stable if the time step is small enough. In particular, it is coupled to the spatial mesh width $$h$$. For the lowest order discretization we use here, the relationship reads $k\le \frac hc$ where $$c$$ is the wave speed, which in our formulation of the wave equation has been normalized to one. Consequently, unless we use the implicit schemes with $$\theta>0$$, our solutions will not be numerically stable if we violate this restriction. Implicit schemes do not have this restriction for stability, but they become inaccurate if the time step is too large. This condition was first recognized by Courant, Friedrichs, and Lewy — in 1928, long before computers became available for numerical computations! (This result appeared in the German language article R. Courant, K. Friedrichs and H. Lewy: Über die partiellen Differenzengleichungen der mathematischen Physik, Mathematische Annalen, vol. 100, no. 1, pages 32-74, 1928.) This condition on the time step is most frequently just referred to as the CFL condition. Intuitively, the CFL condition says that the time step must not be larger than the time it takes a wave to cross a single cell. In the program, we will refine the square $$[-1,1]^2$$ seven times uniformly, giving a mesh size of $$h=\frac 1{64}$$, which is what we set the time step to. The fact that we set the time step and mesh size individually in two different places is error prone: it is too easy to refine the mesh once more but forget to also adjust the time step. step-24 shows a better way how to keep these things in sync. ### The test case Although the program has all the hooks to deal with nonzero initial and boundary conditions and body forces, we take a simple case where the domain is a square $$[-1,1]^2$$ and \begin{eqnarray*} f &=& 0, \\ u_0 &=& 0, \\ u_1 &=& 0, \\ g &=& \left\{\begin{matrix}\sin (4\pi t) &\qquad& \text{for }\ t\le \frac 12, x=-1, -\frac 13<y<\frac 13 \\ 0 &&\text{otherwise} \end{matrix} \right. \end{eqnarray*} This corresponds to a membrane initially at rest and clamped all around, where someone is waving a part of the clamped boundary once up and down, thereby shooting a wave into the domain. 
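To make the time discretization, the two linear solves per step, and the energy statement above concrete, here is a small self-contained sketch in Python with NumPy/SciPy (explicitly not deal.II code). It solves the 1-d analogue of the problem with P1 elements on [0,1], homogeneous Dirichlet boundary values, $$f=0$$, and assumed initial data $$u(x,0)=\sin(\pi x)$$, $$v(x,0)=0$$, using exactly the two solves per time step and the energy $$E = \frac 12 \left<V,MV\right> + \frac 12 \left<U,AU\right>$$ derived above. The domain, initial data, and end time are illustrative choices, not taken from the program; the tridiagonal mass and stiffness matrices are the standard 1-d P1 ones.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64
h = 1.0 / n
k = h                    # time step tied to the mesh width, as in the text
x = np.linspace(0.0, 1.0, n + 1)[1:-1]   # interior nodes (Dirichlet u=0 at ends)
m = x.size
ones = np.ones(m)

# Standard 1-d P1 mass matrix M and stiffness ("Laplace") matrix A:
M = sp.diags([h / 6 * ones[:-1], 2 * h / 3 * ones, h / 6 * ones[:-1]],
             [-1, 0, 1], format="csc")
A = sp.diags([-ones[:-1] / h, 2 * ones / h, -ones[:-1] / h],
             [-1, 0, 1], format="csc")

def energy(U, V):
    return 0.5 * (V @ (M @ V) + U @ (A @ U))

for theta in (0.0, 1.0, 0.5):            # forward Euler, backward Euler, CN
    U = np.sin(np.pi * x)                # assumed initial deflection
    V = np.zeros(m)                      # membrane initially at rest
    solve_u = spla.splu(sp.csc_matrix(M + k**2 * theta**2 * A)).solve
    solve_v = spla.splu(M).solve
    for _ in range(int(1.0 / k)):        # integrate up to T = 1
        # (M + k^2 theta^2 A) U^n = M U^{n-1} - k^2 theta(1-theta) A U^{n-1} + k M V^{n-1}
        U_new = solve_u(M @ U - k**2 * theta * (1 - theta) * (A @ U) + k * (M @ V))
        # M V^n = M V^{n-1} - k [ theta A U^n + (1-theta) A U^{n-1} ]
        V = solve_v(M @ V - k * (theta * (A @ U_new) + (1 - theta) * (A @ U)))
        U = U_new
    print(theta, energy(U, V))
```

Running this prints a rapidly growing energy for $$\theta=0$$ (unstable), a slowly decaying one for $$\theta=1$$, and a constant value of about $$\pi^2/4 \approx 2.47$$ for $$\theta=\frac 12$$, matching the discussion of the three schemes above.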
# The commented program

### Include files

We start with the usual assortment of include files that we've seen in so many of the previous tests:

Here are the only three include files of some new interest: The first one is already used, for example, for the VectorTools::interpolate_boundary_values and MatrixTools::apply_boundary_values functions. However, we here use another function in that class, VectorTools::project, to compute our initial values as the $$L^2$$ projection of the continuous initial values. Furthermore, we use VectorTools::create_right_hand_side to generate the integrals $$(f^n,\phi^n_i)$$. These were previously always generated by hand in assemble_system or similar functions in application code. However, we're too lazy to do that here, so we simply use a library function:

In a very similar vein, we are also too lazy to write the code to assemble mass and Laplace matrices, although it would have only taken copying the relevant code from any number of previous tutorial programs. Rather, we want to focus on the things that are truly new to this program and therefore use the MatrixCreator::create_mass_matrix and MatrixCreator::create_laplace_matrix functions. They are declared here:

Finally, here is an include file that contains all sorts of tool functions that one sometimes needs. In particular, we need the Utilities::int_to_string function that, given an integer argument, returns a string representation of it. It is particularly useful since it allows for a second parameter indicating the number of digits to which we want the result padded with leading zeros. We will use this to write output files that have the form solution-XXX.vtu where XXX denotes the number of the time step and always consists of three digits even if we are still in the single or double digit time steps.

The last step is as in all previous programs:

```cpp
namespace Step23
{
  using namespace dealii;
```

### The WaveEquation class

Next comes the declaration of the main class. Its public interface of functions is like in most of the other tutorial programs. Worth mentioning is that we now have to store four matrices instead of one: the mass matrix $$M$$, the Laplace matrix $$A$$, the matrix $$M+k^2\theta^2A$$ used for solving for $$U^n$$, and a copy of the mass matrix with boundary conditions applied used for solving for $$V^n$$. Note that it is a bit wasteful to have an additional copy of the mass matrix around. We will discuss strategies for how to avoid this in the section on possible improvements.

Likewise, we need solution vectors for $$U^n,V^n$$ as well as for the corresponding vectors at the previous time step, $$U^{n-1},V^{n-1}$$. The system_rhs will be used for whatever right hand side vector we have when solving one of the two linear systems in each time step. These will be solved in the two functions solve_u and solve_v.

Finally, the variable theta is used to indicate the parameter $$\theta$$ that is used to define which time stepping scheme to use, as explained in the introduction. The rest is self-explanatory.
```cpp
template <int dim>
class WaveEquation
{
public:
  WaveEquation();
  void run();

private:
  void setup_system();
  void solve_u();
  void solve_v();
  void output_results() const;

  Triangulation<dim> triangulation;
  FE_Q<dim>          fe;
  DoFHandler<dim>    dof_handler;

  AffineConstraints<double> constraints;

  SparsityPattern      sparsity_pattern;
  SparseMatrix<double> mass_matrix;
  SparseMatrix<double> laplace_matrix;
  SparseMatrix<double> matrix_u;
  SparseMatrix<double> matrix_v;

  Vector<double> solution_u, solution_v;
  Vector<double> old_solution_u, old_solution_v;
  Vector<double> system_rhs;

  double       time_step;
  double       time;
  unsigned int timestep_number;
  const double theta;
};
```

### Equation data

Before we go on filling in the details of the main class, let us define the equation data corresponding to the problem, i.e. initial and boundary values for both the solution $$u$$ and its time derivative $$v$$, as well as a right hand side class. We do so using classes derived from the Function class template that has been used many times before, so the following should not be a surprise.

Let's start with initial values and choose zero for both the value $$u$$ as well as its time derivative, the velocity $$v$$:

```cpp
template <int dim>
class InitialValuesU : public Function<dim>
{
public:
  virtual double value(const Point<dim> & /*p*/,
                       const unsigned int component = 0) const override
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));
    return 0;
  }
};

template <int dim>
class InitialValuesV : public Function<dim>
{
public:
  virtual double value(const Point<dim> & /*p*/,
                       const unsigned int component = 0) const override
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));
    return 0;
  }
};
```

Secondly, we have the right hand side forcing term. Boring as we are, we choose zero here as well:

```cpp
template <int dim>
class RightHandSide : public Function<dim>
{
public:
  virtual double value(const Point<dim> & /*p*/,
                       const unsigned int component = 0) const override
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));
    return 0;
  }
};
```

Finally, we have boundary values for $$u$$ and $$v$$. They are as described in the introduction, one being the time derivative of the other:

```cpp
template <int dim>
class BoundaryValuesU : public Function<dim>
{
public:
  virtual double value(const Point<dim> & p,
                       const unsigned int component = 0) const override
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));

    if ((this->get_time() <= 0.5) && (p[0] < 0) && (p[1] < 1. / 3) &&
        (p[1] > -1. / 3))
      return std::sin(this->get_time() * 4 * numbers::PI);
    else
      return 0;
  }
};
```
```cpp
template <int dim>
class BoundaryValuesV : public Function<dim>
{
public:
  virtual double value(const Point<dim> & p,
                       const unsigned int component = 0) const override
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));

    if ((this->get_time() <= 0.5) && (p[0] < 0) && (p[1] < 1. / 3) &&
        (p[1] > -1. / 3))
      return (std::cos(this->get_time() * 4 * numbers::PI) * 4 * numbers::PI);
    else
      return 0;
  }
};
```

### Implementation of the WaveEquation class

The implementation of the actual logic is actually fairly short, since we relegate things like assembling the matrices and right hand side vectors to the library. The rest boils down to not much more than 130 lines of actual code, a significant fraction of which is boilerplate code that can be taken from previous example programs (e.g. the functions that solve linear systems, or that generate output).

Let's start with the constructor (for an explanation of the choice of time step, see the section on Courant, Friedrichs, and Lewy in the introduction):

```cpp
template <int dim>
WaveEquation<dim>::WaveEquation()
  : fe(1)
  , dof_handler(triangulation)
  , time_step(1. / 64)
  , time(time_step)
  , timestep_number(1)
  , theta(0.5)
{}
```

#### WaveEquation::setup_system

The next function is the one that sets up the mesh, DoFHandler, and matrices and vectors at the beginning of the program, i.e. before the first time step. The first few lines are pretty much standard if you've read through the tutorial programs at least up to step-6:

```cpp
template <int dim>
void WaveEquation<dim>::setup_system()
{
  GridGenerator::hyper_cube(triangulation, -1, 1);
  triangulation.refine_global(7);

  std::cout << "Number of active cells: " << triangulation.n_active_cells()
            << std::endl;

  dof_handler.distribute_dofs(fe);

  std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs()
            << std::endl
            << std::endl;

  DynamicSparsityPattern dsp(dof_handler.n_dofs(), dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
  sparsity_pattern.copy_from(dsp);
```

Then comes a block where we have to initialize the 3 matrices we need in the course of the program: the mass matrix, the Laplace matrix, and the matrix $$M+k^2\theta^2A$$ used when solving for $$U^n$$ in each time step.

When setting up these matrices, note that they all make use of the same sparsity pattern object. Finally, the reason why matrices and sparsity patterns are separate objects in deal.II (unlike in many other finite element or linear algebra classes) becomes clear: in a significant fraction of applications, one has to hold several matrices that happen to have the same sparsity pattern, and there is no reason for them not to share this information, rather than re-building and wasting memory on it several times.

After initializing all of these matrices, we call library functions that build the Laplace and mass matrices. All they need is a DoFHandler object and a quadrature formula object that is to be used for numerical integration. Note that in many respects these functions are better than what we would usually do in application programs, for example because they automatically parallelize building the matrices if multiple processors are available in a machine: for more information see the documentation of WorkStream or the Parallel computing with multiple processors module.
The matrices for solving linear systems will be filled in the run() method because we need to re-apply boundary conditions every time step.

```cpp
  mass_matrix.reinit(sparsity_pattern);
  laplace_matrix.reinit(sparsity_pattern);
  matrix_u.reinit(sparsity_pattern);
  matrix_v.reinit(sparsity_pattern);

  MatrixCreator::create_mass_matrix(dof_handler,
                                    QGauss<dim>(fe.degree + 1),
                                    mass_matrix);
  MatrixCreator::create_laplace_matrix(dof_handler,
                                       QGauss<dim>(fe.degree + 1),
                                       laplace_matrix);
```

The rest of the function is spent on setting vector sizes to the correct value. The final line closes the hanging node constraints object. Since we work on a uniformly refined mesh, no constraints exist or have been computed (i.e. there was no need to call DoFTools::make_hanging_node_constraints as in other programs), but we need a constraints object in one place further down below anyway.

```cpp
  solution_u.reinit(dof_handler.n_dofs());
  solution_v.reinit(dof_handler.n_dofs());
  old_solution_u.reinit(dof_handler.n_dofs());
  old_solution_v.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());

  constraints.close();
}
```

#### WaveEquation::solve_u and WaveEquation::solve_v

The next two functions deal with solving the linear systems associated with the equations for $$U^n$$ and $$V^n$$. Both are not particularly interesting as they pretty much follow the scheme used in all the previous tutorial programs.

One can make little experiments with preconditioners for the two matrices we have to invert. As it turns out, however, for the matrices at hand here, using Jacobi or SSOR preconditioners reduces the number of iterations necessary to solve the linear system slightly, but due to the cost of applying the preconditioner it is no win in terms of run-time. It is not much of a loss either, but let's keep it simple and just do without:

```cpp
template <int dim>
void WaveEquation<dim>::solve_u()
{
  SolverControl            solver_control(1000, 1e-8 * system_rhs.l2_norm());
  SolverCG<Vector<double>> cg(solver_control);

  cg.solve(matrix_u, solution_u, system_rhs, PreconditionIdentity());

  std::cout << "   u-equation: " << solver_control.last_step()
            << " CG iterations." << std::endl;
}

template <int dim>
void WaveEquation<dim>::solve_v()
{
  SolverControl            solver_control(1000, 1e-8 * system_rhs.l2_norm());
  SolverCG<Vector<double>> cg(solver_control);

  cg.solve(matrix_v, solution_v, system_rhs, PreconditionIdentity());

  std::cout << "   v-equation: " << solver_control.last_step()
            << " CG iterations." << std::endl;
}
```

#### WaveEquation::output_results

Likewise, the following function is pretty much what we've done before. The only thing worth mentioning is how here we generate a string representation of the time step number padded with leading zeros to 3 character length using the Utilities::int_to_string function's second argument.
```cpp
template <int dim>
void WaveEquation<dim>::output_results() const
{
  DataOut<dim> data_out;

  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution_u, "U");
  data_out.add_data_vector(solution_v, "V");

  data_out.build_patches();

  const std::string filename =
    "solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtu";
```

Like step-15, since we write output at every time step (and the system we have to solve is relatively easy), we instruct DataOut to use the zlib compression algorithm that is optimized for speed instead of disk usage since otherwise plotting the output becomes a bottleneck:

```cpp
  DataOutBase::VtkFlags vtk_flags;
  vtk_flags.compression_level = DataOutBase::CompressionLevel::best_speed;
  data_out.set_flags(vtk_flags);
  std::ofstream output(filename);
  data_out.write_vtu(output);
}
```

#### WaveEquation::run

The following is really the only interesting function of the program. It contains the loop over all time steps, but before we get to that we have to set up the grid, DoFHandler, and matrices. In addition, we have to somehow get started with initial values. To this end, we use the VectorTools::project function that takes an object that describes a continuous function and computes the $$L^2$$ projection of this function onto the finite element space described by the DoFHandler object. Can't be any simpler than that:

```cpp
template <int dim>
void WaveEquation<dim>::run()
{
  setup_system();

  VectorTools::project(dof_handler,
                       constraints,
                       QGauss<dim>(fe.degree + 1),
                       InitialValuesU<dim>(),
                       old_solution_u);
  VectorTools::project(dof_handler,
                       constraints,
                       QGauss<dim>(fe.degree + 1),
                       InitialValuesV<dim>(),
                       old_solution_v);
```

The next thing is to loop over all the time steps until we reach the end time ($$T=5$$ in this case). In each time step, we first have to solve for $$U^n$$, using the equation $$(M^n + k^2\theta^2 A^n)U^n = (M^{n,n-1} - k^2\theta(1-\theta) A^{n,n-1})U^{n-1} + kM^{n,n-1}V^{n-1} + k\theta \left[k \theta F^n + k(1-\theta) F^{n-1} \right]$$. Note that we use the same mesh for all time steps, so that $$M^n=M^{n,n-1}=M$$ and $$A^n=A^{n,n-1}=A$$. What we therefore have to do first is to add up $$MU^{n-1} - k^2\theta(1-\theta) AU^{n-1} + kMV^{n-1}$$ and the forcing terms, and put the result into the system_rhs vector. (For these additions, we need a temporary vector that we declare before the loop to avoid repeated memory allocations in each time step.)
The one thing to realize here is how we communicate the time variable to the object describing the right hand side: each object derived from the Function class has a time field that can be set using Function::set_time and read by Function::get_time. In essence, using this mechanism, all functions of space and time are therefore considered functions of space evaluated at a particular time. This matches well what we typically need in finite element programs, where we almost always work on a single time step at a time, and where it never happens that, for example, one would like to evaluate a space-time function for all times at any given spatial location.

```cpp
  Vector<double> tmp(solution_u.size());
  Vector<double> forcing_terms(solution_u.size());

  for (; time <= 5; time += time_step, ++timestep_number)
    {
      std::cout << "Time step " << timestep_number << " at t=" << time
                << std::endl;

      mass_matrix.vmult(system_rhs, old_solution_u);

      mass_matrix.vmult(tmp, old_solution_v);
      system_rhs.add(time_step, tmp);

      laplace_matrix.vmult(tmp, old_solution_u);
      system_rhs.add(-theta * (1 - theta) * time_step * time_step, tmp);

      RightHandSide<dim> rhs_function;
      rhs_function.set_time(time);
      VectorTools::create_right_hand_side(dof_handler,
                                          QGauss<dim>(fe.degree + 1),
                                          rhs_function,
                                          tmp);
      forcing_terms = tmp;
      forcing_terms *= theta * time_step;

      rhs_function.set_time(time - time_step);
      VectorTools::create_right_hand_side(dof_handler,
                                          QGauss<dim>(fe.degree + 1),
                                          rhs_function,
                                          tmp);

      forcing_terms.add((1 - theta) * time_step, tmp);

      system_rhs.add(theta * time_step, forcing_terms);
```

After so constructing the right hand side vector of the first equation, all we have to do is apply the correct boundary values. As for the right hand side, this is a space-time function evaluated at a particular time, which we interpolate at boundary nodes and then use the result to apply boundary values as we usually do. The result is then handed off to the solve_u() function:

```cpp
      {
        BoundaryValuesU<dim> boundary_values_u_function;
        boundary_values_u_function.set_time(time);

        std::map<types::global_dof_index, double> boundary_values;
        VectorTools::interpolate_boundary_values(dof_handler,
                                                 0,
                                                 boundary_values_u_function,
                                                 boundary_values);
```

The matrix for solve_u() is the same in every time step, so one could think that it is enough to do this only once at the beginning of the simulation. However, since we need to apply boundary values to the linear system (which eliminate some matrix rows and columns and give contributions to the right hand side), we have to refill the matrix in every time step before we actually apply boundary data.
The actual content is very simple: it is the sum of the mass matrix and a weighted Laplace matrix:

```cpp
        matrix_u.copy_from(mass_matrix);
        matrix_u.add(theta * theta * time_step * time_step, laplace_matrix);

        MatrixTools::apply_boundary_values(boundary_values,
                                           matrix_u,
                                           solution_u,
                                           system_rhs);
      }
      solve_u();
```

The second step, i.e. solving for $$V^n$$, works similarly, except that this time the matrix on the left is the mass matrix (which we copy again in order to be able to apply boundary conditions), and the right hand side is $$MV^{n-1} - k\left[ \theta A U^n + (1-\theta) AU^{n-1}\right]$$ plus forcing terms. Boundary values are applied in the same way as before, except that now we have to use the BoundaryValuesV class:

```cpp
      laplace_matrix.vmult(system_rhs, solution_u);
      system_rhs *= -theta * time_step;

      mass_matrix.vmult(tmp, old_solution_v);
      system_rhs += tmp;

      laplace_matrix.vmult(tmp, old_solution_u);
      system_rhs.add(-time_step * (1 - theta), tmp);

      system_rhs += forcing_terms;

      {
        BoundaryValuesV<dim> boundary_values_v_function;
        boundary_values_v_function.set_time(time);

        std::map<types::global_dof_index, double> boundary_values;
        VectorTools::interpolate_boundary_values(dof_handler,
                                                 0,
                                                 boundary_values_v_function,
                                                 boundary_values);
        matrix_v.copy_from(mass_matrix);
        MatrixTools::apply_boundary_values(boundary_values,
                                           matrix_v,
                                           solution_v,
                                           system_rhs);
      }
      solve_v();
```

Finally, after both solution components have been computed, we output the result, compute the energy in the solution, and go on to the next time step after shifting the present solution into the vectors that hold the solution at the previous time step. Note the function SparseMatrix::matrix_norm_square that can compute $$\left<V^n,MV^n\right>$$ and $$\left<U^n,AU^n\right>$$ in one step, saving us the expense of a temporary vector and several lines of code:

```cpp
      output_results();

      std::cout << "   Total energy: "
                << (mass_matrix.matrix_norm_square(solution_v) +
                    laplace_matrix.matrix_norm_square(solution_u)) /
                     2
                << std::endl;

      old_solution_u = solution_u;
      old_solution_v = solution_v;
    }
}
} // namespace Step23
```

### The main function

What remains is the main function of the program. There is nothing here that hasn't been shown in several of the previous programs:

```cpp
int main()
{
  try
    {
      using namespace Step23;

      WaveEquation<2> wave_equation_solver;
      wave_equation_solver.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
```

# Results

When the program is run, it produces the following output:

```text
Number of active cells: 16384
Number of degrees of freedom: 16641

Time step 1 at t=0.015625
   u-equation: 8 CG iterations.
   v-equation: 22 CG iterations.
   Total energy: 1.17887
Time step 2 at t=0.03125
   u-equation: 8 CG iterations.
   v-equation: 20 CG iterations.
   Total energy: 2.9655
Time step 3 at t=0.046875
   u-equation: 8 CG iterations.
   v-equation: 21 CG iterations.
Total energy: 4.33761
Time step 4 at t=0.0625
u-equation: 7 CG iterations.
v-equation: 21 CG iterations.
Total energy: 5.35499
Time step 5 at t=0.078125
u-equation: 7 CG iterations.
v-equation: 21 CG iterations.
Total energy: 6.18652
Time step 6 at t=0.09375
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 6.6799
...
Time step 31 at t=0.484375
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 21.9068
Time step 32 at t=0.5
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 23.3394
Time step 33 at t=0.515625
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 23.1019
...
Time step 319 at t=4.98438
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 23.1019
Time step 320 at t=5
u-equation: 7 CG iterations.
v-equation: 20 CG iterations.
Total energy: 23.1019

What we see immediately is that the energy is constant at least after $$t=\frac 12$$ (until which the boundary source term $$g$$ is nonzero, injecting energy into the system).

In addition to the screen output, the program writes the solution of each time step to an output file. If we process them adequately and paste them into a movie, we get the following:

The movie shows the generated wave nicely traveling through the domain and back, being reflected at the clamped boundary. Some numerical noise is trailing the wave, an artifact of a too-large mesh size that can be reduced by reducing the mesh width and the time step.

### Possibilities for extensions

If you want to explore a bit, try out some of the following things:

• Varying $$\theta$$. This gives different time stepping schemes, some of which are stable while others are not. Take a look at how the energy evolves.

• Different initial and boundary conditions, right hand sides.

• More complicated domains or more refined meshes. Remember that the time step needs to be bounded by the mesh width, so changing the mesh should always involve also changing the time step. We will come back to this issue in step-24.

• Variable coefficients: In real media, the wave speed is often variable. In particular, the "real" wave equation in realistic media would read $\rho(x) \frac{\partial^2 u}{\partial t^2} - \nabla \cdot a(x) \nabla u = f,$ where $$\rho(x)$$ is the density of the material, and $$a(x)$$ is related to the stiffness coefficient. The wave speed is then $$c=\sqrt{a/\rho}$$. To make such a change, we would have to compute the mass and Laplace matrices with a variable coefficient. Fortunately, this isn't too hard: the functions MatrixCreator::create_laplace_matrix and MatrixCreator::create_mass_matrix have additional default parameters that can be used to pass non-constant coefficient functions to them (a sketch appears after this list). The required changes are therefore relatively small. On the other hand, care must be taken again to make sure the time step is within the allowed range.

• In the in-code comments, we discussed the fact that the matrices for solving for $$U^n$$ and $$V^n$$ need to be reset in every time step because of boundary conditions, even though the actual content does not change. It is possible to avoid copying by not eliminating columns in the linear systems, which is implemented by appending a false argument to the call:

  MatrixTools::apply_boundary_values(boundary_values,
                                     matrix_u,
                                     solution_u,
                                     system_rhs,
                                     false);

• deal.II being a library that supports adaptive meshes, it would of course be nice if this program supported changing the mesh every few time steps.
Given the structure of the solution — a wave that travels through the domain — it would seem appropriate if we only refined the mesh where the wave currently is, and not simply everywhere. It is intuitively clear that we should be able to save a significant amount of cells this way. (Though upon further thought one realizes that this is really only the case in the initial stages of the simulation. After some time, for wave phenomena, the domain is filled with reflections of the initial wave going in every direction and filling every corner of the domain. At this point, there is in general little one can gain using local mesh refinement.)

To make adaptively changing meshes possible, there are basically two routes. The "correct" way would be to go back to the weak form we get using Rothe's method. For example, the first of the two equations to be solved in each time step looked like this: \begin{eqnarray*} (u^n,\varphi) + k^2\theta^2(\nabla u^n,\nabla \varphi) &=& (u^{n-1},\varphi) - k^2\theta(1-\theta)(\nabla u^{n-1},\nabla \varphi) + k(v^{n-1},\varphi) + k^2\theta \left[ \theta (f^n,\varphi) + (1-\theta) (f^{n-1},\varphi) \right]. \end{eqnarray*}

Now, note that we solve for $$u^n$$ on mesh $${\mathbb T}^n$$, and consequently the test functions $$\varphi$$ have to be from the space $$V_h^n$$ as well. As discussed in the introduction, terms like $$(u^{n-1},\varphi)$$ then require us to integrate the solution of the previous step (which may have been computed on a different mesh $${\mathbb T}^{n-1}$$) against the test functions of the current mesh, leading to a matrix $$M^{n,n-1}$$. This process of integrating shape functions from different meshes is, at best, awkward. It can be done, but because it is difficult to ensure that $${\mathbb T}^{n-1}$$ and $${\mathbb T}^{n}$$ differ by at most one level of refinement, one has to recursively match cells from both meshes. It is feasible to do this, but it leads to lengthy and not entirely obvious code.

The second approach is the following: whenever we change the mesh, we simply interpolate the solution from the last time step on the old mesh to the new mesh, using the SolutionTransfer class. In other words, instead of the equation above, we would solve \begin{eqnarray*} (u^n,\varphi) + k^2\theta^2(\nabla u^n,\nabla \varphi) &=& (I^n u^{n-1},\varphi) - k^2\theta(1-\theta)(\nabla I^n u^{n-1},\nabla \varphi) + k(I^n v^{n-1},\varphi) + k^2\theta \left[ \theta (f^n,\varphi) + (1-\theta) (f^{n-1},\varphi) \right], \end{eqnarray*} where $$I^n$$ interpolates a given function onto mesh $${\mathbb T}^n$$. This is a much simpler approach because, in each time step, we no longer have to worry whether $$u^{n-1},v^{n-1}$$ were computed on the same mesh as we are using now or on a different mesh. Consequently, the only changes to the code necessary are the addition of a function that computes the error, marks cells for refinement, sets up a SolutionTransfer object, transfers the solution to the new mesh, and rebuilds matrices and right hand side vectors on the new mesh. Neither the functions building the matrices and right hand sides, nor the solvers need to be changed.

While this second approach is, strictly speaking, not quite correct in the Rothe framework (it introduces an additional source of error, namely the interpolation), it is nevertheless what almost everyone solving time dependent equations does. We will use this method in step-31, for example.
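To make this second approach concrete, here is a rough sketch of what a single mesh-change step might look like in this program. It is not part of step-23: it assumes cells have already been flagged for refinement or coarsening (for example based on a KellyErrorEstimator estimate), requires including <deal.II/numerics/solution_transfer.h>, and omits the rebuilding of the sparsity pattern and matrices that would have to follow:

  // Carry old_solution_u and old_solution_v from the old mesh to the new one.
  SolutionTransfer<dim> solution_transfer(dof_handler);

  const std::vector<Vector<double>> old_solutions = {old_solution_u,
                                                     old_solution_v};

  triangulation.prepare_coarsening_and_refinement();
  solution_transfer.prepare_for_coarsening_and_refinement(old_solutions);
  triangulation.execute_coarsening_and_refinement();

  dof_handler.distribute_dofs(fe);

  // Interpolate onto the new mesh; the target vectors must have the new size.
  std::vector<Vector<double>> interpolated(
    2, Vector<double>(dof_handler.n_dofs()));
  solution_transfer.interpolate(old_solutions, interpolated);
  old_solution_u = interpolated[0];
  old_solution_v = interpolated[1];

  // ... then rebuild the sparsity pattern, matrices, and remaining vectors
  // essentially as in setup_system().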
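Returning to the variable-coefficient item in the list above, here is the promised sketch. It is not from any tutorial; the Density class is made up for illustration, and it assumes the optional coefficient pointer that, to the best of my reading of the documentation, MatrixCreator::create_mass_matrix and MatrixCreator::create_laplace_matrix accept as an extra argument:

  // Hypothetical density rho(x): a denser material in the left half of the
  // domain. A Stiffness class for a(x) would be written analogously.
  template <int dim>
  class Density : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> &p,
                         const unsigned int /*component*/ = 0) const override
    {
      return (p[0] < 0 ? 2.0 : 1.0);
    }
  };

  // ... in setup_system(), instead of the constant-coefficient call:
  Density<dim> rho;
  MatrixCreator::create_mass_matrix(dof_handler,
                                    QGauss<dim>(fe.degree + 1),
                                    mass_matrix,
                                    &rho);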
# The plain program

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2006 - 2020 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * ---------------------------------------------------------------------
 *
 * Author: Wolfgang Bangerth, Texas A&M University, 2006
 */

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/utilities.h>

#include <deal.II/lac/vector.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/affine_constraints.h>

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_q.h>

#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/vector_tools.h>

#include <fstream>
#include <iostream>

namespace Step23
{
  using namespace dealii;

  template <int dim>
  class WaveEquation
  {
  public:
    WaveEquation();
    void run();

  private:
    void setup_system();
    void solve_u();
    void solve_v();
    void output_results() const;

    Triangulation<dim> triangulation;
    FE_Q<dim>          fe;
    DoFHandler<dim>    dof_handler;

    AffineConstraints<double> constraints;

    SparsityPattern      sparsity_pattern;
    SparseMatrix<double> mass_matrix;
    SparseMatrix<double> laplace_matrix;
    SparseMatrix<double> matrix_u;
    SparseMatrix<double> matrix_v;

    Vector<double> solution_u, solution_v;
    Vector<double> old_solution_u, old_solution_v;
    Vector<double> system_rhs;

    double       time_step;
    double       time;
    unsigned int timestep_number;
    const double theta;
  };

  template <int dim>
  class InitialValuesU : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & /*p*/,
                         const unsigned int component = 0) const override
    {
      (void)component;
      Assert(component == 0, ExcIndexRange(component, 0, 1));
      return 0;
    }
  };

  template <int dim>
  class InitialValuesV : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & /*p*/,
                         const unsigned int component = 0) const override
    {
      (void)component;
      Assert(component == 0, ExcIndexRange(component, 0, 1));
      return 0;
    }
  };

  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & /*p*/,
                         const unsigned int component = 0) const override
    {
      (void)component;
      Assert(component == 0, ExcIndexRange(component, 0, 1));
      return 0;
    }
  };

  template <int dim>
  class BoundaryValuesU : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & p,
                         const unsigned int component = 0) const override
    {
      (void)component;
      Assert(component == 0, ExcIndexRange(component, 0, 1));

      if ((this->get_time() <= 0.5) && (p[0] < 0) && (p[1] < 1. / 3) &&
          (p[1] > -1. / 3))
        return std::sin(this->get_time() * 4 * numbers::PI);
      else
        return 0;
    }
  };

  template <int dim>
  class BoundaryValuesV : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & p,
                         const unsigned int component = 0) const override
    {
      (void)component;
      Assert(component == 0, ExcIndexRange(component, 0, 1));

      if ((this->get_time() <= 0.5) && (p[0] < 0) && (p[1] < 1. / 3) &&
          (p[1] > -1. / 3))
        return (std::cos(this->get_time() * 4 * numbers::PI) * 4 *
                numbers::PI);
      else
        return 0;
    }
  };

  template <int dim>
  WaveEquation<dim>::WaveEquation()
    : fe(1)
    , dof_handler(triangulation)
    , time_step(1. / 64)
    , time(time_step)
    , timestep_number(1)
    , theta(0.5)
  {}

  template <int dim>
  void WaveEquation<dim>::setup_system()
  {
    GridGenerator::hyper_cube(triangulation, -1, 1);
    triangulation.refine_global(7);

    std::cout << "Number of active cells: " << triangulation.n_active_cells()
              << std::endl;

    dof_handler.distribute_dofs(fe);

    std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs()
              << std::endl
              << std::endl;

    DynamicSparsityPattern dsp(dof_handler.n_dofs(), dof_handler.n_dofs());
    DoFTools::make_sparsity_pattern(dof_handler, dsp);
    sparsity_pattern.copy_from(dsp);

    mass_matrix.reinit(sparsity_pattern);
    laplace_matrix.reinit(sparsity_pattern);
    matrix_u.reinit(sparsity_pattern);
    matrix_v.reinit(sparsity_pattern);

    MatrixCreator::create_mass_matrix(dof_handler,
                                      QGauss<dim>(fe.degree + 1),
                                      mass_matrix);
    MatrixCreator::create_laplace_matrix(dof_handler,
                                         QGauss<dim>(fe.degree + 1),
                                         laplace_matrix);

    solution_u.reinit(dof_handler.n_dofs());
    solution_v.reinit(dof_handler.n_dofs());
    old_solution_u.reinit(dof_handler.n_dofs());
    old_solution_v.reinit(dof_handler.n_dofs());
    system_rhs.reinit(dof_handler.n_dofs());

    constraints.close();
  }

  template <int dim>
  void WaveEquation<dim>::solve_u()
  {
    SolverControl            solver_control(1000, 1e-8 * system_rhs.l2_norm());
    SolverCG<Vector<double>> cg(solver_control);

    cg.solve(matrix_u, solution_u, system_rhs, PreconditionIdentity());

    std::cout << " u-equation: " << solver_control.last_step()
              << " CG iterations." << std::endl;
  }

  template <int dim>
  void WaveEquation<dim>::solve_v()
  {
    SolverControl            solver_control(1000, 1e-8 * system_rhs.l2_norm());
    SolverCG<Vector<double>> cg(solver_control);

    cg.solve(matrix_v, solution_v, system_rhs, PreconditionIdentity());

    std::cout << " v-equation: " << solver_control.last_step()
              << " CG iterations." << std::endl;
  }

  template <int dim>
  void WaveEquation<dim>::output_results() const
  {
    DataOut<dim> data_out;

    data_out.attach_dof_handler(dof_handler);
    data_out.add_data_vector(solution_u, "U");
    data_out.add_data_vector(solution_v, "V");

    data_out.build_patches();

    const std::string filename =
      "solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtu";
    DataOutBase::VtkFlags vtk_flags;
    vtk_flags.compression_level =
      DataOutBase::VtkFlags::ZlibCompressionLevel::best_speed;
    data_out.set_flags(vtk_flags);
    std::ofstream output(filename);
    data_out.write_vtu(output);
  }

  template <int dim>
  void WaveEquation<dim>::run()
  {
    setup_system();

    VectorTools::project(dof_handler,
                         constraints,
                         QGauss<dim>(fe.degree + 1),
                         InitialValuesU<dim>(),
                         old_solution_u);
    VectorTools::project(dof_handler,
                         constraints,
                         QGauss<dim>(fe.degree + 1),
                         InitialValuesV<dim>(),
                         old_solution_v);

    Vector<double> tmp(solution_u.size());
    Vector<double> forcing_terms(solution_u.size());

    for (; time <= 5; time += time_step, ++timestep_number)
      {
        std::cout << "Time step " << timestep_number << " at t=" << time
                  << std::endl;

        mass_matrix.vmult(system_rhs, old_solution_u);

        mass_matrix.vmult(tmp, old_solution_v);
        system_rhs.add(time_step, tmp);

        laplace_matrix.vmult(tmp, old_solution_u);
        system_rhs.add(-theta * (1 - theta) * time_step * time_step, tmp);

        RightHandSide<dim> rhs_function;
        rhs_function.set_time(time);
        VectorTools::create_right_hand_side(dof_handler,
                                            QGauss<dim>(fe.degree + 1),
                                            rhs_function,
                                            tmp);
        forcing_terms = tmp;
        forcing_terms *= theta * time_step;

        rhs_function.set_time(time - time_step);
        VectorTools::create_right_hand_side(dof_handler,
                                            QGauss<dim>(fe.degree + 1),
                                            rhs_function,
                                            tmp);
        forcing_terms.add((1 - theta) * time_step, tmp);

        system_rhs.add(theta * time_step, forcing_terms);

        {
          BoundaryValuesU<dim> boundary_values_u_function;
          boundary_values_u_function.set_time(time);

          std::map<types::global_dof_index, double> boundary_values;
          VectorTools::interpolate_boundary_values(dof_handler,
                                                   0,
                                                   boundary_values_u_function,
                                                   boundary_values);

          matrix_u.copy_from(mass_matrix);
          matrix_u.add(theta * theta * time_step * time_step, laplace_matrix);
          MatrixTools::apply_boundary_values(boundary_values,
                                             matrix_u,
                                             solution_u,
                                             system_rhs);
        }
        solve_u();

        laplace_matrix.vmult(system_rhs, solution_u);
        system_rhs *= -theta * time_step;

        mass_matrix.vmult(tmp, old_solution_v);
        system_rhs += tmp;

        laplace_matrix.vmult(tmp, old_solution_u);
        system_rhs.add(-time_step * (1 - theta), tmp);

        system_rhs += forcing_terms;
        {
          BoundaryValuesV<dim> boundary_values_v_function;
          boundary_values_v_function.set_time(time);

          std::map<types::global_dof_index, double> boundary_values;
          VectorTools::interpolate_boundary_values(dof_handler,
                                                   0,
                                                   boundary_values_v_function,
                                                   boundary_values);
          matrix_v.copy_from(mass_matrix);
          MatrixTools::apply_boundary_values(boundary_values,
                                             matrix_v,
                                             solution_v,
                                             system_rhs);
        }
        solve_v();

        output_results();

        std::cout << " Total energy: "
                  << (mass_matrix.matrix_norm_square(solution_v) +
                      laplace_matrix.matrix_norm_square(solution_u)) /
                       2
                  << std::endl;

        old_solution_u = solution_u;
        old_solution_v = solution_v;
      }
  }
} // namespace Step23

int main()
{
  try
    {
      using namespace Step23;

      WaveEquation<2> wave_equation_solver;
      wave_equation_solver.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.933013379573822, "perplexity": 1834.1866167293285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00143.warc.gz"}
https://www.physicsforums.com/threads/electricity-and-resistance-question.4599/
Electricity and resistance Question

1. Aug 7, 2003

PerpetuallyFrustrate

A bird stands on an electric transmission line carrying 2500 A. The line has .000025 ohms resistance per meter and the bird's feet are 4 cm apart. What voltage does the bird feel?

I know R = p L/A, but how do I figure out the area? Also, do I use 4 cm as the length?

2. Aug 7, 2003

HInt

Could you figure the voltage if the bird had its feet 1 meter apart?

3. Aug 7, 2003

PerpetuallyFrustrate

No, I don't understand whether the 4 cm is the L, and if so, then what is the area?

4. Aug 8, 2003

futz

If you know the resistance per meter (0.000025) and the number of meters (0.04), their product should give you the total resistance in the 4 cm length. Knowing the current, it should be a straightforward use of Ohm's Law to get the voltage across that section of the wire. I'm not sure why you would need to use the area at all.
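For the record, this worked arithmetic is an editorial addition rather than part of the thread; it just carries futz's recipe through to a number:

$R = \left(0.000025\ \tfrac{\Omega}{\text{m}}\right)\left(0.04\ \text{m}\right) = 1\times 10^{-6}\ \Omega, \qquad V = IR = \left(2500\ \text{A}\right)\left(1\times 10^{-6}\ \Omega\right) = 2.5\ \text{mV}$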
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8600179553031921, "perplexity": 1249.2494586628404}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608617.6/warc/CC-MAIN-20170525214603-20170525234603-00456.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/2009_AMC_12A_Problems/Problem_12
# 2009 AMC 12A Problems/Problem 12

## Problem

How many positive integers less than $1000$ are $6$ times the sum of their digits?

## Solution

### Solution 1

The sum of the digits is at most $9+9+9=27$. Therefore the number is at most $6\cdot 27 = 162$. Out of the numbers $1$ to $162$ the one with the largest sum of digits is $99$, and the sum is $9+9=18$. Hence the sum of digits will be at most $18$.

Also, each number with this property is divisible by $6$, therefore it is divisible by $3$, and thus also its sum of digits is divisible by $3$. Thus, the number is divisible by $18$.

We only have six possibilities left for the sum of the digits: $3$, $6$, $9$, $12$, $15$, and $18$, which lead to the integers $18$, $36$, $54$, $72$, $90$, and $108$ as candidates; but since the number is divisible by $9$, the digits can only add to $9$ or $18$, leaving only $54$ and $108$. We can check that $54$ is the only number that satisfies the conditions in the problem, so the answer is $1$.

### Solution 2

We can write each integer between $1$ and $999$ inclusive as $100a+10b+c$ where $a,b,c \in \{0,1,\dots,9\}$ and $a+b+c>0$. The sum of digits of this number is $a+b+c$, hence we get the equation $100a+10b+c = 6(a+b+c)$. This simplifies to $94a + 4b - 5c = 0$. Clearly for $a>0$ there are no solutions (as $94a \geq 94$ while $5c - 4b \leq 45$), hence $a=0$ and we get the equation $4b=5c$. This obviously has only one valid solution with a positive number, $(b,c)=(5,4)$, hence the only solution is the number $54$, and the answer is $1$.

### Solution 3

The sum of the digits is at most $27$. Therefore the number is at most $162$. Since the number is $6$ times the sum of its digits, it must be divisible by $6$, therefore also by $3$, therefore the sum of its digits must be divisible by $3$. With this in mind we can conclude that the number must be divisible by $18$, not just by $6$. Since the number is divisible by $18$, it is also divisible by $9$, therefore the sum of its digits is divisible by $9$, therefore the number is divisible by $54$, which leaves us with $54$, $108$ and $162$. Only $54$ is $6$ times its digit sum, hence the answer is $1$.
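As an editorial sanity check, not part of the original wiki page, a short brute-force search confirms both the unique solution and the count:

#include <iostream>

// Sum of the decimal digits of n.
int digit_sum(int n)
{
  int s = 0;
  for (; n > 0; n /= 10)
    s += n % 10;
  return s;
}

int main()
{
  int count = 0;
  for (int n = 1; n < 1000; ++n)
    if (n == 6 * digit_sum(n))
    {
      std::cout << n << '\n'; // prints 54
      ++count;
    }
  std::cout << "count = " << count << '\n'; // prints count = 1
  return 0;
}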
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738010764122009, "perplexity": 171.2267948753141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00410.warc.gz"}
https://www.physicsforums.com/threads/parabolic-cylinder-line-integral.231358/
Parabolic Cylinder line integral

1. Apr 26, 2008

christopnz

1. The problem statement, all variables and given/known data

(Parabolic Cylinder) Find the area of the surface extending upward from x^2 + y^2 = 1 to z = 1 - x^2 using a line integral.

2. Could someone please outline the method for solving this? I tried using spherical coordinates but am unsure if this was correct.

3. The attempt at a solution

Last edited: Apr 26, 2008

2. Apr 26, 2008

HallsofIvy

Staff Emeritus

Surely, since this is a cylinder, cylindrical coordinates would be better? That is, use polar coordinates for two coordinates, z for the third. $x = r\cos(\theta)$, $y = r\sin(\theta)$, so you will be integrating $z = 1 - x^2 = 1 - r^2\cos^2(\theta)$ over the unit circle.

3. Apr 26, 2008

christopnz

ty, that helped a lot
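For completeness, an editorial addition not part of the thread: interpreting the problem as the lateral area of the cylinder wall between $z = 0$ and $z = 1 - x^2$, the hint works out as a line integral over the unit circle, where $r = 1$ gives arc length $ds = d\theta$:

$A = \oint_{x^2+y^2=1} z\, ds = \int_0^{2\pi} \left(1 - \cos^2\theta\right) d\theta = \int_0^{2\pi} \sin^2\theta \, d\theta = \pi$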
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9829309582710266, "perplexity": 1657.8562737481452}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00493.warc.gz"}
https://socratic.org/questions/given-any-sinusoidal-equation-how-do-you-identify-the-type-of-transformations-th
# Given any sinusoidal equation, how do you identify the type of transformations that are made?

Mar 12, 2018

Example: Describe the transformations to get $g \left(x\right) = 2 \sin \left(3 \left(x + \frac{\pi}{4}\right)\right) + 2$ from $f \left(x\right) = \sin x$

Here is what each of the parameters in the equation $y = a \sin \left(b \left(x - c\right)\right) + d$ controls:

$a \to$ vertical stretch

$\frac{1}{b} \to$ horizontal stretch

$c \to$ phase shift

$d \to$ vertical translation

So in the given equation, we have a vertical stretch by a factor of $2$, a horizontal stretch by a factor of $\frac{1}{3}$, a translation $\frac{\pi}{4}$ units left, and a translation $2$ units up.

Hopefully this helps!
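As a quick editorial check, not in the original answer, these parameters also pin down the standard features of the graph of $g$: amplitude $|a| = 2$, period $\frac{2 \pi}{b} = \frac{2 \pi}{3}$, midline $y = d = 2$, and phase shift $c = -\frac{\pi}{4}$ (that is, $\frac{\pi}{4}$ to the left).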
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936090350151062, "perplexity": 816.8829259461846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00491.warc.gz"}
https://blog.r6l7.com/tag/sysadmin/
Well, that was an ordeal. It took the better part of a week, but I have a server again. This blogging platform was not part of »
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259883522987366, "perplexity": 980.5060425498991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998238.28/warc/CC-MAIN-20190616122738-20190616144738-00295.warc.gz"}
http://mathhelpforum.com/pre-calculus/17976-linear-programming-using-graphs.html
# Math Help - Linear Programming using graphs 1. ## Linear Programming using graphs a) Use graphical methods to determine the feasible region for these constraints: Subject to 2x1+5x2 is greater than or equal to 10 2x1 - x2 is less than or equal to 6 x1 is greater than or equal to 1 x1 + x2 is less or equal to 6 (the 1 & 2 after the x is small which sits just below the x) b) subject to the constraints in a), i) minimise P=x2 ii) maximise Q=10x1 + 5x2 2. This is good for practice. It will be long if I show it all, so let me assume you know how to graph inequalities and how to find the intersection of two inequalities. I don't know how to sketch figures here so sketch it on paper. Subject to 2x1+5x2 is greater than or equal to 10 The x1 and x2 will confuse you no end, so let x = x1, and y = x2. So the four constraints are: 2x +5y >= 10 -------(1) 2x -y <= 6 ----------(2) x >= 1 --------------(3) x +y <= 6 -----------(4) Plot the 4 inequalities on the same x,y rectangular axes. You'd find that the feasible region is a quadrilateral whose four corner points are (1,1.6) .......or (1,8/5) (1,5) (4,2) (2.67,0.67)....or (8/3,2/3) i) minimise P=x2 Or, minimize P = y. The corner point with the lowest y is (8/3,2/3). Therefore, minimum P = 2/3 --------------------------answer. ii) maximise Q=10x1 + 5x2 Or, maximize Q = 10x +5y -------------** You have to test that to all of the 4 corner points to see which corner gives the highest Q. ----at (1,8/5)---- Q = 10(1) +5(8/5) = 18 ----at (1,5)------ Q = 10(1) +5(5) = 35 ----at (4,2)------ Q = 10(4) +5(2) = 50 ----at (8/3,2/3)-- Q = 10(8/3) +5(2/3) = 90/3 = 30 Therefore, maximum Q is 50. -------------------------answer. If anything is not clear, ask me. 3. Hello, tondie2! Use graphical methods to determine the feasible region for these constraints: . . $\begin{array}{cc}2x+5y \:\geq\:10 & [1] \\ 2x - y \:\leq \:6 & [2]\\ x + y \:\leq\:6 & [3]\\ x \:\geq \:1 & [4]\end{array}$ Graph the line of [1]. It has intercepts: $(5,0),\;(0,2)$ Shade the region above the line. Graph the line of [2]. It has intercepts: $(3,0),\;(0,-6)$ Shade the region above the line. Graph the line of [3]. It has intercepts: $(6,0),\;(0,6)$ Shade the region below the line. Graph the line of [4]. It is a vertical line with x-intercept $(1,0)$. Shade the region to the right of the line. As ticbol pointed out, the region is a quadrilateral. . . But I differ on one vertex. $[1] \cap [2]\!:\;\left(\frac{10}{3},\,\frac{2}{3}\right)$ $[1] \cap [4]\!:\;\left(1,\,\frac{8}{5}\right)$ $[2] \cap [3]\!:\;(4,\,2)$ $[3] \cap [4]\!:\;(1,\,5)$ 4. Originally Posted by ticbol This is good for practice. It will be long if I show it all, so let me assume you know how to graph inequalities and how to find the intersection of two inequalities. I don't know how to sketch figures here so sketch it on paper. Originally Posted by Soroban As ticbol pointed out, the region is a quadrilateral. But I differ on one vertex. $[1] \cap [2]\!:\;\left(\frac{10}{3},\,\frac{2}{3}\right)$ $[1] \cap [4]\!:\;\left(1,\,\frac{8}{5}\right)$ $[2] \cap [3]\!:\;(4,\,2)$ $[3] \cap [4]\!:\;(1,\,5)$ If you want to draw graphs badly enough, the picture environment isn't too awful. Here is the quadrilateral. $\setlength{\unitlength}{1cm} \begin{picture}(4,4) \qbezier(3.33,.667)(3.33,.667)(1,1.6) \qbezier(4,2)(4,2)(3.33,.667) \qbezier(4,2)(4,2)(1,5) \qbezier(1,1.6)(1,1.6)(1,5) \qbezier(0,0)(0,0)(4,0) \qbezier(0,0)(0,0)(0,5) \end{picture} $ Below is the LaTeX code. I have something like this stored in a document. 
I cut and pasted it here and then modified it. The lines are drawn by qbezier. \qbezier(X,Y)(X,Y)(V,W) draws a line between points (X,Y) and (V,W). Note the (X,Y) is repeated. You could also repeat the (V,W) with the same effect, and the order of the points does not matter. The \setlength, \begin{picture} and \end{picture} commands are just a little housekeeping that needs to be done. The (4,4) in \begin{picture} sets the size of the axes. The \setlength sets the size of the picture. If LaTeX complains about the size of the image, reduce the length.

Code:
\setlength{\unitlength}{1cm}
\begin{picture}(4,4)
\qbezier(3.33,.667)(3.33,.667)(1,1.6)
\qbezier(4,2)(4,2)(3.33,.667)
\qbezier(4,2)(4,2)(1,5)
\qbezier(1,1.6)(1,1.6)(1,5)
\qbezier(0,0)(0,0)(4,0)
\qbezier(0,0)(0,0)(0,5)
\end{picture}

The first 4 qbeziers draw the quadrilateral. I just put the coordinates in straight from Soroban's post. The second 2 qbeziers draw the axes.
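An editorial footnote to the vertex discussion above: re-evaluating ticbol's objective at Soroban's corrected vertex does not change either answer. At $\left(\frac{10}{3},\,\frac{2}{3}\right)$ we get $Q = 10\cdot\frac{10}{3} + 5\cdot\frac{2}{3} = \frac{110}{3} \approx 36.7$, so the maximum is still $Q = 50$ at $(4,2)$, and the minimum $P = y = \frac{2}{3}$ is likewise unchanged.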
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385932445526123, "perplexity": 2109.190080188846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098685.20/warc/CC-MAIN-20150627031818-00250-ip-10-179-60-89.ec2.internal.warc.gz"}
https://gasstationwithoutpumps.wordpress.com/2017/06/23/uncompensated-transimpedance-amplifier/
Gas station without pumps

2017 June 23

Uncompensated transimpedance amplifier

Filed under: Circuits course — gasstationwithoutpumps @ 16:07

In my book Applied Electronics for Bioengineers, I have students build transimpedance amplifiers for phototransistors (and some students use them for electret microphones as well). In the book, I never deal with compensating the transimpedance amplifiers to avoid oscillation, as I try to stay away from students having to reason about the phase of signals, and oscillation has never been a problem in the student designs.

But I thought that I ought to understand the method myself, especially if I need to help students trying to do higher bandwidth, higher gain transimpedance amplifiers. First I read up on the subject—one of the better introductions is the Maxim application note 5129 Stabilize your transimpedance amplifier. The key concepts are the following:

• When the frequency is high enough (where the open-loop gain is limited by the gain-bandwidth product) the phase change of the amplifier is about –90° (or 90° for the negative input).

• If we set up a transimpedance amplifier with feedback resistor R, then the feedback consists of a low-pass RC filter: a voltage divider with R on tap and the input capacitance of the amplifier and any capacitance in parallel with the current source on the bottom.

• The phase change of a low-pass RC filter (gain $\frac{1}{1+j\omega RC}$) approaches –90° above the corner frequency.

• Having a phase change of 0° and gain ≥ 1 around a feedback loop results in instability and possible oscillation.

That means that we can have instability at frequencies between $\frac{1}{2\pi RC}$ and the gain-bandwidth product (though we probably only have problems for frequencies at least a factor of 3 above the low-pass corner frequency, since the phase change of the filter is only asymptotically –90°). If the parasitic capacitances are low and we only request small transimpedance gain, then RC is small, and the corner frequency of the low-pass filter is above the gain-bandwidth product, so there are no problems. Will the students ever encounter problems?

Today I tried to make an unstable transimpedance amplifier using the MCP6004 op amps that we use in class. The op amps have a gain-bandwidth product of 1MHz, so I needed an RC time constant much larger than 160ns. I chose 2MΩ and 47nF for an RC time constant of 94 ms and a corner frequency of 1.69Hz.

The very large bypass capacitors are there to ensure that there are no sneak paths through the power supply and positive input—so that I'm looking at the phenomenon I'm really interested in.

I connected the amplifier up to the Analog Discovery 2, and I definitely got instability:

There does seem to be a somewhat unstable oscillation happening.

The reasoning about the amplifier instability suggests that the oscillation should be at about the frequency where the gain around the loop is 1, that is where $\frac{f_{GBW}}{f}\frac{1}{2\pi f RC}=1$ or $f= \sqrt{\frac{f_{GBW}}{2\pi RC}}$. For the circuit I made, that would be around $\sqrt{1MHz \; 1.69Hz}= 1.3kHz$.

I did some FFTs of the waveform (averaging over hundreds of traces to reduce noise, since the signal is fluctuating). The peak is around 1380Hz, very close to the predicted oscillation frequency. Also visible are harmonics of 60Hz, which are the correct output of the transimpedance amplifier (picking up stray currents by capacitive coupling).
To compensate a transimpedance amplifier, we need to add a small capacitor in parallel with the feedback resistor, making the gain of the feedback filter $\frac{1+j\omega R_{F}C_{F}}{1+j\omega R_{F}(C_{F}+C_{i})}$, where $R_{F}$ and $C_{F}$ are the feedback components and $C_{i}$ is the input capacitance. For "optimal" compensation, we want to set the upper corner frequency $1/(2\pi R_{F}C_{F})$ at the geometric mean of the lower corner frequency $1/(2\pi R_{F}(C_{F} + C_{i}))$ and the gain-bandwidth product $f_{GBW}$. Using a larger capacitor (overcompensating) increases the phase margin (thus allowing for some variation from specs) at the cost of reducing the bandwidth of the final amplifier.

We can set the equation up as $1/(2\pi R_{F}C_{F})^2 = f_{GBW}/(2\pi R_{F}(C_{F} + C_{i}))$, which we can simplify by assuming that $C_{i} \gg C_{F}$ to get $C_{F} = \sqrt{ \frac{C_{i}}{2 \pi R_{F}f_{GBW}}}$, which for my design comes to 61pF.

A 68pF compensation capacitor cuts out the oscillation peak, but there is still a fair amount of noise around the corner frequency of the amplifier (1.2kHz).

Overcompensating with a 680pF capacitor reduces the noise substantially, but the bandwidth is reduced to 120Hz.

I also tried a somewhat more realistic example, with only a 2.2nF input capacitance, which calls for about a 13pF compensation capacitor. A 20pF capacitor does fine:

The oscillation is well suppressed by the compensation capacitor.

Now I have to decide how much (if any) of this to include in my book. Perhaps it can be an optional "advanced" section in the transimpedance amplifier chapter?
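To double-check the numbers in this post, here is a small back-of-the-envelope calculation; it is an editorial sketch, using only the component values given above:

#include <cmath>
#include <cstdio>

int main()
{
  const double pi    = 3.14159265358979;
  const double f_gbw = 1e6;   // MCP6004 gain-bandwidth product, Hz
  const double R     = 2e6;   // feedback resistor, ohms
  const double C_in  = 47e-9; // capacitance at the inverting input, F

  // Predicted oscillation frequency of the uncompensated amplifier.
  const double f_osc = std::sqrt(f_gbw / (2 * pi * R * C_in));
  // "Optimal" compensation capacitor.
  const double C_f = std::sqrt(C_in / (2 * pi * R * f_gbw));

  std::printf("f_osc ~ %.0f Hz\n", f_osc);      // ~1301 Hz (measured: ~1380 Hz)
  std::printf("C_f   ~ %.0f pF\n", C_f * 1e12); // ~61 pF
  return 0;
}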
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585023283958435, "perplexity": 1216.891111623607}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00211.warc.gz"}