url
stringlengths
15
1.13k
text
stringlengths
100
1.04M
metadata
stringlengths
1.06k
1.1k
http://mathhelpforum.com/calculus/138293-isolating-variable.html
# Math Help - Isolating a Variable 1. ## Isolating a Variable Hi, I am trying to do this question for a long time now, and i cant figure it out. F(x) = 2x^2-x^3; isolate x using any way you can think of. P.S: Factoring, and log/Ln doesn't work. I thought this couldn't be solved but apparently it can. 2. Are you sure factoring doesn't work? I factor $x^2$ out to get $x^2(2-x)$. So the roots are at x = 0, 2. 3. ## solve for x? Originally Posted by saurabh91 Hi, I am trying to do this question for a long time now, and i cant figure it out. F(x) = 2x^2-x^3; isolate x using any way you can think of. P.S: Factoring, and log/Ln doesn't work. I thought this couldn't be solved but apparently it can. Hi do you mean solve for x? eg. x=2? 4. I believe they mean to solve for x as a function of F(x) i.e. invert the function. It is cubic in x, so the solution is known, but its not pretty, I can tell you that (Wolfram Alpha can do it). 5. Originally Posted by saurabh91 Hi, I am trying to do this question for a long time now, and i cant figure it out. F(x) = 2x^2-x^3; isolate x using any way you can think of. P.S: Factoring, and log/Ln doesn't work. I thought this couldn't be solved but apparently it can. Dear saurabh91, I think you want to subject x in this equation is'nt?? I don't have a hint of how to get the answer but I will let you know when I do. Anyway I will give you the answer which I obtained by using the "Microsoft math".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8853046298027039, "perplexity": 505.76274428823825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296462.22/warc/CC-MAIN-20150323172136-00217-ip-10-168-14-71.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/91369-margin-error.html
# Math Help - Margin of error 1. ## Margin of error Anxiety levels are usually elevated in persons with low-esteem, although there are gender differences in the extent to which this is true. In a random sample of 24 17-year olds females with known low self-esteem the average anxiety score on a standard test was 7.62 with a standard deviation of 3.45. Construct a 95% confidence interval for the true mean anxiety score on this scale for 17-year-old females with self-esteem problems. What is the margin of error associated with this confidence interval? I've already done the 95% confidence interval and got (6.16, 9.08) but I'm a little stuck on the margin of error part. There's a formula Z(standard deviation/square root n). 2. Originally Posted by brumby_3 Anxiety levels are usually elevated in persons with low-esteem, although there are gender differences in the extent to which this is true. In a random sample of 24 17-year olds females with known low self-esteem the average anxiety score on a standard test was 7.62 with a standard deviation of 3.45. Construct a 95% confidence interval for the true mean anxiety score on this scale for 17-year-old females with self-esteem problems. What is the margin of error associated with this confidence interval? I've already done the 95% confidence interval and got (6.16, 9.08) but I'm a little stuck on the margin of error part. There's a formula Z(standard deviation/square root n). Read this: Statistics Tutorial: Confidence Interval 3. I guess we have normality and you're using a t with n-1=23 degrees of freedom? I'm not really happy about using a z (st. normal) with a small sample.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087912440299988, "perplexity": 867.2876401954139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/371275/variance-of-mle-for-the-common-mean-of-two-normals?answertab=votes
# Variance of MLE for the common mean of two normals I have the following problem. $$X_i \overset{IID}{\sim} Normal(\mu, \sigma_1^2)$$, $$Y_j \overset{IID}{\sim} Normal(\mu, \sigma_2^2), i = 1, \cdots, m, j=1, \cdots n$$ Find the MLE for the $$\hat{\mu}^{MLE}$$, where both $$\sigma_1^2, \sigma_2^2$$ are unknown. I bumped into the following old paper : Estimating the Common Mean of Several Normal Populations. I understand they suggest $$\hat{\mu}^{MLE} = \frac {m \bar{x}/s_1^2 + n \bar{y}/s_2^2} {1/s_1^2 + 1/s_2^2}$$ where $$s_1^2 = \frac{\sum_{i=1}^m (x_i - \bar{x})^2}{m-1}, s_2^2 = \frac{\sum_{j=1}^n (y_j - \bar{y})^2}{n-1}$$, the sample variance of each. I understand $$\mathbb{E}(\hat{\mu}^{MLE}) = \mu$$, because sample mean and sample variance is are independent in IID normals, we have $$\mathbb{E}(\hat{\mu}^{MLE}) = \mathbb{E}\left[ \mathbb{E}(\hat{\mu}^{MLE}| s_1^2, s_2^2) \right] = \mathbb{E}\left[ \mu \right] = \mu$$ But the variance of $$\hat{\mu}^{MLE}$$ was not suggested. My attempt $$\mathrm{Var}(\hat{\mu}^{MLE}) = \mathbb{E}(\mathrm{Var}(\hat{\mu}^{MLE}|s_1^2, s_2^2)) + \mathrm{Var} (\mathbb{E}(\hat{\mu}^{MLE}|s_1^2, s_2^2)))$$ The second term is zero, because conditional mean yields constant. But the mean of variance part, I have variance $$\mathrm{Var}(\hat{\mu}^{MLE}|s_1^2, s_2^2) = A^2 m \sigma_1^2 + (1-A)^2 n \sigma_2^2$$ where $$A = \frac {1/s_1^2} {1/s_1^2 + 1/s_2^2}$$. But I can't proceed. Is there anyone to help me with algebra/Any papers to refer to? • Is this not covered by the answer to your earlier question stats.stackexchange.com/questions/370946/… ?? – Glen_b Oct 11 '18 at 5:35 • @Glen_b Well, I think I found MLE, but even after some algebraic justfications, there still remains the problem of interval estimation. I'm asking if there is any closed form for the variance of the above MLE. So I don't think this post to be duplicate. – moreblue Oct 11 '18 at 5:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9898092746734619, "perplexity": 571.9567195417011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00426.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/i-don-t-formulas-used-solve-question-book-tell-formulas-used-solve-thisquestion-q732741
How to solve this question? I don't get the formulas used to solve this question in the book.Can you please tell me which formulas were used to solve thisquestion?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949240684509277, "perplexity": 129.321225909517}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704943681/warc/CC-MAIN-20130516114903-00081-ip-10-60-113-184.ec2.internal.warc.gz"}
http://clay6.com/qa/1127/using-the-method-of-integration-find-the-area-of-the-region-bounded-by-line
Browse Questions Using the method of integration find the area of the region bounded by lines: $2x + y = 4, 3x - 2y = 6$ and $x - 3y + 5 = 0$ Toolbox: • Suppose three lines intersect at three different points,the enclosed area will be the area bounded by the three lines at their points intersection. • The limits can be obtained by solving these equations finding the points of intersection. The graph for the three lines 2x+y=4,3x-2y=6 and x-3y+5=0 can be sketched and drawn as shown in th fig. The required area is the region bounded by the lines in the area of the $\bigtriangleup ABC.$ To find the limits,let us find the pints of intersection of the lines. Let 2x+y=4------(1) $\;\;\;\;$3x-2y=6------(2) $\;\;\;\;$x-3y=-5------(3) on solving (1) and (2) we get $(\times 3)\;$2x+ y=4 $(\times 2)\;$3x-2y=6 --------------------- $\;$6x+3y=12 -6x+3y=12 -------------------- $\;\;\;\;\;$7y= 0 $\;\;\;\;\;\;$y= 0 $\;\;\;\;\;\;$x= 2 Hence the point of intersection for line (1) and (2) is (2,0) on solving (2) and (3) we get $\;\;\;\;\;\;$3x-2y= 6 $(\times 3)\;$ x-3y =-5 ------------------------- $\;\;\;\;\;$3x-2y= 6 $\;\;\;\;\;$3x-9y=-15 ----------------------- $\;\;\;\;\;\;\;$7y=21 $\;\;\;\;\;\;\;$y= 3.Hence x=4 Hence the point of intersection between emu(2) and equ(3) is (4,3) on solving equ(3) and (1) we get $(\times 2)$x-3y=-5 $\;\;\;\;\;$2x+ y=4 ---------------------- $\;\;\;\;\;$2x-6y=-10 $\;\;\;\;\;$2x+ y= 4 ---------------------- $\;\;\;\;\;$-7y=-14 $\;\;\;\;\;$y=2.Hence x=1 Hence the point of intersection between emu(3) and emu(1) is (1,2). Hence the required area A is A=(Area enclosed between the line AC and x-axis)+(Area enclosed between the line AB and x-axis)+(Area enclosed between the line BC and x-axis) $A=\int_1^4\bigg(\frac{x+5}{3}\bigg)dx+\int_2^1(4-2x)dx+\int_4^2\bigg(\frac{3x-6}{2}\bigg)dx$ $A=\int_1^4\bigg(\frac{x+5}{3}\bigg)dx-\int_1^2(4-2x)dx-\int_2^4\bigg(\frac{3x-6}{2}\bigg)dx$ on integrating we get, $A=\frac{1}{3}\begin{bmatrix}\frac{x^2}{2}+5x\end{bmatrix}_1^4-\begin{bmatrix}4x-x^2\end{bmatrix}_1^2-\frac{1}{3}\begin{bmatrix}\frac{3x^2}{2}-6x\end{bmatrix}_2^4$ $A=\frac{1}{3}[8+20- {\frac{1}{2}}-5]_1^4-[8-4-4+1]_1^2- {\frac{1}{2}}[24-24-6+12]_2^4$ $A=(\frac{1}{3}\times \frac{45}{2}-(1)-\frac{1}{2}(6)$ $A=\frac{15}{2}-1-3$ $\;\;\;=\frac{7}{2}sq.units$ Hence the required area is $\frac{7}{2}sq. units.$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898414969444275, "perplexity": 1773.9351600233008}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719463.40/warc/CC-MAIN-20161020183839-00342-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-spring-is-attached-to-the-ceiling-by-a-string-with-no-weights.721889/
# A spring is attached to the ceiling by a string with no weights 1. Nov 10, 2013 ### vstrimaitis 1. A cylindrical string is made out of thin wire. • The distance between every loop of an unstretched spring is equal; • The radius of every loop of the spring is r = 4 cm; • The length of an unstretched spring is l = 20 cm; • The mass of the spring is m = 50 g; The spring is hung on the ceiling by a non-elastic string which has a length of a = 10 cm. When the string is hanging, it has the length of l' = 25 cm. 2. • What is the constant of the spring (k)? • What is the period of this system, if the angle, at which it is released, is small? 3. I haven't really solved any problems when the weight is not attached on the end of the spring. Does it work the same way or not? If it does, I'll be able to find k. But I have no idea what to do with the period... Any help at all would be appreciated ^^ 2. Nov 10, 2013 ### Staff: Mentor Please add a sketch of the system. It is hard to visualize this based on your post. If something is connected to a point within the spring, you can split the spring in two pieces and consider them as two springs. 3. Nov 10, 2013 ### vstrimaitis I hope this will clarify the problem at least a little bit. #### Attached Files: • ###### spring.jpg File size: 5.4 KB Views: 71 4. Nov 10, 2013 ### Staff: Mentor Then you'll have to consider the weight of the spring. For a length x in the unstretched spring (where you need some definition of x), what is the mass below that point? What is the stretching force there? If the total spring has a constant of D, what can you say about each point, and finally the stretching of the whole spring?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8350141644477844, "perplexity": 493.7915377670797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646375.29/warc/CC-MAIN-20180319042634-20180319062634-00486.warc.gz"}
https://eccc.weizmann.ac.il/keyword/17271/
Under the auspices of the Computational Complexity Foundation (CCF) REPORTS > KEYWORD > RANDOM LINEAR CODES: Reports tagged with random linear codes: TR10-003 | 6th January 2010 Venkatesan Guruswami, Johan Håstad, Swastik Kopparty #### On the List-Decodability of Random Linear Codes For every fixed finite field $\F_q$, $p \in (0,1-1/q)$ and $\varepsilon > 0$, we prove that with high probability a random subspace $C$ of $\F_q^n$ of dimension $(1-H_q(p)-\varepsilon)n$ has the property that every Hamming ball of radius $pn$ has at most $O(1/\varepsilon)$ codewords. This ... more >>> TR21-139 | 24th September 2021 Venkatesan Guruswami, Jonathan Mosheiff #### Punctured Large Distance Codes, and Many Reed-Solomon Codes, Achieve List-Decoding Capacity Revisions: 2 We prove the existence of Reed-Solomon codes of any desired rate $R \in (0,1)$ that are combinatorially list-decodable up to a radius approaching $1-R$, which is the information-theoretic limit. This is established by starting with the full-length $[q,k]_q$ Reed-Solomon code over a field $\mathbb{F}_q$ that is polynomially larger than the ... more >>> ISSN 1433-8092 | Imprint
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654530048370361, "perplexity": 1557.3977196367505}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00330.warc.gz"}
https://www.lessonplanet.com/teachers/similarity-math-10th
# Similarity ##### This Similarity worksheet also includes: In this similarity worksheet, 10th graders solve and complete 14 different types of problems. First, they find the length of each segment given two lines parallel. Then, students find the measure of each segment of a parallelogram illustrated. They also prove a triangle congruent.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8102812170982361, "perplexity": 1450.8421966614585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863689.50/warc/CC-MAIN-20180520205455-20180520225455-00615.warc.gz"}
http://mathhelpforum.com/calculus/53303-derrivative.html
# Math Help - Derrivative 1. ## Derrivative Determine whether f'(0) exists. f(x)={ xsin(1/x) if x does not =0 { { 0 if x=0 { Sorry but its suposed to be a piecewise function. 2. Originally Posted by johntuan Determine whether f'(0) exists. f(x)={ xsin(1/x) if x does not =0 { { 0 if x=0 { Sorry but its suposed to be a piecewise function. By definition, $f'(0) = \lim_{x\to 0} \frac{f(x) - f(0)}{x - 0} = \lim_{x\to 0} \sin \frac{1}{x}$ But this limit does not exist. Why is that? 3. because if u plug in x the denominator is 0? 4. The limit can exist even if the function is not defined at that point.. ..there is another reason.. 5. im still not too sure but i think it has something to do with the sin right? 6. Actually there is a point which is seen hard Assume if you put in sin1/x any number , it goes a range if you put it in 0 ''zero '' is criticall 1/x x=0 result of this is the infinitesimal sin inf. is not available but if you think that sin x for every x value goes [-1,1] we all know there is no chance for sinx and if we put ( x=inf. ) it always goes between [-1,1] range but none of them is not taken by it so we account its range ''not available'' this equation helps us generally limit and derivative problems
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.944391131401062, "perplexity": 1292.1387262647136}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919066.8/warc/CC-MAIN-20140901014519-00435-ip-10-180-136-8.ec2.internal.warc.gz"}
https://hbfs.wordpress.com/2013/03/12/compressed-series-part-ii/
## Compressed Series (Part II) Last week we looked at an alternative series to compute $e$, and this week we will have a look at the computation of $e^x$. The usual series we learn in calculus textbook is given by $\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$ We can factorize the expression as $\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots$ We can make the factorials and power disappear. Indeed, we can rewrite the above as $\displaystyle =1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots$ $\displaystyle =1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\big(1+\frac{x}{4}\big(+\cdots$ $\displaystyle =1+x+\frac{x^2}{2!}\big(1+\frac{x}{3}\big(1+\frac{x}{4}\big(+\cdots$ $\displaystyle =1+x\big(1+\frac{x}{2}\big(1+\frac{x}{3}\big(1+\frac{x}{4}\big(+\cdots$ This series already seems much easier to compute. A straightforward C implementation leads to something like float_x naive_exp(float_x x, int nb_terms) { float_x s=1; float_x n=1; float_x xx=1; for (int i=1;i<=nb_terms;i++) { n*=i; xx*=x; s+=xx/n; } return s; } Let us now look at series compression in this case. It’s a bit more laborious than for the series for $e$. Indeed, if we group terms two by two, we find, somewhere in the series, the successive terms $\displaystyle \cdots+\left(\frac{x^n}{n!}+\frac{x^{n+1}}{(n+1)!}\right)+\cdots$ which we rewrite as $\displaystyle \cdots+\left(\frac{x^{2k}}{(2k)!}+\frac{x^{2k+1}}{(2k+1)!}\right)+\cdots$ which we simplify to find $\displaystyle \cdots+\left(\frac{x^{2k}(2k+1+x)}{(2k+1)!}\right)+\cdots$ which yields, again, a series with a convergence rate roughly double of the original. This yield the following code: float_x compressed_exp(float_x x, int nb_terms) { float_x s=0; float_x k_bang=1; float_x xx=1; for (int k=0,t=0;k<=nb_terms;k++,t+=2,k_bang*=(t+1)*t) { s+=( xx*(t+1+x) )/ k_bang; xx*=(x*x); // x^(2k) } return s; } Let us now compare the convergence for, say, $n=3$. We get it. Naive Compressed 1 1 4 2 4 13 3 8.5 18.4 4 13 19.84642857142857 5 16.375 20.06339285714286 6 18.4 20.08410308441558 7 19.4125 20.08546859390609 8 19.84642857142857 20.08553443097081 9 20.00915178571428 20.08553685145113 10 20.06339285714285 20.08553692151767 11 20.07966517857142 20.08553692315559 12 20.08410308441558 20.08553692318715 13 20.08521256087662 20.08553692318766 14 20.08546859390609 20.08553692318767 15 20.08552345812669 20.08553692318767 16 20.08553443097081 20.08553692318767 17 20.08553648837908 20.08553692318767 18 20.08553685145113 20.08553692318767 19 20.08553691196314 20.08553692318767 20 20.08553692151767 20.08553692318767 21 20.08553692295084 20.08553692318767 22 20.08553692315558 20.08553692318767 23 20.08553692318350 20.08553692318767 24 20.08553692318715 20.08553692318767 25 20.08553692318760 20.08553692318767 26 20.08553692318765 20.08553692318767 27 20.08553692318766 20.08553692318767 28 20.08553692318766 20.08553692318767 29 20.08553692318766 20.08553692318767 30 20.08553692318766 20.08553692318767 Again, we see that the compressed series converges more rapidly. It reaches the “exact” value $20.08553692318767$ (as computed by the C stdlib exp function, the reference in this case) at iteration 14, while the naive series undershoots a bit, even after 30 iterations. * * * Of course, if one looks at the series $\displaystyle \sum_{k=0}^\infty \frac{x^{2k}(2k+1+x)}{(2k+1)!}$, he could be hard-pressed to figure out where it comes from. If you’re going to use it in code, you should leave an explanatory comment, something that explains how to derive this series and why you’re using it. This also holds if you’re writing a mathematical text. The reader may not be stupid, but he is, I suppose, unlikely to figure out all the tricks you use by himself—and that’s why he’s reading you in the first place.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315019011497498, "perplexity": 2962.004821864447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587496.62/warc/CC-MAIN-20171216084601-20171216110601-00694.warc.gz"}
https://motls.blogspot.com/2006/05/new-values-of-g-and-fine-structure.html
## Wednesday, May 24, 2006 ... // ### New values of "g" and the fine-structure constant Gerry Gabrielse, an experimenter from Harvard University, and his collaborators are going to announce new, more accurate values of the fundamental constants. Using their single-electron quantum cyclotron, they can see that the new magnetic moment of the electron is • g/2 = 1.001 159 652 180 85 (76). As you can see, there are 13 significant figures or so - the value is six times more accurate than ever before. Using the cyclotron result for "g" above plus QED theorists from other universities, they can also deduce the value of the fine-structure constant. The theoretical calculation, starting with the terms • g/2 = 1 + alpha / 2 pi, requires to calculate 891 diagrams with up to four loops, and the result for the fine-structure constant • 1 / alpha = 137.035 999 710 (96) is ten times more accurate than the results from atom-recoil measurements. In fact, it is the first improvement of the accuracy in roughly 20 years. The precise value is sensitive on new physics at 130 GeV. All skillful numerologists are welcome to interpret the new data. Update: Thanks to Alejandro Rivero: the correct sequence in 1/alpha is indeed "999" instead of the previous typo "997". #### snail feedback (3) : Incidentally, this progress in the measurement of g/2 can be used to do some exersice on one of the favorite themes of Lubos' blog: frequentist versus bayesian. Remember that the notation .xxxxXX (yy) in the results is supossed to mean that the central value of the measurement is .xxxxXX and the gaussian width is two times .0000yy Now, 1) if a measurement was .xxxxXXX (yyy), calculate the probability of a new measurement .zzzzzzzqq (QQ). For instance take the old measurement of g/2 to calculate the probability of the new one 2) If a theoretical result was inside the 50% interval of the old measurement, which is the probability for it to be still in the 50% interval of the new one? 3) suggest your own question and solve it http://arxiv.org/abs/0712.2607 corrects the estimate for the four loops and then uses new measurement of Gabrielse,http://arxiv.org/abs/arXiv:0801.1134v1 so that know the fine structure constant is at 1/137.035 999 084 (51) I guess they suspected someone was wrong when they compared 999 710 (96) to the old 999 108 (450); the result was out for almost 2 sigma. Now on the contrary the new result is astonishingly near of the center of the oldest. Perhaps the error estimate is conservative, or perhaps they are fine tuning the math, knowing in advance the new results of Gabrielse? In any case here you have: 1/137.035 999 084 (51)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.818550705909729, "perplexity": 1116.2402640211603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00147.warc.gz"}
https://scholarship.rice.edu/browse?value=Durham%2C+Mark+Allen&type=author
Now showing items 1-1 of 1 • #### A semiclassical model of Rydberg atom collisions  (1991) A semiclassical model of Rydberg atom collisions with neutral molecules that result in electron attachment has been devised and implemented as a Monte Carlo computer program. Comparisons between model calculations and ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325586557388306, "perplexity": 1693.0981876455742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.hackmath.net/en/math-problem/6862
# 3d printer 3D printing ABS filament with diameter 1.75 mm has density 1.04 g/cm3. Find the length of m = 5 kg spool filament. (how to calculate length) l =  1998.8062 m ### Step-by-step explanation: Did you find an error or inaccuracy? Feel free to write us. Thank you! Tips to related online calculators Do you want to convert length units? Do you know the volume and unit volume, and want to convert volume units? Tip: Our Density units converter will help you with the conversion of density units. Do you want to convert mass units? #### You need to know the following knowledge to solve this word math problem: We encourage you to watch this tutorial video on this math problem: ## Related math problems and questions: • Plastic pipe Calculate the plastic pipe's weight with diameter d = 70 mm and length 380 cm if the wall thickness is 4 mm and the density of plastic is 1367 kg/m3. • Cu wire Copper wire has a length l = 820 m and diameter d = 10 mm. Calculate the weight if density of copper is ρ = 8500 kg/m3. Result round to one decimal place. • Aluminum wire Aluminum wire of 3 mm diameter has a total weight of 1909 kg and a density of 2700 kg/m3. How long is the wire bundle? • Iron sphere Iron sphere has weight 100 kg and density ρ = 7600 kg/m3. Calculate the volume, surface, and diameter of the sphere. • The copper wire The copper wire bundle with a diameter of 2.8mm has a weight of 5kg. How many meters of wire is bundled if 1m3 of copper weighs 8930kg? • The coil How many ropes (the diameter 8 mm) fit on the coil (threads are wrapped close together) The coil has dimension: the inner diameter 400mm, the outside diameter 800mm and the length of the coil is 470mm • The square The square oak board (with density ρ = 700 kg/m3) has a side length of 50 cm and a thickness of 30 mm. 4 holes with a diameter of 40 mm are drilled into the board. What is the weight of the board? • Steel tube The steel tube has an inner diameter of 4 cm and an outer diameter of 4.8 cm. The density of the steel is 7800 kg/m3. Calculate its length if it weighs 15 kg. • Copper Cu wire Copper wire with a diameter of 1 mm and a weight of 350 g is wound on a spool. Calculate its length if the copper density is p = 8.9 g/cm cubic. • The factory The factory ordered 500 hexagonal steel bars of square section with 25 mm side. How many cars with a load capacity of 3 tonnes will be needed for bars move if the steel density is 7,850 kg.m-3? • Iron density Calculate the weight of a 2 m long rail pipe with an internal diameter of 10 cm and a wall thickness of 3 mm. The iron density is p = 7.8 g/cm3. • Copper wire What is the mass of 500 m of copper wire with a diameter of 1 mm, if ρ = 8.9 g/cm ^ 3? • Costume Denisa is preparing for a goldsmith's costume carnival. During the preparations, she thought she would let her hair wipe instead - she would apply a 5 μm thick layer of gold to each hair. How much gold would Denisa need? Assume that all hundred thousand D • Barrel 3 Barrel with water weights 118 kg. When we get off 75% of water, it weights 35 kg. How many kg has an empty barrel? • Concrete hatch The concrete hatch for a round well has a diameter of 1300 mm and a thickness of 80 mm. Determine its weight in kg if the density of the concrete is 2545 kg/m3 • Cuboid 5 Calculate the mass of the cuboid with dimensions of 12 cm; 0.8 dm and 100 mm made from spruce wood (density = 550 kg/m3). • Height as diameter of base The rotary cylinder has a height equal to the base diameter and a surface of 471 cm2. Calculate the volume of a cylinder.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450872898101807, "perplexity": 1778.899009115731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00413.warc.gz"}
http://science.sciencemag.org/content/207/4429/415
Reports # Saturn's Magnetosphere, Rings, and Inner Satellites See allHide authors and affiliations Science  25 Jan 1980: Vol. 207, Issue 4429, pp. 415-421 DOI: 10.1126/science.207.4429.415 ## Abstract Our 31 August to 5 September 1979 observations together with those of the other Pioneer 11 investigators provide the first credible discovery of the magnetosphere of Saturn and many detailed characteristics thereof. In physical dimensions and energetic charged particle population, Saturn's magnetosphere is intermediate between those of Earth and Jupiter. In terms of planetary radii, the scale of Saturn's magnetosphere more nearly resembles that of Earth and there is much less inflation by entrapped plasma than in the case at Jupiter. The orbit of Titan lies in the outer fringes of the magnetosphere. Particle angular distributions on the inbound leg of the trajectory (sunward side) have a complex pattern but are everywhere consistent with a dipolar magnetic field approximately perpendicular to the planet's equator. On the outbound leg (dawnside) there are marked departures from this situation outside of 7 Saturn radii (Rs), suggesting an equatorial current sheet having both longitudinal and radial components. The particulate rings and inner satellites have a profound effect on the distribution of energetic particles. We find (i) clear absorption signatures of Dione and Mimas; (ii) a broad absorption region encompassing the orbital radii of Tethys and Enceladus but probably attributable, at least in part, to plasma physical effects; (iii) no evidence for Janus (1966 S 1) (S 10) at or near 2.66 Rs; (iv) a satellite of diameter ≳ 170 kilometers at 2.534 Rs (1979 S 2), probably the same object as that detected optically by Pioneer 11 (1979 S 1) and previously by groundbased telescopes (1966 S 2) (S 11); (v) a satellite of comparable diameter at 2.343 Rs (1979 S 5); (vi) confirmation of the F ring between 2.336 and 2.371 Rs; (vii) confirmation of the Pioneer division between 2.292 and 2.336 Rs; (viii) a suspected satellite at 2.82 Rs (1979 S 3); (ix) no clear evidence for the E ring though its influence may be obscured by stronger effects; and (x) the outer radius of the A ring at 2.292 Rs. Inside of 2.292 Rs there is a virtually total absence of magnetospheric particles and a marked reduction in cosmic-ray intensity. All distances are in units of the adopted equatorial radius of Saturn, 60,000 kilometers.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099508881568909, "perplexity": 3621.9604701736257}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00139.warc.gz"}
http://math.stackexchange.com/questions/103898/need-help-with-multi-variable-calculus?answertab=active
# Need help with multi-variable calculus I have an expression of the form: $$\int\limits_{-\infty}^{\infty}\exp \left\{ \frac{-1}{2\sigma^2} (x-\mu)^2 \right\} dx = \sqrt{2\pi\sigma^2}$$ and I need to take its derivative with respect to $\sigma^2$. The right hand side seems easy enough: $$\frac{\partial}{\partial\sigma^2}{(2\pi\sigma^2)}^{\frac{1}{2}} = \frac{1}{2}\sqrt{2\pi}(\sigma^2)^{\frac{-1}{2}},$$ right? What about the left hand side? I don't know how to take the derivative of something that's inside an integral -- can someone please help? I'm more interested in understanding how it would work then just the answer itself. EDIT: corrected RHS. EDIT 2: Through applying the chain rule, the LHS becomes: $$\int\limits_{-\infty}^\infty \frac{1}{2}(x-\mu)^2\frac{1}{\sigma^4} \exp{\lbrace \frac{-1}{2\sigma^2} (x-\mu)^2 \rbrace } dx$$ Does this look right? I'm unsure of how to handle the derivative of a function w.r.t to a squared variable (e.g. $\frac{\partial}{\partial\sigma^2}$ as opposed to $\frac{\partial}{\partial\alpha}$). For example, is this true: $\frac{\partial}{\partial\sigma^2} \sigma^{-2} = -\sigma^{-4}$ ? - RHS is still not correct. You forgot to take the $1/2$ down from the exponent into the front (the power rule is $(x^n)'=nx^{n-1}$ in case you forgot). Additionally, $2\pi$ isn't part of the variable so it doesn't go under the negative exponent; it stays as-is. – anon Jan 30 '12 at 10:30 Thank you. It's been a long time since I've had to do any calculus... – misha Jan 30 '12 at 10:45 You omitted a minus sign in the argument of the exponential. – John Bentin Jan 30 '12 at 11:44 First off, the derivative you do have is incorrect. Where'd you get the extra $\pi$? Use the power rule: $$\frac{\partial}{\partial\,\sigma^2}\sqrt{2\pi\sigma^2}=\frac{1}{2}\sqrt{2\pi}(\sigma^2)^{-1/2}.$$ $$\frac{\partial}{\partial\,\sigma^2}\int\text{blah}\,dx=\int\frac{\partial}{\partial\,\sigma^2}\text{blah}\,dx.$$ One can justify this by putting the integral into the limit definition of the partial derivative and then using the linearity of integration to pull the quotient inside the $\int$. To evaluate from here you need to use the chain rule, power rule, and know the derivative of the exponential function. Good luck...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9346907138824463, "perplexity": 226.2393778500853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701314683/warc/CC-MAIN-20130516104834-00014-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/why-would-his-weight-be-zero-at-the-moment-of-the-fall.870097/
# Homework Help: Why would his weight be zero at the moment of the fall? 1. May 2, 2016 ### alaa amed 1. The problem statement, all variables and given/known data A person of weight w is in an upward-moving elevator when the cable suddenly breaks. What is the person's apparent weight immediately after the elevator starts to fall? 2. May 2, 2016 ### Staff: Mentor How would you personally define the term "apparent weight?" 3. May 2, 2016 ### alaa amed I think that's the key to answering the question, though I am not sure I interpreted properly. I think it means the weight relative to the force of gravity that acts on it in particular instance. 4. May 2, 2016 ### Staff: Mentor That's not correct. It means that, if he was standing on a scale, what the scale would read (i.e., the normal force the person would be exerting on the scale, and, by Newton's 3rd law, the normal force the scale would be exerting on the person). That's the definition of his apparent weight. So, what is the normal force that the scale is exerting on the person if the elevator cable has been cut? 5. May 2, 2016 ### alaa amed I think the person and the elevator would be free falling an so there would be no contact force? 6. May 2, 2016 ### Staff: Mentor Yes. That is correct. So what does that mean regarding the "apparent weight" of the person, considering the apparent weight is equal to the contact force. 7. May 2, 2016 ### alaa amed It will be zero! Thank you so much for your help. I get it now.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686648011207581, "perplexity": 952.506630788708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.71/warc/CC-MAIN-20180526112331-20180526132331-00338.warc.gz"}
http://mathhelpforum.com/advanced-algebra/92529-proving-existence-unitary-operator.html
# Thread: Proving the existence of a unitary operator... 1. ## Proving the existence of a unitary operator... Let $A$ be a linear operator on a finite-dimensional unitary space $V$ such that, for every $x,y \in V$ the following implication holds: $(x \perp y) \Rightarrow (Ax \perp Ay)$. Prove that then there exists a scalar $\alpha$ and a unitary operator $U$ (where $U:V \rightarrow V$) such that $A= \alpha U$. ****** I'm aware of the following facts: if an operator $U$ is unitary, and $U:V \rightarrow V$, then for every $x, y \in V$ we have $=$ Also, on a unitary space $V$ the following is true for every $x, y \in V$: $x \perp y \Leftrightarrow =0$ And finally, if $U$ is a unitary operator, then if $\{ e_1,...e_n\}$ is an orthonormal basis for $V$, then $\{ U e_1,...U e_n\}$ is also an orthonormal basis for $V$. ****** But so far, I've had no luck in employing these to solve this problem, so would greatly appreciate your help! 2. Originally Posted by gusztav Let $A$ be a linear operator on a finite-dimensional unitary space $V$ such that, for every $x,y \in V$ the following implication holds: $(x \perp y) \Rightarrow (Ax \perp Ay)$. Prove that then there exists a scalar $\alpha$ and a unitary operator $U$ (where $U:V \rightarrow V$) such that $A= \alpha U$. ****** I'm aware of the following facts: if an operator $U$ is unitary, and $U:V \rightarrow V$, then for every $x, y \in V$ we have $=$ Also, on a unitary space $V$ the following is true for every $x, y \in V$: $x \perp y \Leftrightarrow =0$ And finally, if $U$ is a unitary operator, then if $\{ e_1,...e_n\}$ is an orthonormal basis for $V$, then $\{ U e_1,...U e_n\}$ is also an orthonormal basis for $V$. ****** But so far, I've had no luck in employing these to solve this problem, so would greatly appreciate your help! i googled "unitary vector space" to see what exactly it is and then i found out that it's just an old name for a much more familiar name Hilbert space. anyway, first it's clear that we may assume that $A \neq 0.$ let $\{e_1, e_2, \cdots , e_n \}$ be an orthonormal basis for $V.$ i claim that $||Ae_1||=||Ae_2|| = \cdots = ||Ae_n||$ : we have $=0,$ for all $i \neq j.$ thus, by the problem hypothesis, we must also have $||Ae_i||^2-||Ae_j||^2==0,$ which proves the claim. let $\alpha=||Ae_1||$ and $U=\frac{1}{\alpha}A.$ it's easy to see that $=,$ for all $x,y \in V.$ Q.E.D.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 51, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877526760101318, "perplexity": 79.30276032658931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00774.warc.gz"}
https://chasingtheinaccessible.wordpress.com/2013/05/10/dlos-i-existence-of-the-rationals/
# DLO’s I – existence of the rationals One of the very first things we learn at higher math education is the different kinds of number sets, e.g. the naturals, rationals and reals. These are however just taken for granted, and we merely assume that these actually exist and are well-defined. I’ll spend the next few blog posts to characterize the rationals and the reals from facts known from the natural numbers alone. This post will be dedicated to proving the existence of the rationals, i.e. the theorem: Theorem. There exist a countable dense linear ordering without endpoints. Proof. Define $S:=\omega\times\omega\backslash 1$ and the relation $\sim\in S\times S$ given by $(a,b)\sim(c,d)\Leftrightarrow ad=bc$. Claim 1. $\sim$ is an equivalence relation. Proof of claim. Reflexivity and symmetry follows directly from commutativity of $\cdot$ in $\left<\mathbb{N},+,\cdot\right>$. Transitivity: Assuming $(a,b)\sim(c,d)$ and $(c,d)\sim(e,f)$, meaning $ad=bc$ and $cf=de$ by definition. Now if $c=0$ then $a=e=0$ by necessity as well and we’re done. Assume thus $c\neq 0$, which then implies $adcf=bcde\Leftrightarrow afdc=bedc \stackrel{(\dagger)}{\Leftrightarrow} af=be\Leftrightarrow(a,b)\sim(e,f)$, where the assumption was used at $(\dagger)$. $\lozenge$ Now define $[(a,b)]_\sim$ for the equivalence class of $(a,b)$ and define $B:=S/\sim:=\{[(a,b)]_\sim\mid(a,b)\in S\}$. These will resemble our intuition of fractions, such that for instance 1/2=2/4, which is stated as $[(1,2)]_\sim=[(2,4)]_\sim$. We now want to define order on our rationals: Claim 2. $[(a,b)]_\sim\prec_B [(c,d)]_\sim\Leftrightarrow ad defines a relation $\prec_B$ on $B$. Proof of claim. We need to show that the relation is independent of the choice of representatives. Let thus $(a',b'),(c',d')\in S$ be such that $(a,b)\sim(a',b')$ and $(c,d)\sim(c',d')$. We have to show that $ad. Since $(a,b)\sim(a',b')$ then $ab'=ba'$ and likewise $cd'=dc'$. Then $adb'c'=bca'd'$. Now it is seen that the bi-implication holds, since e.g. assuming $ad then by necessity $a'd' since otherwise $adb'c'. $\lozenge$ Having defined the order, we show that it fits the DLO requirements and almost satisfying having no endpoints: Claim 3. $\left$ is a countable dense linear ordering with no right endpoint and $[(0,1)]_\sim$ as left endpoint. Proof of claim. Since $|B|\leq|S|\leq|\omega\times\omega|=\aleph_0$, $B$ is countable. We show that $\left$ is a DLO. Reflexivity and symmetry follows directly from commutativity of $\cdot$ in $\left<\mathbb{N},+,\cdot\right>$. Transitive: Assume $[(a,b)]_\sim\prec_B[(c,d)]_\sim$ and $[(c,d)]_\sim\prec_B[(e,f)]_\sim$, meaning $ad and $cf. Then $adcf. Linearity: Assume $[(a,b)]_\sim\npreceq_B[(c,d)]_\sim$. It has to be shown that $[(c,d)]_\sim\prec[(a,b)]_\sim$. By definition, $[(a,b)]_\sim\nprec_B[(c,d)]_\sim\Leftrightarrow ad\nless bc$ and $[(a,b)]_\sim\neq[(c,d)]_\sim$ $\Leftrightarrow ad\neq bc$. Thus by totality of <, $ad>bc\Leftrightarrow [(c,d)]_\sim\prec_B[(a,b)]_\sim$. Density: Assume $[(a,b)]_\sim\prec_B[(c,d)]_\sim\Leftrightarrow ad. Since both $2ad$ and $2bc$ are even numbers, there exist an odd number $2ad+1$ satisfying $2ad<2ad+1<2bc$. Then $2ad2bd<(2ad+1)2bd$ as well, implying $[(2ad,2bd)]_\sim\prec_B[(2ad+1,2bd)]_\sim$ and thus $[(a,b)]_\sim\prec_B[(2ad+1,2bd)]_\sim$ and likewise to show that $[(2ad+1,2bd)]_\sim\prec_B[(c,d)]_\sim$. Lastly we have to prove the facts about the endpoints. It has no right endpoint since given any $[(a,b)]_\sim$ then $[(a,b)]_\sim\prec_B[(a+1,b)]_\sim$ since $ab. Assume now that $[(0,1)]_\sim$ is not the left endpoint. Then there is $[(a,b)]_\sim$ satisfying $[(a,b)]_\sim\prec_B[(0,1)]_\sim$, meaning $1\cdot a, but $a\in\omega$, contradiction. $\lozenge$ Now define $A:=B\backslash[(0,1)]_\sim$ and let $\prec_A:=\prec_B\upharpoonright A\times A$. Following intuition, this should fix our problem with the left endpoint – and indeed, our intuition is correct, as the following claim shows: Claim 4. $\left$ is a countable dense linear ordering with no endpoints. Proof of claim. Clearly it is a DLO with no right endpoint as removing a single element doesn’t change these properties. Now given $[(a,b)]_\sim$ we have $[(a,b+1)]_\sim\prec_A[(a,b)]_\sim$ since $b, due to $a\neq 0$. $\lozenge$ Now technically we’ve proven our stated theorem, but we’d like to show our usual idea of the rationals also exists, which includes “the negative fractions”. This requires a reversed order, which motivates the following claim: Claim 5. $\left< A,\succ_A\right>$ is a countable dense linear ordering with no endpoints. Proof of claim. Only the order is reversed, so countability and DLO properties still hold. Only change is that $[(0,1)]_\sim$ is now the right endpoint instead of left, but an argument analogous to that in claim 4 shows that it has no endpoints as well. $\lozenge$ Now finally define the well-known rationals $\left<\mathbb{Q},\prec_\mathbb{Q}\right>:=\left^\frown\left$, which is the two orderings concatenated after each other (juxtaposition). We show the final claim that this in fact fits the requirement: Claim 6. $\left<\mathbb{Q},\prec_\mathbb{Q}\right>$ is a countable dense linear ordering with no endpoints. Proof of claim. Since $|\omega\cup\omega|=\aleph_0$, $\mathbb{Q}$ is countable due to $|A\cup B|\leq|\omega\cup\omega|$. Since $\left$ is a DLO without endpoints by claim 5 and $\left$ is a DLO without a right endpoint and $[(0,1)]_\sim$ as left endpoint by claim 3, then $\left<\mathbb{Q},\prec_\mathbb{Q}\right>$ has no endpoints and is a DLO as well. $\lozenge$ $\blacksquare$ And there we have it! The rationals as we know them along with their well-known properties actually exist. But the question is now if there are different “kinds” of rationals, or are they in fact unique? We already saw another DLO without endpoints in claim 4 of the proof, but is this version equivalent to our rationals? This will be the focus of the next post!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 93, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792012572288513, "perplexity": 361.87158915485827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320049.84/warc/CC-MAIN-20170623100455-20170623120455-00017.warc.gz"}
http://en.wikipedia.org/wiki/Vector_notation
# Vector notation Vector notation Vector notation,[1][2][3] this page gives an overview of the commonly used mathematical notation when working with mathematical vectors,[4] which may be geometric vectors or abstract members of vector spaces. For representing a vector,[5][6] the common typographic convention is upright boldface type, as in $\mathbf{v}$ for a vector named ‘v’. In handwriting, where boldface type is either unavailable or unwieldy, vectors are often represented with right-pointing arrow notation or harpoons above their names, as in $\vec{v}$. Shorthand notations include tildes and straight lines placed above or below the name of a vector. Between 1880 and 1887, Oliver Heaviside developed the operational calculus,[7][8] a method of solving differential equations by transforming them into ordinary algebraic equations which caused much controversy when introduced because of the lack of rigour in its derivation.[9] After the turn of the 20th century, Josiah Willard Gibbs would in physical chemistry supply notation for the scalar product and vector products, which was introduced in Vector Analysis. ## Rectangular vectors Rectangle Rectangular cuboid A rectangular vector is a coordinate vector specified by components that define a rectangle (or rectangular prism in three dimensions, and similar shapes in greater dimensions). The starting point and terminal point of the vector lie at opposite ends of the rectangle (or prism, etc.). ### Ordered set notation A rectangular vector in $\mathbb{R}^n$ can be specified using an ordered set of components, enclosed in either parentheses or angle brackets. In a general sense, an n-dimensional vector v can be specified in either of the following forms: • $\mathbf{v} = (v_1, v_2, \dots, v_{n - 1}, v_n)$ • $\mathbf{v} = \langle v_1, v_2, \dots, v_{n - 1}, v_n \rangle$ Where v1, v2, …, vn − 1, vn are the components of v. ### Matrix notation A rectangular vector in $\mathbb{R}^n$ can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. Again, an n-dimensional vector $\mathbf{v}$ can be specified in either of the following forms using matrices: • $\mathbf{v} = \left[ \begin{matrix} v_1 & v_2 & \cdots & v_{n - 1} & v_n \end{matrix} \right] = \left( \begin{matrix} v_1 & v_2 & \cdots & v_{n - 1} & v_n \end{matrix} \right)$ • $\mathbf{v} = \left[ \begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_{n - 1} \\ v_n \end{matrix} \right]= \left( \begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_{n - 1} \\ v_n \end{matrix} \right)$ Where v1, v2, …, vn − 1, vn are the components of v. In some advanced contexts, a row and a column vector have different meaning; see covariance and contravariance of vectors. ### Unit vector notation A rectangular vector in $\mathbb{R}^3$ (or fewer dimensions, such as $\mathbb{R}^2$ where vz below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in $\mathbb{R}^3$. The basis is represented with the unit vectors $\boldsymbol{\hat{\imath}} = (1, 0, 0)$, $\boldsymbol{\hat{\jmath}} = (0, 1, 0)$, and $\boldsymbol{\hat{k}} = (0, 0, 1)$. A three-dimensional vector v can be specified in the following form, using unit vector notation: • $\mathbf{v} = v_x \boldsymbol{\hat{\imath}} + v_y \boldsymbol{\hat{\jmath}} + v_z \boldsymbol{\hat{k}}$ Where vx, vy, and vz are the magnitudes of the components of v. ## Polar vectors Points in the polar coordinate system with pole O and polar axis L. In green, the point with radial coordinate 3 and angular coordinate 60 degrees, or (3,60°). In blue, the point (4,210°). A polar vector is a vector in two dimensions specified as a magnitude (or length) and a direction (or angle). It is akin to an arrow in the polar coordinate system. The magnitude, typically represented as r, is the length from the starting point of the vector to its endpoint. The angle, typically represented as θ (the Greek letter theta), is measured as the offset from the horizontal (or a line collinear with the x-axis in the positive direction). The angle is typically reduced to lie within the range $0 \le \theta < 2\pi$ radians or $0 \le \theta < 360^{\circ}$. ### Ordered set and matrix notations Polar vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components) or matrix notation, as with rectangular vectors. In these forms, the first component of the vector is r (instead of v1) and the second component is θ (instead of v2). To differentiate polar vectors from rectangular vectors, the angle may be prefixed with the angle symbol, $\angle$. A two-dimensional polar vector v can be represented as any of the following, using either ordered pair or matrix notation: • $\mathbf{v} = (r, \angle \theta)$ • $\mathbf{v} = \langle r, \angle \theta \rangle$ • $\mathbf{v} = \left[ \begin{matrix} r & \angle \theta \end{matrix} \right]$ • $\mathbf{v} = \left[ \begin{matrix} r \\ \angle \theta \end{matrix} \right]$ Where r is the magnitude, θ is the angle, and the angle symbol ($\angle$) is optional. ### Direct notation Polar vectors can also be specified using simplified autonomous equations that define r and θ explicitly. This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units and whose direction is π/9 radians (20°) can be specified using either of the following forms: • $r=5, \ \theta={\pi \over 9}$ • $r=5, \ \theta=20^{\circ}$ ## Cylindrical vectors A cylindrical coordinate system with origin O, polar axis A, and longitudinal axis L. The dot is the point with radial distance ρ = 4, angular coordinate φ = 130°, and height z = 4. A cylindrical vector is an extension of the concept of polar vectors into three dimensions. It is akin to an arrow in the cylindrical coordinate system. A cylindrical vector is specified by a distance in the xy-plane, an angle, and a distance from the xy-plane (a height). The first distance, usually represented as r or ρ (the Greek letter rho), is the magnitude of the projection of the vector onto the xy-plane. The angle, usually represented as θ or φ (the Greek letter phi), is measured as the offset from the line collinear with the x-axis in the positive direction; the angle is typically reduced to lie within the range $0 \le \theta < 2\pi$. The second distance, usually represented as h or z, is the distance from the xy-plane to the endpoint of the vector. ### Ordered set and matrix notations Cylindrical vectors are specified like polar vectors, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol ($\angle$); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation. A three-dimensional cylindrical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf{v} = (r, \angle \theta, h)$ • $\mathbf{v} = \langle r, \angle \theta, h \rangle$ • $\mathbf{v} = \left[ \begin{matrix} r & \angle \theta & h \end{matrix} \right]$ • $\mathbf{v} = \left[ \begin{matrix} r \\ \angle \theta \\ h \end{matrix} \right]$ Where r is the magnitude of the projection of v onto the xy-plane, θ is the angle between the positive x-axis and v, and h is the height from the xy-plane to the endpoint of v. Again, the angle symbol ($\angle$) is optional. ### Direct notation A cylindrical vector can also be specified directly, using simplified autonomous equations that define r (or ρ), θ (or φ), and h (or z). Consistency should be used when choosing the names to use for the variables; ρ should not be mixed with θ and so on. A three-dimensional vector, the magnitude of whose projection onto the xy-plane is 5 units, whose angle from the positive x-axis is π/9 radians (20°), and whose height from the xy-plane is 3 units can be specified in any of the following forms: • $r=5, \ \theta={\pi \over 9}, \ h=3$ • $r=5, \ \theta=20^{\circ}, \ h=3$ • $\rho=5, \ \phi={\pi \over 9}, \ z=3$ • $\rho=5, \ \phi=20^{\circ}, \ z=3$ ## Spherical vectors Spherical coordinates (r, θ, φ) as often used in mathematics: radial distance r, azimuthal angle θ, and polar angle φ. The meanings of θ and φ have been swapped compared to the physics convention. A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as ρ. The azimuth angle, usually represented as θ, is the offset from the line collinear with the x-axis in the positive direction. The zenith angle, usually represented as φ, is the offset from the line collinear with the z-axis in the positive direction. Both angles are typically reduced to lie within the range from zero (inclusive) to 2π (exclusive). ### Ordered set and matrix notations Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may be both prefixed with the angle symbol ($\angle$); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones. A three-dimensional spherical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf{v} = (\rho, \angle \theta, \angle \phi)$ • $\mathbf{v} = \langle \rho, \angle \theta, \angle \phi \rangle$ • $\mathbf{v} = \left[ \begin{matrix} \rho & \angle \theta & \angle \phi \end{matrix} \right]$ • $\mathbf{v} = \left[ \begin{matrix} \rho \\ \angle \theta \\ \angle \phi \end{matrix} \right]$ Where ρ is the magnitude, θ is the azimuth angle, and φ is the zenith angle. ### Direct notation Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for ρ, θ, and φ. A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is π/9 radians (20°), and whose zenith angle is π/4 radians (45°) can be specified as: • $\rho=5, \ \theta={\pi \over 9}, \ \phi={\pi \over 4}$ • $\rho=5, \ \theta=20^{\circ}, \ \phi=45^{\circ}$ ## Operations In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In $\mathbb{R}^n$, the inner product is known as the dot product. In $\mathbb{R}^3$ and $\mathbb{R}^7$, an additional operation known as the cross product is also defined. Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as: $\mathbf{u} + \mathbf{v}$ ### Scalar multiplication Scalar multiplication is represented in the same manners as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar c with a vector v can be represented in any of the following fashions: • $c \mathbf{v}$ • $c \cdot \mathbf{v}$ • $c \times \mathbf{v}$ #### Vector subtraction and scalar division Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to “subtract” two vectors and “divide” a vector by a scalar. Vector subtraction is performed by adding the scalar multiple of −1 with the second vector operand to the first vector operand. This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented in either of the following fashions: • $\mathbf{u} + -\mathbf{v}$ • $\mathbf{u} - \mathbf{v}$ Scalar division is performed by multiplying the vector operand with the numeric inverse of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators. The quotient of a vector v and a scalar c can be represented in any of the following forms: • ${1 \over c} \mathbf{v}$ • ${\mathbf{v} \over c}$ • ${\mathbf{v} \div c}$ ### Norm The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as: $\|\mathbf{v}\|$ The norm is also sometimes represented with single bars, like $|\mathbf{v}|$, but this can be confused with absolute value (which is a type of norm). ### Inner product The inner product (also known as the scalar product, not to be confused with scalar multiplication) of two vectors is represented as an ordered pair enclosed in angle brackets. The inner product of two vectors u and v would be represented as: $\langle \mathbf{u}, \mathbf{v} \rangle$ #### Dot product In $\mathbb{R}^n$, the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as: $\mathbf{u} \cdot \mathbf{v}$ In some older literature, the dot product is implied between two vectors written side-by-side. This notation can be confused with the dyadic product between two vectors. ### Cross product The cross product of two vectors (in $\mathbb{R}^3$) is represented using the rotated cross as an operator. The cross product of two vectors u and v would be represented as: $\mathbf{u} \times \mathbf{v}$ In some older literature, the following notation is used for the cross product between u and v: $[\mathbf{u},\mathbf{v}]$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 64, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9849202036857605, "perplexity": 525.6274607939454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826679.55/warc/CC-MAIN-20140820021346-00433-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-mass-of-the-cup.199353/
# What is the mass of the cup? 1. Nov 19, 2007 ### jincy34 1. The problem statement, all variables and given/known data A 105.05 g ice cube at -15°C is placed in an aluminum cup whose initial temperature is 73°C. The system comes to an equilibrium temperature of 24°C. What is the mass of the cup? 2. Relevant equations Q=mc (delta T) 3. The attempt at a solution the sum of total heat in the system is equal to zero. Qice+Qaluminium cup=0 MiceCice(delta T)+MaluminiumCaluminium(delta T)=0 I solved for mass of aluminium cup, and got 0.1951 kg. But, it is wrong. 2. Nov 19, 2007 ### hage567 Since the ice melts, I think you need to take into account the heat of fusion, Q=mL, where m is the mass of the ice and L is the latent heat of fusion. Try that. 3. Nov 19, 2007 ### jincy34 I did, and got 0.8633 kg. But, that is wrong too. 4. Nov 19, 2007 ### hage567 You need to include terms for: the heat required to bring the ice from -15 degrees to 0 degrees the heat of fusion to melt the ice at 0 degrees to water at 0 degrees the heat required to bring the water at 0 degrees to water at the final temperature of 24 degrees This will be equal to the heat the aluminum cup loses. 5. Nov 19, 2007 ### jincy34 Thank you very much. Similar Discussions: What is the mass of the cup?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8748007416725159, "perplexity": 1341.5256719895822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319265.41/warc/CC-MAIN-20170622114718-20170622134718-00032.warc.gz"}
https://edurev.in/studytube/Physics-CBSE-Sample-Question-Paper--2020-21--1/7a5d4f54-f30b-472a-b747-91bf1ee6d8ba_t
Courses Physics: CBSE Sample Question Paper (2020-21)- 1 Notes | EduRev Class 12 : Physics: CBSE Sample Question Paper (2020-21)- 1 Notes | EduRev The document Physics: CBSE Sample Question Paper (2020-21)- 1 Notes | EduRev is a part of the Class 12 Course Sample Papers for Class 12 Medical and Non-Medical. All you need of Class 12 at this link: Class 12 Class-XII Physics Theory TIME: 3 Hrs. M.M: 70 General Instructions: (1) All questions are compulsory. There are 33 questions in all. (2) This question paper has five sections: Section A, Section B, Section C, Section D and Section E. (3) Section A contains ten very short answer questions and four assertion reasoning MCQs of 1 mark each, Section B has two case-based questions of 4 marks each, Section C contains nine short answer questions of 2 marks each, Section D contains five short answer questions of 3 marks each and Section E contains three long answer questions of 5 marks each. (4) There is no overall choice. However, an internal choice is provided. You have to attempt only one of the choices in such questions. Section A All questions are compulsory. In the case of internal choices, attempt any one of them. Q.1. The figure shows electric field lines in which an electric dipole P is placed as shown. In which direction, the dipole will experience a force? (1 Mark) Ans: We know the electric field emerges radially outward from the positive point charge. In the figure given above, space between field lines increases (or the density of the electric field line is decreasing). In other words, the electric force is decreasing while moving from left to right. Thus, the force on charge – q is greater than the force on charge + q in turn, a dipole will experience a force towards the left direction. Q.2. A capacitor of 4 μF is connected, as shown in the figure. The internal resistance of the battery is 0.5 Ω. Find the amount of charge on the capacitor plates. (1 Mark) Ans: As the capacitor offers infinite resistance for the DC circuit. The cells current will not flow across the branch of 4 mF, and 10 W. So current will flow across 2-ohm branches. So current flows through across 2 W resistance from left to right is, So Potential Difference (PD) across 2 W resistance V = RI = 2 × 1 = 2 Volt As battery, capacitor and 2 branches are in parallel. So PD will remain the same across all three branches. As current does not flow through the capacitor branch, so no potential drop will be across 10 Ω. So PD across 4 μF capacitor = 2 Volt Q = CV = 4 μF× 2 V = 8 μC. OR What is the value of the phase difference between two points on the same wavefront?   (1 Mark) Ans: Zero Detailed answer: The phase difference between two points on a wavefront is zero. The phase difference is defined as the difference in the phase angle between the two waves. Q.3. The output of a step-down transformer is measured to be 24 V when connected to a 12 W Light bulb. Find the value of the peak current. (1 Mark) Ans: Given, Power associated with secondary, Ps = 12 W Secondary voltage, Vs = 24 V Current in the secondary, I= = 0.5 A Peak value of the current in the secondary, A. Q.4. Two charged particles traverse identical helical paths in an opposite sense in a uniform magnetic field . What will be their charge to the mass ratio? (1 Mark) Ans: When the charge/mass ratio of these two particles are the same and charges on them are of opposite nature, then the charged particles will traverse identical helical paths in a opposite sense. OR Name the phenomenon which shows the quantum nature of electromagnetic radiation.   (1 Mark) Ans: Photoelectric effect and the Compton effect. Q.5. If the number of turns per unit length of a coil of a solenoid is doubled, how will its self-induction change?  (1 Mark) Ans: The self-inductance of coil or solenoid is given by where μ0 is the permeability of free space, n is the number of turns per unit length of a coil, A is the cross-sectional area of the coil, and l is the length of the coil. Q.6. What do you mean by the sensitivity of the meter bridge?  (1 Mark) Ans: The meter bridge is more sensitive when all the resistances are in the same order or their ratio is unity. Q.7. A ray of light incident at an angle q on a refracting face of a prism emerges from the other face normally. If the angle of the prism is 5° and the prism is made of a material of refractive index 1.5, what is the angle of incidence?  (1 Mark) Ans: Given that, A = 5° μ = 1.5 i= 0° r2 = 0° As we know, Since, From Snell’s law : ∴ i1 angle of incidence = 7.5°. OR What is the maximum number of spectral lines emitted by a hydrogen atom in the third excited state?  (1 Mark) Ans: If n is the quantum number of the highest energy level, then the total number of possible spectral lines emitted is Here, third excited state means fourth energy level, i.e., n = 4 ∴ N = 4 Q.8. A cylindrical bar magnet is rotated about its axis in the figure. A wire is connected from the axis and is made to touch the cylindrical surface through contact. Then, find the amount of current flowing through the ammeter. (1 Mark) Ans: The phenomenon of electromagnetic induction is used in this problem. Whenever the number of magnetic lines of force (magnetic flux) passing through a circuit changes (or a moving conductor cuts the magnetic flux), an emf is produced in the circuit (or emf induces across the ends of the conductor), is called induced emf. The induced emf persists only as long as there is a change or cutting of flux. When a cylindrical bar magnet is rotated about its axis, no change in flux is linked with the circuit. Consequently, no emf induces, and hence, no current flows through the ammeter A. Hence the ammeter shows no deflection. OR Find the ratio of de-Broglie wavelengths associated with two electrons accelerated through 25 V and 36 V.  (1 Mark) Ans: V1 = 25 V, V2 = 36 V the de-Broglie wavelength of an electron : Q.9. Name the phenomenon which shows the quantum nature of electromagnetic Ans: Photoelectric Effect (Raman Effect/Compton Effect) Q.10. Name the minority charge carriers in n-type silicon.  (1 Mark) Ans: Holes are the minority carrier in n-type Silicon. For question numbers 11, 12, 13 and 14, two statements are given-one labelled Assertion (A) and the other labelled Reason (R). Select the correct answer to these questions from the codes (a), (b), (c), and (d) as given below: Q.11. Assertion (A): It is impossible to use 35Cl for fusion. Reason (R): Binding energy of 35Cl is minimal.   (1 mark) (a) Both A and R are true, and R is the correct explanation of A (b) Both A and R are true, but R is NOT the correct explanation of A (c) A is true, but R is false (d) A is false, and R is also false Ans: Correct option is (c) Explanation: Only lighter atoms are used for fusion. So, 35Cl cannot be used for Fusion. So the assertion is true. The binding energy of Chlorine binding energy is large. So the reason is false. Q.12. Assertion(A): If an electron is not deflected when moving through a certain region of space, the only possibility is that no magnetic field is present. Reason (R): Force on the electron is directly proportional to the strength of the magnetic field  (1 mark) (a) Both A and R are true, and R is the correct explanation of A (b) Both A and R are true, but R is NOT the correct explanation of A (c) A is true, but R is false (d) A is false, and R is also false Ans: Correct option is (a) Explanation: In the absence of a magnetic field, moving electron will not be deflected. This possibility is true. So, the assertion is true. So, the electron's force is directly proportional to the magnetic field's strength. So, the reason is true. Reason properly explain the assertion. Q.13. Assertion (A): When the magnetic flux changes around a metallic conductor, the eddy current is produced. Reason (R): Electric potential determines the flow of charge.  (1 mark) (a) Both A and R are true, and R is the correct explanation of A (b) Both A and R are true, but R is NOT the correct explanation of A (c) A is true, but R is false (d) A is false, and R is also false Ans: Correct option is (b) Explanation: Change in flux induces emf in the conductor, which generates eddy current. So the assertion is true. Electric potential determines the flow of charge. So, the reason is also true. But the reason is not the proper explanation of the generation of eddy current. Q.14. Assertion (A): Magnetic poles cannot be separated by breaking a bar magnet into two pieces. Reason (R): When a magnet is broken into two pieces, the magnetic moment will be reduced to half.  (1 mark) (a) Both A and R are true, and R is the correct explanation of A (b) Both A and R are true, but R is NOT the correct explanation of A (c) A is true, but R is false (d) A is false, and R is also false Ans: Correct option is (b) Explanation: Magnetic poles always exist in pairs, even at the atomic level. So the assertion is true. When a magnet is broken into two pieces, the pole strength remains the same; only the length becomes half. So, the magnetic moment becomes half. So, the reason is also true. But R is not the proper explanation of A. Section B Questions 15 and 16 are Case Study based questions and are compulsory. Attempt any 4 subparts from each question. Each question carries 1 mark. Q.15. Faster and smaller: The future of computer technology The Integrated Chip (IC) is at the heart of all computer systems. In fact, ICs are found in almost all electrical devices like cars, televisions, CD players, cell phones etc. The miniaturisation that made the modern personal computer possible could never have happened without the IC. IC's are electronic devices that contain many transistors, resistors, capacitors, connecting wires – all in one package. You must have heard of the microprocessor. The microprocessor is an IC that processes all information in a computer, like keeping track of what keys are pressed, running programmes, games etc. The IC was first invented by Jack Kilby at Texas Instruments in 1958, and he was awarded Nobel Prize for this in 2000. IC's are produced on a piece of semiconductor crystal (or chip) by a photolithography process. Thus, the entire Information Technology (IT) industry hinges on semiconductors. Over the years, the complexity of ICs has increased while the size of its features continued to shrink. In the past five decades, dramatic miniaturisation in computer technology has made modern-day computers faster and smaller. In the 1970s, Gordon Moore, co-founder of INTEL, pointed out that the memory capacity of a chip (IC) approximately doubled every one and a half years. This is popularly known as Moore’s law. The number of transistors per chip has risen exponentially, and each year, computers are becoming more powerful yet cheaper than the year before. It is intimated from current trends that the computers available in 2020 will operate at 40 GHz (40,000 MHz) and would be much smaller, more efficient and less expensive than present-day computers. A famous quote from Gordon Moore best expresses the explosive growth in the semiconductor industry and computer technology: “If the auto industry advanced as rapidly as the semiconductor industry, a Rolls Royce would get half a million miles per gallon, and it would be cheaper to throw it away than to park it”. 1. Full form of IC is: (1 mark) (a) Indigenous circuit (b) Improved chip (c) Isolated circuit (d) Integrated chip Ans: Correct option is (d) Explanation: The full form of IC is an integrated chip or Integrated Circuit. 2. IC was first invented by (1 mark) (a) Isaac Newton (b) W. H. Schottky (c) Charles Babbage (d) Jack Kilky Ans: Correct option is (d) Explanation: The IC was first invented by Jack Kilby at Texas Instruments in 1958, and he was awarded Nobel Prize for this in 2000. 3. Moor’s Law states that: (1 mark) (a) The memory capacity of a chip (IC) approximately doubles every one and a half years (b) The packing density doubles every year. (c) The memory capacity of a chip (IC) approximately doubles every two and a half years (d) Operational frequency of computer doubles every one and a half years. Ans: Correct option is (a) Explanation: In the past five decades, dramatic miniaturisation in computer technology has made modern-day computers faster and smaller. In the 1970s, Gordon Moore, co-founder of INTEL, pointed out that a chip (IC) memory capacity approximately doubled every one and a half years. This is popularly known as Moore’s law. 4. Which statement is correct?  (1 mark) (a) IC's contain many transistors, resistors, capacitors, connecting wires. (b) IC's contain many transistors, resistors, capacitors, inductors, connecting wires (c) IC's contain many transistors, capacitors, inductors, connecting wires. (d) IC's contain many transistors, resistors, capacitors, crystal oscillator, connecting wires. Ans: Correct option is (a) Explanation: IC's are electronic devices that contain many transistors, resistors, capacitors, connecting wires – all in one package. It does not contain any inductor or crystal oscillator. 5. IC's are produced on a piece of semiconductor crystal by a process called (1 mark) (a) X-ray imaging (b) Ultrasonography (c) Magnetic resonance imaging (d) photolithography Ans: Correct option is (d) Explanation: IC's are produced on a piece of semiconductor crystal (or chip) by a photolithography process. Q.16. Photocell: A photocell is a technological application of the photoelectric effect. It is a device whose electrical properties are affected by light. It is also sometimes called an electric eye. A photocell consists of a semi-cylindrical photo-sensitive metal plate C (emitter) and a wire loop A (collector) supported in an evacuated glass or quartz bulb. It is connected to the external circuit having a high-tension battery B and microammeter (μA) as shown in the Figure. Sometimes, instead of plate C, a thin layer of photosensitive material is pasted on the inside of the bulb. A part of the bulb is left clean for the light to enter. When the light of suitable wavelength falls on the emitter C, photoelectrons are emitted. These photoelectrons are drawn to the collector A. Photocurrent of a few microampere order can be normally obtained from a photocell. A photocell converts a change in intensity of illumination into a photocurrent change. This current can be used to operate control systems and light measuring devices. 1. Photocell is also known as  (1 mark) (a) Electric sense (b) Electric eye (c) Photo emitter (d) Photo transducer Ans: Correct option is (b) Explanation: A photocell is a technological application of the photoelectric effect. It is a device whose electrical properties are affected by light. It is also sometimes called an electric eye. 2. A photocell consists of (1 mark) (a) a semi-cylindrical photo-sensitive metal plate called emitter and a wire loop called collector (b) A metal cylinder called emitter and a filament called the collector. (c) Two semi-cylindrical photo-sensitive metal plates – one is called emitter, and the other is called collector (d) A wire mesh called emitter and a photosensitive wire loop called collector Ans: Correct option is (a) Explanation: A photocell consists of a semi-cylindrical photosensitive metal plate C (emitter) and a wire loop A (collector) supported in an evacuated glass or quartz bulb. 3. Which of the following statements is true?  (1 mark) (a) The photocell is totally painted black (b) A part of the photocell is left clean (c) The photocell is completely transparent. (d) A part of the photocell is made black Ans: Correct option is (b) Explanation: A part of the bulb is left clean for the light to enter. 4. The photocurrent generated is in the order of  (1 mark) (a) Ampere (b) Milliampere (c) Microampere (d) None of the above Ans: Correct option is (c) Explanation: Photocurrent of the order of a few microamperes can be normally obtained from a photocell. 5. A photocell converts a change in ________ of incident light into a change in ___________.  (1 mark) (a) Intensity, photo-voltage (b) Wavelength, photo-voltage (c) Frequency, photo-current (d) Intensity, photo-current Ans: Correct option is (d) Explanation: A photocell converts a change in intensity of illumination into a photocurrent change. Section C All questions are compulsory. In the case of internal choices, attempt any one. Q.17. State the underlying principle of a transformer. How is the large scale transmission of electric energy over long distances done using transformers?   (2 Mark) Ans: A transformer is based on the principle of mutual induction, which states that due to continuous change in the current in the primary coil, an emf gets induced across the secondary coil. Electric power generated at the power station is stepped-up to very high voltages using a step-up transformer and transmitted to a distant place. At receiving end, it is stepped down by a step-down transformer. Q.18. Given a uniform electric field  N/C. Find the flux of this field through a square of a side 10 cm on a side whose plane is parallel to the y-z plane. What would be the flux through the same square if the plane makes a 30° angle with the x-axis?   (2 Mark) Ans: Given : N/C along (+) positive direction of x-axis. Surface area, A = 10 cm × 10 cm = 0.10 m × 0.10 m = 10–2 m2 (i) In case of plane parallel to y-z plane, normal to plane will be along x-axis, so ϕ = 0° Electric flux will be calculated using ϕ =cos θ 5 × 103 × 10–2 × cos 0° = 50 Nm2 /C (ii) Since the plane is making an angle of 30° with the x-axis, so normal to its plane will make 60° with the x-axis, so θ = 60° Now finding Electric flux again with ϕ =  cos θ = 5 × 103 × 10–2 × cos 60° = 25 Nm2 /C OR Why do we prefer carbon brushes to copper in an ac generator?   (2 Mark) Ans: The carbon brushes used in the generator are corrosion-free. On small expansion on heating, it maintains the proper contact as well. Q.19. If magnetic monopoles existed, how would the Gauss law of magnetism be modified?   (2 Mark) Ans: Gauss law of magnetism describes that divergence of the magnetic field will be zero while divergence of the electric field is not zero, which shows the non-existence of magnetic monopole. As per Gauss law of magnetism, If monopole exists, then the right side will be equal to the monopole multiplied by μ0. Detailed Answer: According to the Gauss law of magnetism, (Integral form) (Differential form) If magnetic monopoles exist, then Gauss's law for magnetism would be modified as : (Integral form) (Differential form) Where, ρm = magnetic charge density μ0 = permeability of free space. OR Calculate the amount of work done to dissociate a system of three charges 1 μC, 1 μC and – 4 μC placed   (2 Mark) Ans: Given : Q.20. The battery remains connected to a parallel plate capacitor, and a dielectric slab is inserted between the plates. What will be the effect on its (i) potential difference (ii) capacity (iii) electric field and (iv) energy stored   (2 Mark) Ans: When a battery remains connected, (i) the potential difference V remains constant (ii) capacity C increases (iii)the electric field will remain the same (iv) energy stored 1/2 CV2 increases as C increases Detailed Answer: When a battery remains connected to a parallel plate capacitor and if a dielectric slab is inserted between the capacitor plates, then (i) there will be no change in the potential difference as the capacitor remained connected with the battery. (ii) capacity or capacitance will increase since with the introduction of the dielectric slab, the capacitor's capacitance will result in C =  where K > 1, increasing C. (iii)Electric field will remain the same as there will be no change in potential difference and distance between the plates. (iv) Energy stored will be increased since from the expression U = 1/2 CV2, potential difference V remains the same while C increases, which finally increases the capacitor's energy. Q.21. The diagram below shows a potentiometer set up. The galvanometer pointer deflects to the left on touching the jockey near the end X of the potentiometer wire. On touching the jockey near to end Y of the potentiometer, the galvanometer pointer again deflects to the left but now by a larger amount. Identify the fault in the circuit and explain how it leads to such a one-sided deflection using appropriate equations or otherwise.   (2 Mark) Ans: The positive of E1 is not connected to terminal X. So, VG (or deflection) will be maximum when l is maximum, i.e., when a jockey is touched near and Y. Also, VG (or deflection) will be minimum when l is minimum, i.e., when a jockey is touched near end X. Q.22. Define the distance of the closest approach. An α-particle of kinetic energy ‘K’ is bombarded on a thin gold foil. The distance of the closest approach is ‘r’. What will be the distance of the closest approach for an α-particle of double the kinetic energy?   (2 Mark) Ans: It is the distance of the charged particle from the centre of the nucleus, at which the whole of the initial kinetic energy of the (far off) charged particle gets converted into the electric potential energy of the system. 1 Distance of closest approach (rc) is given by rc 'K' is doubled, ∴ rbecomes r/2. [Alternatively: If a candidate writes directly r/2 without mentioning formula, award the 1 mark for this part.] Detailed Answer: When an α-particle is bombarded towards the nucleus, it is repelled by electrostatic repulsion. As a result, its kinetic energy is converted into electrostatic potential energy. At a certain distance (rC) between the ∝particle and nucleus, the moving particle loses all its kinetic energy and becomes stationary momentarily. This distance is known as the distance of the closest approach. In this process, the particle's total kinetic energy is converted into potential energy. Kinetic energy = K = Let rC’ be the new distance of closest approach when kinetic energy becomes 2K Q.23. Write two important limitations of Rutherford's nuclear model of the atom.   (2 Mark) Ans: (i) According to Rutherford's model, an electron orbiting around the nucleus continuously radiates energy due to the acceleration; hence the atom will not remain stable. (ii) As electron spirals inwards, its angular velocity and frequency change continuously; therefore, it will emit a continuous spectrum. Q.24. Calculate the curvature radius of an equi-concave lens of refractive index 1.5, when kept in a medium of refractive index 1.4, to have a power of –5D.  (2 Mark) Ans: Calculation of focal length Lens maker's formula Detailed Answer: Given the refractive index of the biconcave lens = 1.5 Power of biconcave lens = – 5D Refractive index of medium= 1.4. OR A circular coil of cross-sectional area 200 cm2 and 20 turns is rotated about the vertical diameter with an angular speed of 50 rad s–1 in a uniform magnetic field of magnitude 3.0 × 10–2 T. Calculate the maximum value of the current in the coil.   (2 Mark) Ans: Maximum value of emf = 600 mV Maximum induced current, Q.25. A rectangular coil of sides ‘l’ and ‘b’ carrying a current I is subjected to a uniform magnetic field, acting perpendicular to its plane. Obtain the expression for the torque acting on it.   (2 Mark) Ans: Equivalent magnetic moment of the coil, Section D All questions are compulsory. In the case of internal choices, attempt anyone. Q.26. The figure shows a metallic rod PQ of length l, resting on the smooth horizontal rails AB positioned between a permanent magnet's poles. The rails, the rod, and the magnetic field are in three mutually perpendicular directions. A galvanometer G connects the rails through a switch K. Assume the magnetic field to be uniform. Given the resistance of the closed-loop containing the rod is R. (i) Suppose K is open and the rod is moved with a speed v in the direction shown. Find the polarity and magnitude of induced emf. (ii) With K open and the rod moving uniformly, there is no net force on the rod PQ electrons even though they do experience a magnetic force due to the rod's motion. Explain. (iii) What is the induced emf in the moving rod if the magnetic field is parallel to the rails instead of perpendicular?   (3 Mark) Ans:  (i) |ε| = Bvl P is positive end Q is the negative end (ii) Magnetic force gets cancelled by electric force that generates an extra charge of opposite sign at rod ends. (iii)Induced emf is zero as the motion of rod not cutting field lines Q.27. Define electric flux and write its SI unit. The electric field components in the figure shown are : Ex = αx, Ey = 0, Ez = 0 where α = 100 N/C m Calculate the charge within the cube, assuming a = 0.1 m.   (3 Mark) Ans: Definition of Electric flux SI unit Formula (Gauss's Law) Calculation of Charge within the cube Electric Flux is the dot product of the electric field and area vector. also, accept SI Unit: Nm2 /C or volt-meter For a given case OR (i) Monochromatic light of wavelength 589 nm is incident from the air on a water surface. If μ for water is 1·33, find the refracted light's wavelength, frequency, and speed. (ii) A double convex lens is made of a glass of refractive index 1·55, with both faces of the same radius of curvature. Find the radius of curvature required if the focal length is 20 cm.   (3 Mark) Ans: Therfore, R = (20 × 1.10) cm = – 22 cm Q.28. A toroidal solenoid of mean radius 20 cm has 4000 turns of wire wound on a ferromagnetic core of relative permeability 800. Calculate the magnetic field in the core for a current of 3 A passing through the coil. How does the field change when a core of Bismuth replaces this core?   (3 Mark) Ans: Formula for the magnetic field of a toroid Calculation of magnetic field Effect of change of core = 9.6 T Since Bismuth is diamagnetic, its μr < Therefore, the magnetic field in the core will get very much reduced. Given : Mean radius of toroidal solenoid = 20 cm Number of turns of wire wound = 4000 Relative permeability of ferromagnetic core = 800 Current passing through the coil = 3 A The magnetic field in a toroid coil : As Bismuth is a diamagnetic substance with a relative permeability of less than 1, it will tend to move away from the stronger to the weak part of the external magnetic field, making the core field less than the empty core field. OR (i) How are electromagnetic waves produced? Explain. (ii) A plane electromagnetic wave travels through a medium along the +ve z-direction. Depict the electromagnetic wave showing the oscillating electric and magnetic fields' directions.   (3 Mark) Ans: (i) Production of EM wave: Electromagnetic waves consist of both electric and magnetic fields travelling through space with the speed of light c. These waves oscillate in perpendicular planes with respect to each other and are in phase. An electromagnetic wave can be created by accelerating charges, moving charges back and forth, producing oscillating electric and magnetic fields. When the accelerating charged particle moves with acceleration, magnetic and electric fields change continuously, leading to the production of electromagnetic waves. (ii) Q.29. What is relaxation time? Derive an expression for resistivity of a wire in terms of the number density of free electrons and relaxation time.   (3 Mark) Ans: Definition and Derivation. Detailed answer: (i) Relaxation time shows the effect of collisions among the electrons and ions or impurities on electrical conduction in a metal. The time is taken for the drift velocity to decay 1/e of its initial value. As drift velocity increases, relaxation time decreases since the electrons move the distance they frequently collide faster. (ii) When a potential difference V is applied across a conductor of length l, then drift speed of electron will result as : The electric current through the conductor and drift speed are linked as I = neAvd where, n = number density of electrons e = electronic charge A = area of cross-section vd = electron drift speed Q.30. (i) Three-point charges q, – 4q and 2q are placed at the vertices of a 3 equilateral triangle ABC of side ‘l’ as shown in the figure. Obtain the expression for the magnitude of the resultant electric force acting on the charge q. (ii) Find out the amount of the work done to separate the charges at infinite distance.   (3 Mark) Ans: (i) Finding the magnitude of the resultant force on charge q Force on charge q due to the charge – 4q The forces F1 and F2 are inclined to each other at an angle of 120° Hence, resultant electric force on charge q (ii) Finding the work done Net P.E. of the system Section E All questions are compulsory. In the case of internal choices, attempt any one. Q.31. (a) Why are photodiodes preferably operated under reverse bias when the current in the forward bias is known to be more than that in reverse bias? (b) The two optoelectronic devices: Photodiode and solar cell, have the same working principle but differ in terms of their operation process. Explain the difference between the two devices in terms of : (i) biasing (ii) junction area (iii) I-V characteristics.    (5 Mark) Ans: (a) The fractional change in majority charge carriers is very less compared to the the fractional change in minority charge carriers on illumination. (b) The difference in the working of two devices : Photodiode Solar cell (i) Biasing Used in reverse biasing No external biasing is given (ii) Junction Area Small Large for solar radiation to be incident on it. (iii)I-V characteristics OR (i) Using Bohr’s postulates, derive the expression for the electron's total energy in the stationary states of the hydrogen atom. (ii) Using Rydberg's formula, calculate the wavelengths of the spectral lines of the first member of the Lyman series and Balmer series.    (5 Mark) Ans: (ii) Rydberg's formula: For the first member of the Lyman series, Q.32. (i) State Faraday’s laws of electromagnetic induction. (ii) The magnetic field through a circular loop of wire 12 cm in radius and 8.5 W resistance changes with time, as shown in the figure. The magnetic field is perpendicular to the plane of the loop. Calculate the induced current in the loop and plot it as a function of time. (iii) Show that Lenz’s law is a consequence of conservation of energy   (5 Mark) Ans: (i) Faraday’s Laws of Electromagnetic Induction: Faraday’s First Law of Electromagnetic Induction states that whenever a conductor is placed in a varying magnetic field, emf is induced, which is known as induced emf. If the conductor circuit is closed, the current is also induced, called induced current. Faraday’s Second Law of Electromagnetic Induction states that the induced emf is equal to the rate of change of flux linkage where flux linkage is the product of the number of turns in flux associated with the coil. eB is the magnetic flux through the circuit as With N loops of similar area in a circuit and ϕB being the flux through a loop, then emf is induced in every loop, making Faraday law as where, e = Induced emf [V], N = number of turns in the coil ∆ϕ= change in the magnetic flux [Wb], = change in time [s] The negative sign means that e opposes its cause. (ii) (iii) Similarly : Lenz's Law: The polarity of induced emf is such that it tends to produce a current which opposes the change in magnetic flux that produced it. Explanation : When the north pole of a bar magnet is pushed towards the close coil, the magnetic flux through the coil increases and the current is induced in the coil in such a direction that it opposes the increase in flux. This is possible when the coil's induced current is in the anti-clockwise direction. The opposite will happen when the north pole is moved away from the coil. In either case, it is the work done against the force of magnetic repulsion/attraction that gets ‘converted‘ into the induced emf. So Lenz's law is a consequence of the conservation of energy. OR (a) Write the expression for the equivalent magnetic moment of a planar current loop of area A, having N turns and carrying a current i. Use the expression to find the magnetic dipole moment of a revolving electron. (b) A circular loop of radius r, having N turns and carrying current I, is kept in the XY plane. It is then subjected to a uniform magnetic field to Obtain an expression for the magnetic potential energy of the coil-magnetic field system.    (5 Mark) Ans: (a) The equivalent magnetic moment is given by μ = NiA The direction of m is perpendicular to the plane of the current-carrying loop. It is directed along the direction of advance of a right-handed screw rotated along the current flow direction. Derivation of expression for μ of an electron revolving around a nucleus For 1 revolution of electron, N = 1 Current = Putting in the expression, μ= NiA (b) for the loop, Magnetic potential energy = Q.33. (i) Derive an expression for drift velocity of electrons in a conductor. Hence deduce Ohm’s law. (ii) A wire whose cross-sectional area is increasing linearly from one end to another is connected across a V volts battery. Which of the following quantities remain constant in the wire? (a) drift speed (b) current density (c) electric current (d) electric field Ans: (i) Derivation of the expression for drift velocity Deduction of Ohm's law Let an electric field E be applied to the conductor. The acceleration of each electron is Velocity gained by the electron Let the conductor contain n electrons per unit volume. The average value of time 't', between their successive collisions, is the relaxation time, 't'. Hence average drift velocity, vd = The amount of charge, crossing area A, in time Δt, is = neAvdΔt = IΔt Substituting the value of vd, we get But I = JA, where J is the current density, This is Ohm's law [Note: Credit should be given if the student derives the alternative form of Ohm's law by substituting (ii) Name of quantity and justification (b) Current density will remain constant in the wire. All other quantities depend on the cross-sectional area of the wire. Detailed Answer: Out of these, current density remains constant in a wire whose cross-sectional area increases linearly from its one end to other as current density is : It is a current per unit area that depends on the area of cross-section. Drift speed is given as : OR A capacitor of capacitance C1 is charged to a potential V1 while another capacitance C2 is charged to a potential difference V2. The capacitors are now disconnected from their respective charging batteries and connected in parallel to each other (i) Find the total energy stored in the two capacitors before they are connected. (ii) Find the total energy stored in the parallel combination of the two capacitors. (iii) Explain the reason for the difference of energy in parallel combination compared to the total energy before they are connected.    (5 Mark) Ans: (i) Finding the total energy before the capacitors are connected (ii) Finding the total energy in the parallel combination (iii)Reason for difference (i) We have Energy stored in a capacitor = 1/2 CV2 Therefore, Energy stored in the charged capacitors Therefore, the total energy stored = (ii) Let V be the potential difference across the parallel combination. Equivalent capacitance = (C+ C2) Since charge is a conserved quantity, we have (C1 + C2)V = C1V1 + C2V2 Therefore, the total energy stored in the parallel combination (iii) The total energy of the parallel combination is different (less) from the total energy before the capacitors are connected. This is because some energy gets used up due to the movement of charges. Offer running on EduRev: Apply code STAYHOME200 to get INR 200 off on our premium plan EduRev Infinity! 67 docs , , , , , , , , , , , , , , , , , , , , , ;
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8234158754348755, "perplexity": 1025.155101002154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00623.warc.gz"}
http://en.wikipedia.org/wiki/Wave_impedance
# Wave impedance The wave impedance of an electromagnetic wave is the ratio of the transverse components of the electric and magnetic fields (the transverse components being those at right angles to the direction of propagation). For a transverse-electric-magnetic (TEM) plane wave traveling through a homogeneous medium, the wave impedance is everywhere equal to the intrinsic impedance of the medium. In particular, for a plane wave travelling through empty space, the wave impedance is equal to the impedance of free space. The symbol Z is used to represent it and it is expressed in units of ohms. The symbol η (eta) may be used instead of Z for wave impedance to avoid confusion with electrical impedance. The wave impedance is given by $Z = {E_0^-(x) \over H_0^-(x)}$ where $E_0^-(x)$ is the electric field and $H_0^-(x)$ is the magnetic field, in phasor representation. In terms of the parameters of an electromagnetic wave and the medium it travels through, the wave impedance is given by $Z = \sqrt {j \omega \mu \over \sigma + j \omega \varepsilon}$ where μ is the magnetic permeability, ε is the electric permittivity and σ is the electrical conductivity of the material the wave is travelling through. In the equation, j is the imaginary unit, and ω is the angular frequency of the wave. In the case of a dielectric (where the conductivity is zero), the equation reduces to $Z = \sqrt {\mu \over \varepsilon }.$ As usual for any electrical impedance, the ratio is defined only for the frequency domain and never in the time domain. ## Wave impedance in free space In free space the wave impedance of plane waves is: $Z_0 = \sqrt{\frac{\mu_0} {\epsilon_0}}$ and: $c_0 = \frac{1}{\sqrt{\mu_0 \epsilon_0}}$ hence, to the same accuracy as the 1983 definition of c, the value in ohms is: $Z_0 = \mu_0 c_0 = 376.730313$ ## Wave impedance in an unbounded dielectric In a perfect dielectric, $\scriptstyle \mu=4\pi \times 10^{-7}$ H/m and $\varepsilon = \varepsilon_r \times 8.854\times 10^{-12}$ F/m. So, the value of wave impedance in a perfect dielectric is $Z \approx {377 \over \sqrt {\varepsilon_r} }\,\Omega$. In a perfect dielectric, the wave impedance can be found by dividing Z0 by the square root of the dielectric constant. In anything else, the formula becomes larger and a complex number is the result. ## Wave impedance in a waveguide For any waveguide in the form of a hollow metal tube, (such as rectangular guide, circular guide, or double-ridge guide), the wave impedance of a travelling wave is dependent on the frequency $f$, but is the same throughout the guide. For transverse electric (TE) modes of propagation the wave impedance is $Z = \frac{Z_{0}}{\sqrt{1 - \left( \frac{f_{c}}{f}\right)^{2}}} \qquad \mbox{(TE modes)},$ where fc is the cut-off frequency of the mode, and for (TM) modes $Z = Z_{0} \sqrt{1 - \left( \frac{f_{c}}{f}\right)^{2}} \qquad \mbox{(TM modes)}$ Above the cut-off (f > fc), the impedance is real (resistive) and the wave carries energy. Below cut-off the impedance is imaginary (reactive) and the wave is evanescent. These expressions neglect the effect of resistive loss in the walls of the waveguide. For a waveguide entirely filled with a homogeneous dielectric medium, similar expressions apply, but with the wave impedance of the medium replacing Z0. The presence of the dielectric also modifies the cut-off frequency fc. For a waveguide or transmission line containing more than one type of dielectric medium (such as microstrip), the wave impedance will in general vary over the cross-section of the line. ## References This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9589452147483826, "perplexity": 502.1078525243311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678699721/warc/CC-MAIN-20140313024459-00053-ip-10-183-142-35.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/68614/why-do-these-inequalities-in-metric-spaces-hold
# Why do these inequalities in metric spaces hold? The other day I stumbled across some inequalities regarding properties of metric spaces. I'm curious to see a proof of why it holds. Suppose $(X,\rho)$ is any metric space. For a given $\epsilon\gt 0$, I let $N(X,\epsilon)$ denote the least $n$ such that $X=\bigcup\limits_{i=1}^n U_i$ where $U_i$ are sets such that $\operatorname{diam}(U_i)\leq 2\epsilon$. I also denote by $M(X,\epsilon)$ the greatest number of $m$ points $x_i$, $1\leq i\leq m$ such that $\rho(x_i,x_j)\gt\epsilon$ whenever $i\neq j$. With this notation, what is it that $N(X,\epsilon)\leq M(X,\epsilon)$ and $M(X,\epsilon)\leq N(X,\epsilon/2)$? Thanks. - For given $\epsilon$, pick a set of $M(X,\epsilon)$ points at distances greater than $\epsilon$, and form closed balls with radius $\epsilon$ around them. If there is a point in $X$ that belongs to none of these balls, we can add it to the set, contradicting the maximality of $M(X,\epsilon)$. Thus these $M(X,\epsilon)$ sets of diameter $2\epsilon$ cover $X$, and hence $N(X,\epsilon)\le M(X,\epsilon)$. For the other direction, note that a set with diameter $2\epsilon/2=\epsilon$ can contain at most one point of a set of $M(X,\epsilon)$ points at distances greater than $\epsilon$; thus we need at least $M(X,\epsilon)$ such sets to cover $X$. - Let $S$ be a set of the greatest number of points $x_i$, $1 \leq i \leq M(X,\epsilon) =: m$ such that $\rho(x_i, x_j) > \epsilon$ (call this property P). Define balls centered around $x_i$ with radius $\epsilon$, $B(x_i, \epsilon) = \{ y \in X | \rho(x_i, y) \leq \epsilon \}$ Claim: $\displaystyle X = \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$ Proof: $\bigcup _{1 \leq i \leq m} B(x_i, \epsilon) \subseteq X$ is obvious. Consider an arbitrary $x \in X$. If $\forall i \leq m, \rho (x, x_i) > \epsilon$, then we can add $x$ to $S$ and which will contradict the fact that it is the greatest set with property P. Thus there must exist at least one index $1 \leq j \leq m$ such that $\rho (x, x_j) \leq \epsilon$. But this means $x \in B(x_j,\epsilon)$ and thus $x \in \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$. Hence $X \subseteq \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$ and we are done. Due to the fact that diam$(B(x_i,\epsilon)) \leq 2\epsilon$ and the claim, we have: $N(X,\epsilon) \leq m$ The idea for the second inequality is to consider the family of sets $T = \{ U_i\}$ for $1 \leq i \leq N(X,\frac{\epsilon}{2})$. Construct an injective map $\phi:S \to T$. $\phi$ maps $x_i$ to a set in the family $T$ that contains $x_i$. Can you complete the argument? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9960659742355347, "perplexity": 60.04142109627006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00410-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/directional-derivatives-of-functions-from-rn-to-rm
Directional Derivatives of Functions from Rn To Rm Directional Derivatives of Functions from Rn To Rm We recent defined what the concept of a partial derivative of a function $\mathbf{f} : S \to \mathbb{R}^m$ (where $S \subseteq \mathbb{R}^n$ is open). We now extend this definition to the more general directional derivative. Definition: Let $S \subseteq \mathbb{R}^n$ be open, $\mathbf{c} \in S$, and $\mathbf{f} : S \to \mathbb{R}^m$. Let $\mathbf{u} \in \mathbb{R}^n$. Then the Directional Derivative of $\mathbf{f}$ at $\mathbf{c}$ in the Direction of $\mathbf{u}$ is defined as $\displaystyle{\mathbf{f}'(\mathbf{c}, \mathbf{u}) = \lim_{h \to 0} \frac{\mathbf{f}(\mathbf{c} + h \mathbf{u}) - \mathbf{f}(\mathbf{c})}{h}}$ provided that this limit exists. There are a few important things to point out. First, $\mathbf{f}$ is a function which maps elements (vectors) in an open subset $S$ of $\mathbb{R}^n$ to elements (vectors) in $\mathbb{R}^m$. Secondly, if $\mathbf{u} = 0$ then $\mathbf{f}'(\mathbf{c}, \mathbf{u}) = \mathbf{f} (\mathbf{c}, \mathbf{0}) = \mathbf{0}$. Thirdly, note that $\mathbf{f}'(\mathbf{c}, \mathbf{u})$ is itself a vector in $\mathbb{R}^m$! If $\mathbf{u} = \mathbf{e}_k$ then the definition of the directional derivative of $\mathbf{f}$ in the direction of the unit vector in the direction of the $k^{\mathrm{th}}$ coordinate axis is simply the partial derivative of $\mathbf{f}$ with respect to the $k^{\mathrm{th}}$ variable. We now state an important theorem regarding directional derivatives. Theorem 1: Let $S \subseteq \mathbb{R}^n$ be open, $\mathbf{c} \in S$, and $\mathbf{f} : S \to \mathbb{R}^m$ with $\mathbf{f} = (f_1, f_2, ..., f_m)$. Let $\mathbf{u} \in \mathbb{R}^n$. Then the directional derivative of $\mathbf{f}$ at $\mathbf{c}$ in the direction of $\mathbf{u}$ exists if and only if the single variable vector-derivatives $f_k'(\mathbf{c}, \mathbf{u})$ exist for all $k \in \{ 1, 2, ..., m \}$ and $\mathbf{f}'(\mathbf{c} + h\mathbf{u}) = (f_1'(\mathbf{c}, \mathbf{u}), f_2'(\mathbf{c}, \mathbf{u}), ..., f_m'(\mathbf{c}, \mathbf{u}))$ • Proof: We write: (1) \begin{align} \quad \mathbf{f}'(\mathbf{c}, \mathbf{u}) &= \lim_{h \to 0} \frac{\mathbf{f}(\mathbf{c} + h\mathbf{u}) - \mathbf{f}(\mathbf{c})}{h} \\ &= \lim_{h \to 0} \frac{(f_1(\mathbf{c} + h\mathbf{u}), f_2(\mathbf{c} + h\mathbf{u}), ..., f_m(\mathbf{c} + h\mathbf{u})) - (f_1(\mathbf{c}), f_2(\mathbf{c}), ..., f_m(\mathbf{c}))}{h} \\ &= \lim_{h \to 0} \frac{(f_1(\mathbf{c} + h\mathbf{u}) - f_1(\mathbf{c}), f_2(\mathbf{c} + h\mathbf{u}) - f_2(\mathbf{c}), ..., f_m(\mathbf{c} + h\mathbf{u}) - f_m(\mathbf{c}))}{h} \\ &= \left ( \lim_{h \to 0} \frac{f_1(\mathbf{c} + h\mathbf{u}) - f_1(\mathbf{c})}{h}, \lim_{h \to 0} \frac{f_2(\mathbf{c} + h\mathbf{u}) - f_2(\mathbf{c})}{h}, ..., \lim_{h \to 0} \frac{f_m(\mathbf{c} + h\mathbf{u}) - f_m(\mathbf{c})}{h} \right ) \\ &= ( f_1'(\mathbf{c}, \mathbf{u}), f_2'(\mathbf{c}, \mathbf{u}), ..., f_m'(\mathbf{c}, \mathbf{u})) \quad (*) \end{align} • $\Rightarrow$ if $\mathbf{f}'(\mathbf{c} + h\mathbf{u})$ exists then each coordinate of the directional derivative exists and by $(*)$ this implies that the directional derivatives of the coordinate functions, $f_k'(\mathbf{c} + h\mathbf{u})$ exist for each $k \in \{1, 2, ..., m \}$. • $\Leftarrow$ Suppose that $f_k'(\mathbf{c} +h\mathbf{u})$ exists for each $k \in \{1, 2, ..., m \}$. Then by $(*)$ each coordinate of $\mathbf{f}'(\mathbf{c} + h\mathbf{u})$ is finite and exists so $\mathbf{f}'(\mathbf{c} + h\mathbf{u})$ exists. $\blacksquare$ Corollary 1: Let $S \subseteq \mathbb{R}^n$ be open, $\mathbf{c} \in S$, and $\mathbf{f} : S \to \mathbb{R}^m$ with $\mathbf{f} = (f_1, f_2, ..., f_m)$. Then the directional derivative of $\mathbf{f}$ at $\mathbf{c}$ with respect to the $k^{\mathrm{th}}$ variable exists if and only if the single variable vector-derivatives $D_k f_j(\mathbf{c})$ exist for all $j \in \{ 1, 2, ..., m \}$ and $D_k (\mathbf{f}(\mathbf{c}) = (D_k f_1(\mathbf{c}, D_k f_2(\mathbf{c}), ..., D_k f_m (\mathbf{c}))$ • Proof: Set $\mathbf{u} = \mathbf{e}_k$ where $\mathbf{e}_k = (0, 0, ..., 0, \underbrace{1}_{k^{\mathrm{th}} \: coordinate}, 0, ..., 0)$. Then by Theorem 1 we have that: (2) \begin{align} \quad \mathbf{f}'(\mathbf{c}, \mathbf{e}_k) & = (f_1'(\mathbf{c}, \mathbf{e}_k), f_2'(\mathbf{c}, \mathbf{e}_k), ..., f_m'(\mathbf{c}, \mathbf{e}_k)) \\ D_k \mathbf{f} (\mathbf{c}) & = (D_k f_1(\mathbf{c}), D_k f_2(\mathbf{c}), ..., D_k f_m (\mathbf{c}) \quad \blacksquare \end{align}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000007152557373, "perplexity": 382.8998022984497}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578624217.55/warc/CC-MAIN-20190424014705-20190424040705-00480.warc.gz"}
https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html
# BetaPrimeDistribution represents a beta prime distribution with shape parameters p and q. BetaPrimeDistribution[p,q,β] represents a generalized beta prime distribution with scale parameter β. BetaPrimeDistribution[p,q,α,β] represents a generalized beta distribution of the second kind with shape parameter α. # Background & Context • BetaPrimeDistribution[p,q,α,β] represents a continuous statistical distribution defined over the interval and parametrized by four positive real numbers p, q, α, and β. The parameters p, q, and α are known as "shape parameters", β is known as a "scale parameter", and together, these parameters determine the overall shape of the probability density function (PDF) of the beta prime distribution. Depending on the values of p, q, α, and β, the PDF of the beta prime distribution may be unimodal or monotonic decreasing with potential singularities approaching the lower boundary of its domain. In addition, the tails of the PDF are "fat" in the sense that the PDF decreases algebraically rather than exponentially for large values . (This behavior can be made quantitatively precise by analyzing the SurvivalFunction of the distribution.) • BetaPrimeDistribution[p,q,α,β] is sometimes referred to as the generalized beta distribution of the second kind, the inverted beta distribution, or the type VI Pearson distribution (PearsonDistribution). The two- and three-argument forms and BetaPrimeDistribution[p,q,β] evaluate to BetaPrimeDistribution[p,q,1,1] and BetaPrimeDistribution[p,q,1,β], respectively, and are sometimes referred to as the standard beta prime distribution and the generalized beta prime distribution, respectively. • In Bayesian analysis, the beta prime distribution arises as a prior distribution for binomial proportions expressed as odds. The beta prime distribution has also been found to model many real-world phenomena. For example, the beta prime distribution has proven useful in empirically estimating security returns and in the development of option pricing models. More recently, it has been applied to the modeling of insurance loss processes. Elsewhere, the long tail of the beta prime distribution has been shown to make the distribution particularly well suited to modeling the frequency of behaviors likely to transmit diseases among individuals versus the actual transmission of such diseases. • RandomVariate can be used to give one or more machine- or arbitrary-precision (the latter via the WorkingPrecision option) pseudorandom variates from a beta prime distribution. Distributed[x,BetaPrimeDistribution[p,q,α,β]], written more concisely as xBetaPrimeDistribution[p,q,α,β], can be used to assert that a random variable x is distributed according to a beta prime distribution. Such an assertion can then be used in functions such as Probability, NProbability, Expectation, and NExpectation. • The probability density and cumulative distribution functions may be given using PDF[BetaPrimeDistribution[p,q,α,β],x] and CDF[BetaPrimeDistribution[p,q,α,β],x]. The mean, median, variance, raw moments, and central moments may be computed using Mean, Median, Variance, Moment, and CentralMoment, respectively. • DistributionFitTest can be used to test if a given dataset is consistent with a beta prime distribution, EstimatedDistribution to estimate a beta prime parametric distribution from given data, and FindDistributionParameters to fit data to a beta prime distribution. ProbabilityPlot can be used to generate a plot of the CDF of given data against the CDF of a symbolic beta prime distribution and QuantilePlot to generate a plot of the quantiles of given data against the quantiles of a symbolic beta prime distribution. • TransformedDistribution can be used to represent a transformed beta prime distribution, CensoredDistribution to represent the distribution of values censored between upper and lower values, and TruncatedDistribution to represent the distribution of values truncated between upper and lower values. CopulaDistribution can be used to build higher-dimensional distributions that contain a beta prime distribution, and ProductDistribution can be used to compute a joint distribution with independent component distributions involving beta prime distributions. • BetaPrimeDistribution is related to a number of other distributions. For example, BetaPrimeDistribution[p,q,a,b] simplifies to DagumDistribution[p,a,b] when , to SinghMaddalaDistribution[q,a,b] when , and to when both and . In addition, the two-parameter form has the same PDF as the type VI Pearson distribution PearsonDistribution[6,1,f/g,1/g,1/g,0] where and and is related to both type II and type IV versions of ParetoDistribution.The PDF of BetaPrimeDistribution is a transformation of that of BetaDistribution, while the four-parameter version BetaPrimeDistribution[p,q,a,1] is the quotient of two independent random variables XGammaDistribution[p,1,a,0] and YGammaDistribution[q,1,a,0]. BetaPrimeDistribution is also related to FRatioDistribution, DirichletDistribution, KumaraswamyDistribution, NoncentralBetaDistribution, and PERTDistribution. # Examples open allclose all ## Basic Examples(12) Probability density function of a beta prime distribution: Cumulative distribution function for a beta prime distribution: Mean and variance of a beta prime distribution: Median of a beta prime distribution: Probability density function of a generalized beta prime distribution: Cumulative distribution function of a generalized beta prime distribution: Mean and variance of a generalized beta prime distribution: Median of a generalized beta prime distribution: Probability density function of a generalized beta distribution of the second kind: Cumulative distribution function of a generalized beta distribution of the second kind: Mean and variance of a generalized beta distribution of the second kind: Median of a generalized beta distribution of the second kind: ## Scope(9) Generate a sample of pseudorandom numbers from a beta prime distribution: Compare its histogram to the PDF: Distribution parameters estimation: Estimate the distribution parameters from sample data: Compare the density histogram of the sample with the PDF of the estimated distribution: Skewness: For a generalized beta distribution of the second kind, skewness does not depend on β: Kurtosis: For a generalized beta distribution of the second kind, kurtosis does not depend on β: Different moments with closed forms as functions of parameters: Closed form for symbolic order: Different moments for a generalized beta distribution of the second kind: Closed form for symbolic order: Hazard function for a beta prime distribution: Generalized beta prime distribution: Generalized beta distribution of the second kind: Quantile function of a beta prime distribution: Generalized beta prime distribution: Generalized beta distribution of the second kind: Consistent use of Quantity in parameters yields QuantityDistribution: Find the median loss: ## Applications(2) BetaPrimeDistribution can be used to model losses: Remove the clear outlier, Andrew, the most destructive hurricane, and attach currency units: Fit generalized beta distribution to the data: Compare the histogram of the data with the PDF of the estimated distribution: Find the probability that a loss caused by a hurricane is over 3 billion dollars: Find the mean hurricane loss in US dollars: Simulate possible losses in millions of US dollars for the next 30 strong hurricanes: BetaPrimeDistribution can be used to model state per-capita incomes: Fit generalized beta distribution of the second kind to the data: Compare the histogram of the data to the PDF of the estimated distribution: Find the average income per capita: Find states with income close to the average: Find the median income per capita: Find states with income close to the median: Find the log-likelihood value: ## Properties & Relations(16) Parameter influence on the CDF of a generalized beta distribution of the second kind: BetaPrimeDistribution is closed under scaling by a positive factor: BetaPrimeDistribution is closed under taking inverse: Relations to other distributions: DagumDistribution is a special case of BetaPrimeDistribution: LogLogisticDistribution is a special case of BetaPrimeDistribution: FRatioDistribution is a special case of BetaPrimeDistribution: Beta prime distribution is a special case of the type 6 PearsonDistribution: ParetoDistribution type II is related to BetaPrimeDistribution: ParetoDistribution type IV is related to BetaPrimeDistribution: Beta prime distribution can be obtained as a transformation of BetaDistribution: Generalized beta distribution of the second kind is the distribution of the ratio of two independent random variables from GammaDistribution: Generalized beta of the second kind simplifies to beta prime: Generalized beta prime is a special case of generalized beta of the second kind: Generalized beta prime simplifies to beta prime distribution: ## Neat Examples(1) PDFs for different q values with CDF contours: Wolfram Research (2010), BetaPrimeDistribution, Wolfram Language function, https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html (updated 2016). #### Text Wolfram Research (2010), BetaPrimeDistribution, Wolfram Language function, https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html (updated 2016). #### BibTeX @misc{reference.wolfram_2020_betaprimedistribution, author="Wolfram Research", title="{BetaPrimeDistribution}", year="2016", howpublished="\url{https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html}", note=[Accessed: 28-February-2021 ]} #### BibLaTeX @online{reference.wolfram_2020_betaprimedistribution, organization={Wolfram Research}, title={BetaPrimeDistribution}, year={2016}, url={https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html}, note=[Accessed: 28-February-2021 ]} #### CMS Wolfram Language. 2010. "BetaPrimeDistribution." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2016. https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html. #### APA Wolfram Language. (2010). BetaPrimeDistribution. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/BetaPrimeDistribution.html
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9757434725761414, "perplexity": 1553.930001442404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00246.warc.gz"}
http://mathoverflow.net/questions/8232/chromatic-number-of-graphs-of-tangent-closed-balls/9131
Chromatic number of graphs of tangent closed balls The Koebe–Andreev–Thurston theorem gives a characterization of planar graphs in terms of disjoint circles being tangent. For every planar graph $G$ there is a disk packing whose graph is $G$. What happens when disks are replaced by closed balls? By closed balls of higher dimension? I have already asked one question about this here: Graphs of Tangent Spheres The question I want to ask here is what is known about the chromatic numbers of these graphs? I have updated the numbers and changed the arguments in the following based on some of the answers. Assume the chromatic number is 14 or more and we have the smallest such graph that is colorable with 14 or more colors. Take one of the smallest closed balls then since the kissing number for three dimensions is 12 there are at most 12 closed balls tangent to this closed ball. Remove this closed ball then the remaining graph can be colored in 13 or less colors. Color it with 13 colors. Then add the closed ball back in since it is tangent to only 12 closed balls it can be given one of the thirteen colors so we have the entire graph can be colored with thirteen colors which gives a contradiction so the chromatic number must be 13 or less. We have an lower bound of 6 from a spindle constructed according to David Eppstein's answer. Can we improve on the 6 to 13 range? We have the lower bound is a quadratic function and we have an upper bound that is exponential. Which of these two is right? Is there a case where closed balls of different sizes raise the chromatic number from closed balls the same size? Finally based on the existing chromatic numbers I am wondering if it is possible to answer this question. Is there a dimension where the chromatic number of the unit distance graph is different from the chromatic number of the graphs in that dimension of tangent closed balls. The unit distance graph is the set of all points in the $n$-dimensional space with two points connected if their distance is one. For dimension two the chromatic number is known to be in the range from 4 to 7. For dimension three the range is 6 to 15. For the graphs of tangent disks we have a chromatic number of 4 and for closed balls a range from 6 to 13. So the possibility that the chromatic numbers of the two types of graphs are the same has not yet been eliminated. So the specific question is what is known and what can be proved about the chromatic number of the graphs of tangent closed balls? - Excellent question! I've been thinking about the fact that sphere packings exhibit a "phase transition" in high dimensions, where the densest packings and highest kissing numbers tend to come from very non-rigid and even random-looking configurations. This seems like it would change the nature of the problem in high dimensions, but I don't have any evidence for this other than my gut feeling. –  Harrison Brown Dec 14 '09 at 3:31 By the way, was "disjoint spheres" or "disjoint disks" intended here? Because even in 2d the problems are different: disjoint disks give the Koebe-etc result and a tight bound of four colors, while disjoint circles (no three mutually tangent) can require five colors and it's an open problem of Ringel whether five suffice — see ics.uci.edu/~eppstein/junkyard/tangencies –  David Eppstein Mar 12 '10 at 20:54 Yes you are right, the terminology should be changed or the problem is changed. I have tried to change this here. So it should be disks I think hopefully the change fixes this. –  Kristal Cantwell Mar 12 '10 at 22:30 It's easy to form sets of five mutually-tangent spheres (say, three equal spheres with centers on an equilateral triangle, and two more spheres with their centers on the line perpendicular to the triangle through its centroid). Based on this, I think it should be possible to construct a set of spheres analogous to the Moser spindle [http://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson_problem] that requires six colors: spheres a, b, and c, where a and b have four mutual neighbors that are all adjacent to each other, a and c have another four mutual neighbors that are all adjacent to each other, neither a and b nor a and c are adjacent, but b and c are adjacent. I have no idea how tight this lower bound might be, but it's at least better than four. - Inspired by Bodarenko's counter-example to the Borsuk conjecture, I recently find many ball packings whose chromatic number is significantly higher than the dimension. There tangency graphs are all strongly regular. A note is available on arXiv and here (up to date). Here I list their parameters, dimensions and lower bounds (Lov\'asz number) for the chromatic numbers. Many of these parameters are for the complement to the more famous graph, e.g. Higman-Sims graph. • $(100, 77, 60, 56)$ (Higman-Sims graph), dimension 22, $\chi\ge 80/3$. • $(105, 72, 51, 45)$, dimension 20, $\chi\ge 25$. • $(120, 77, 52, 44)$, dimension 20, $\chi\ge 80/3$. • $(126, 75, 48, 39)$, dimension 20, $\chi\ge 26$. • $(162, 105, 72, 60)$ (Local McLaughlin), dimension 21, $\chi\ge 36$. • $(175, 102, 65, 51)$, dimension 21, $\chi\ge 35$. • $(176, 105, 68, 54)$, dimension 21, $\chi\ge 36$. • $(176, 85, 48, 34)$, dimension 22, $\chi\ge 88/3$. • $(243, 132, 81, 60)$ (Delsarte graph), dimension 22 $\chi\ge 45$. • $(253, 140, 87, 65)$, dimension 22, $\chi\ge 143/3$. • $(275, 162, 105, 81)$ (McLaughlin graph), dimension 22, $\chi\ge 55$. • $(276, 135, 78, 54)$, dimension 23, $\chi\ge 46$. • $(729, 520, 379, 350)$, dimension 112, $\chi\ge 621/5$. Furthermore, there are two infinite families with high chromatic number (here $q$ is a prime power). • $(q^3, (q+1)(q^2-1)/2, (q+3)(q^2-3)/4+1, (q+1)(q^2-1)/4)$ (complements to Hubaut's C20), dimension $q^2-q$, $\chi\ge q^2$. • $((q^3+1)(q+1), q^4, (q^2+1)(q-1)q, q^3(q-1))$ (complement to the point graph of the generalized quadrangle $(q,q^2)$), dimension $q^3-q^2+q$, $\chi=q^3+1$. Note that the last case is an equality. This is the first non-constant lower bound for $\chi-d$. Hope this helps future improvement. - I would guess that the unit distance graph has a higher chromatic number than the tangency graph of a sphere packing in high dimensions, but this is surely an open question. Here are some known results: The best known lower bound for the chromatic number of the unit distance graph of Euclidean n-space is by Raigorodskii (Electronic Notes in Discrete Mathematics 28 (2007) 273–280): $1.239\dots^n$. On the other hand, the best upper bound for the chromatic number of the tangency graph of a packing of spheres in dimension n that I can think of is the following simple-minded one: Let $\kappa_n$ denote the kissing number in n-dimensional Euclidean space. This is the maximum number of non-overlapping unit spheres that can touch some fixed unit sphere. Then the chromatic number of the tangency graph of a sphere packing is at most $1+\kappa_n$. This is seen using a greedy colouring as follows: take a sphere of smallest radius. Since all spheres touching it have radius at least as large, their number is bounded above by $\kappa_n$. So we can colour this sphere and remove it from the graph. Repeat until the graph is empty. By the Kabatiansky-Levenshtein bound (Problems of Information Transmission 14 (1978) 1–17), $\kappa_n\leq 1.32042\dots^{n}$. This is some distance away from the unit distance lower bound, and I guess it won't be easy to decide whether the unit distance chromatic number is really larger when the dimension is large. - Is there any reason to believe that either of the Raigorodskii and Kabatiansky-Levenshtein bounds are close to best possible, though? Just a tiny improvement in either constant would solve the problem... –  Harrison Brown Dec 14 '09 at 3:51 No, I think both can be improved, but it will be quite difficult. The distance between the two bases is not just a tiny amount IMHO. Also, improving the Kabatiansky-Levenshtein bound will be a big deal. –  Konrad Swanepoel Dec 14 '09 at 8:18 D'oh! For some reason I read the constant in Raigorodskii's bound as 1.293 instead of 1.239. (Apparently I'm temporarily dyslexic when I'm looking at this page.) That's where the hope that someone could push the base just a bit higher came from, but 0.09 is a lot bigger than 0.03. –  Harrison Brown Dec 14 '09 at 16:56 I think I misunderstand something: the bounds of Raigorodskii and Kabatiansky-Levenshtein are for different problems! It is like comparing apples and alligators! –  Boris Bukh Dec 17 '09 at 12:17 @Boris: One of the questions that Kristal asks above (in the last paragraph), is whether it can be shown if the maximum chromatic number of tangency graphs of a packing of balls in dimension n (alligator) is different from the chromatic number of the n-dimensional unit distance graph (apple). My answer was just an attempt to explain what I know about both numbers, and that I think it would be difficult to find out if alligators are smaller than apples ;-) –  Konrad Swanepoel Dec 17 '09 at 14:26 I find a paper of Hiroshi Maehara (http://link.springer.com/article/10.1007%2Fs00373-007-0702-7). He studies packing of a) closed balls, b) balls on a table, c) unit balls, d) unit balls within a restricted height. • For packing of closed balls, the chromatic number he obtained is between 6 and 13, as expected by many here. • For packing of balls on a table, the chromatic number is between 5 and 6; • For packing of unit balls, the chromatic number is between 5 and 10. • For packing of unit balls within height $0$ and $2+\sqrt2$, the chromatic number is between 4 and 6. • For packing of unit balls with restricted height, the cromatic number is at least 4 or 5 depending on the restriction. - Guys, I don't understand it. I just posted a better lower bound in another answer mathoverflow.net/a/195846/20595, then this answer got two votes ... what is happening? –  Hao Chen Feb 6 at 22:12 When you bump a question, all answers get more attention. –  Douglas Zare Feb 11 at 11:05 One of the questions I asked above was if the addition of different size spheres changes maximum chromatic number. I can prove it does not for dimension equals 2. We have the set of graphs of tangent circles is the same as the set of planar graphs. Now the maximum chromatic number of a planar graph is 4 by the four color theorem. What is needed is a graph of circles of the same radius which has chromatic number four. Assume that every such graph can be colored with three colors Now look any four circles of unit radius $a$, $b$, $c$ and $d$ with $a$, $b$ and $c$ mutually tangent and $b$, $c$ and mutually tangent. The only way this can happen is if $a$, $b$ and $c$ have their centers forming an equilateral triangle of side 2 as do $b$, $c$ and $d$. If there is a three coloring we have $a$ and $d$ forced to be the same color. We can arrange a series of these graphs in a cycle such that $a$, $d$,... $x$ have the same color and $x$ is tangent to $a$. This will give a contradiction. So this graph has chromatic number four and we are done. So the next case to look at is dimension 3. By an argument similar to above the chromatic number of graphs of tangent unit spheres is at least 5 and we have the chromatic number for these graphs is 9 or less by "On the independence number of coin graphs" by János Pach and Géza Tóth, in Geombinatorics, vol. 6, num. 1, 1996, p. 30-33. So we have the range 5 to 9 for maximal chromatic number of graphs of tangent three dimensional spheres of unit radius as opposed to the range 6 to 12 for maximal chromatic number of graphs of tangent three dimensional spheres. - Graphs of tangent unit disks in the plane are called "penny graphs". Finding an example requiring four colors such as you do above is exercise 8.4.7 of <i>Pearls in Graph Theory</i>, Hartsfield and Ringel, Dover 2003, p.177. –  David Eppstein Dec 13 '09 at 19:53 Certainly 12 is an upper bound for dimension 3 with not necessarily equal radii. To see this, it suffices to show that any such graph has minimum degree at most 11. Take a smallest sphere whose centre is extreme in some dimension (e.g. leftmost). This sphere cannot touch 12 other spheres, because if it did, all these spheres would have to be smallest as well, and one would be further left, contradicting our choice. - It is possible for 12 slightly larger spheres to touch a smallest one. But you're right, then one of them would have to be more to the left. G. Kertesz (Nine points on the hemisphere. Colloq. Math. Soc. J. Bolyai (Intuitive Geometry, Szeged 1991) 63 (1994), 189–196) showed that at most 8 unit spheres can touch an open hemisphere of a unit sphere, so in fact you have minimum degree 8. –  Konrad Swanepoel Mar 12 '10 at 18:14 I don't understand the first sentence in the second paragraph. What if the smallest is not extreme? If you are talking about unit spheres, then order the sphere by height and you get one-side kissing number plus one as upper bound. If you allow the spheres to have different radii, then order the spheres by size and you get the kissing number plus one as the upper bound. How can you order them by two criterions at the same time? –  Hao Chen Feb 7 at 14:57 A clarification: Choose the sphere $S$ to be smallest, and subject to that choose it to be extreme. I claim without proof that if a sphere kisses 12 others of equal or greater radius, then all 13 spheres have equal radius. Furthermore the sphere kissed by 12 others has its centre in the convex hull of the other 12 centres, and therefore is not extreme (of these 13 spheres) in any direction. –  Andrew D. King Feb 12 at 17:58 One of the questions I asked was if there was a difference between the chromatic number of the unit distance graph and the graphs of tangent spheres. In dimension 2 there is an answer to this under a certain choice of axioms under the axiom system ZF+DC+LM. In that case the chromatic number of the plane is five or greater. Then the chromatic number of the unit distance graph will be bigger than either the maximum chromatic number graphs of tangent circles or the maximum chromatic number of graphs of unit spheres which are both 4. The axiom system makes all sets measurable that is LM, the DC is dependent choice which is weaker than the axiom of choice. This result is from the following paper: Axiom of choice and chromatic number of the plane Journal of Combinatorial Theory Series A Volume 103 , Issue 2 (August 2003) Pages: 387 - 391 I think it is also discussed in The Mathematical Coloring Book by Alexander Soifer. - For $n$ we can get $n+2$ either by using the Moser spindle as noted in another answer or simply by taking a simplex which has $n+1$ spheres each tangent to each other and then inserting a smaller sphere whose center is the center of the simplex and which is tangent to the other spheres this set of $n+2$ spheres tangent to each other needs $n+2$ colors so we have the chromatic number is $n+2$ or more. For dimension 5 we can take all points whose coordinates are one and zero and which have an even number of 1's. There are 16 such points. If two points are different at least two coordinates are different. Suppose we have an 7 coloring of these points with all coordinates differing by 2. Then three of them must have the same color. Then no two points with four ones can have the same color or they would differ by two coordinates. The point with all zeros must have different color than any point with two ones or they would differ by two coordinates. No three points with two ones can have the same color or two would share the same coordinate value and they would differ by two coordinates. So These points must be colored with eight different colors two avoid two the same color differing by two coordinates. So if we look at the spheres of radius $\sqrt2$ centered about the points with zeros and ones as coordinates with an even number of ones. They form a graph of tangent spheres in five dimensional space with chromatic number 8. We can take a sphere centered at the point whose coordinates are all 1/2 and choose a radius to get it tangent to all the spheres in the graph this graph will have chromatic number 9. So we have the chromatic number of 5 dimensional space is 9 or more. Here again this is realizable only because $\sqrt2$ is the minimum distance between these points a lot of similar constructions that work for the chromatic number of a space don't work because they use a distance other than the minimum. - Either I or Cibulka misunderstand the lemma of Erdös and Sös. The graph in your first paragraph is a Johnson graph J(n,3), it's chromatic number is at most $n$ (Graham and Sloane, 1980). I then checked the paper of Larman and Rogers, it seems that Equation 2.1 in Cibulka's thesis should be about the clique number, not the independence number. Would you please confirm these things? –  Hao Chen Jan 10 '14 at 21:05 FYI, in the paper of Larman and Rogers, the lemma goes like this: "if more than n' (n'=n, n+1, n-1 as you said) triples are chosen from n objects, at least one pair of triples have EXACTLY one element in common" –  Hao Chen Jan 10 '14 at 21:23 I was able to recheck the problem one year later. It's actually not a mistake of Cibulka. His "2-distance graph" is not hamming distance, but Euclidean distance. So two vertices are connected if they differ in 4 (not 2) coordinates (i.e. share exactly one coordinate). This is the same setting of Larman and Rogers. However, the graph you discribe in the first paragraph is a $\sqrt{2}$-graph, so you can not use the lemma of Erd\"os and S\'os. –  Hao Chen Oct 20 '14 at 11:56 I think your objection about the lemma I used is right and I cant fix it so I have removed the first two paragraphs which had this result. –  Kristal Cantwell Feb 6 at 22:35 @cantwell, thank you for response. Your halved-cube example is wonderful, and was one of the motivation behind this answer. Actually, his problem can be regarded as the "opposite" of the Borsuk conjecture, which is very interesting. –  Hao Chen Feb 6 at 22:43 You can improve the upper bound in dimension 3 to 12. Brooks' theorem says that a graph with maximum degree $n$ has chromatic number equal to $n+1$ if and only if the graph is a complete graph or an odd cycle. So the only graph that wouldn't be 12-colorable would be $K_{13}$. But you can easily check that there's no way to make 11 unit spheres touch two kissing spheres in dimension 3 without overlap. Edit: Ahh, that bound's already been beaten I see! Still, I think a similar argument will give an upper bound of 24 for the chromatic number of tangent graphs of 4-spheres, and probably $\kappa_n$ rather than $\kappa_n + 1$ for the chromatic number of tangent graphs of n-spheres. Not much of an improvement, but an improvement. More generally, though, I want to propose an approach to try to asymptotically beat the Kabatiansky-Levenshtein bound. First note that the class of graphs that can be realized as tangent graphs of unit $n$-spheres is closed under taking subgraphs. So if we can show that every such graph has a vertex of degree at most $\delta(n)$, then we can color greedily to get $\chi(G) \leq \delta(n) + 1$. This is the approach taken by Pach and Toth, for instance. I suggest that we try to bound the average degree of a tangent graph of unit $n$-spheres. I think this may well be asymptotically smaller than the kissing number in large dimension, basically since the lattice kissing number is so much smaller than the nonlattice kissing number in general. This is more doable than bounding the minimum degree, I think, since the average degree is robust against small local changes. Do I have any idea how to actually bound the average degree? Nothing that seems all that promising, unfortunately, although it's probably worth looking into the coding-theory analogue of the average degree. One idea I did have was to consider a "thin subpacking" which would essentially have codimension 1 -- and whose removal would disconnect the graph -- and try to induct on dimension. The problem is, unless the packing's a laminated lattice, such a "thin subpacking" doesn't really correspond to a packing in one less dimension, and I can't fix this problem in a way that gives me a reasonable bound. But maybe someone else can. - What do you mean by a tangent graph of unit n-spheres? Is this in a packing of unit spheres, or in any collection of (possibly intersecting) unit spheres, which would be equivalent to the unit distance graph in Kristal's question? In the non-overlapping case, by considering the average degree, you are essentially counting (double) the number of occurrences of the minimum distance in a set of n points. Here the average degree is bounded above by the kissing number and from below essentially by the lattice kissing number. In the overlapping case (unit distance graph) there is no bound... –  Konrad Swanepoel Dec 14 '09 at 16:22 ... on the average degree, because of the Lenz construction. –  Konrad Swanepoel Dec 14 '09 at 16:23 @Konrad: Packing of unit spheres -- I was aiming for "graph of tangent spheres" but got confused. So a lot of the post is wildly speculative, but basically I conjecture that the average degree in a graph of tangent spheres grows as the lattice kissing number rather than the kissing number. In high dimensions the lattice kissing number should grow relatively slowly, although I don't have an actual upper bound. (Does someone else?) –  Harrison Brown Dec 14 '09 at 16:38 For four dimensions the graph has a minimum number of tangent spheres of 24 at a minimial sphere but elsewhere their could be 25 more spheres tangent to a sphere because not are all the same size. So I don't see how I can get the number below 25. Also it is not clear that the closest packing is always a lattice in 10 dimensions there are irregular packings which have greater density see en.wikipedia.org/wiki/Sphere_packing. –  Kristal Cantwell Dec 15 '09 at 2:18 @Kristal: My post is entirely about the case where all the spheres have radius 1. I agree that Brooks' theorem doesn't help if the spheres are different sizes. And yes, the irregularity of good packings in high dimension is what I want to try to take advantage of! If it's true that the average kissing number of the spheres in any packing grows at a rate comparable to the lattice kissing number rather than irregular packings, and that seems plausible, we could get bounds resembling the lattice number, which would be nice. –  Harrison Brown Dec 15 '09 at 2:39 I can show that the maximum chromatic number of a graph of tangent unit hyperspheres in four dimensions is 19 or less. Take the two hyperspheres in the graph at whose centers have maximum distance $d$. This line between the centers will intersect each hypersphere twice two of these points of interesection will be between the two centers and the other two will contain the other four points between them. One these latter two points construct hyperplanes perpendicular to the line. If any hypersphere cuts the hyperplane associated with one of these two hyperspheres or lies on the other side of this hyperplane then the distance between its center and the other hypersphere will be greater than $d$ so any hypersphere that is tangent to one of these hypersphere is will lie on the halfspace determined by the hyperplane that contains the hypersphere. That means we with regards to this hyperspere and hyperplane the definition of one-sided kissing number is satisfied. We have the one-sided kissing number of four dimensional space is 18. See this paper. So this sphere can have at most 18 tangent hyperspheres. Then its chromatic number will be at most 19. - Once again, by Brooks' theorem, we can improve that to 18 if we can show that $K_18$ can't be represented as a graph of tangent unit 4-spheres! :) (Can we show that, by the way? I haven't done the math, but the volume argument seems like it might fail. But that's a pretty weak analysis, so presumably something better will work.) –  Harrison Brown Dec 16 '09 at 21:57 @Harrison: You can have at most n+1 pairwise tangent unit spheres in dimension n. If you take one of the centers at the origin, then it is not difficult to show that the other centers are linearly independent. –  Konrad Swanepoel Dec 16 '09 at 22:06 That's... Huh. Wow, yeah, I can't believe I didn't think of that. Essentially the same proof but more geometrically stated: the abstract simplicial complex corresponding to an m-simplex isn't a (geometric) simplicial complex if you embed it in n-space, for n < m-1. Which I knew, but totally didn't think about. So the chromatic number is at most 18. –  Harrison Brown Dec 16 '09 at 22:45 There are hyperspheres with more than 18 tangent spheres in this graph The minimum number of tangencies is 18 at two sphere so that limits the chromatic number to 19. I think Brooks theorem applies in this case only to graphs with an a maximum number of adjacent points at 18 for all points. There are only two special points which have this property of having the one-sided kissing number 18 others could have as many as 24 hyperspheres adjacent. –  Kristal Cantwell Dec 16 '09 at 23:33 @Kristal: You're right, of course; I misunderstood the argument, and somehow my sanity check failed. (I know that the kissing number in dimension 4 is 24!) –  Harrison Brown Dec 17 '09 at 1:47 Wondering why Andrew D King's answer (or Konrad Swanepoel's comment on it) was not upvoted. (As a newbie, I cannot, nor can I comment.) If we cheekily let $\hat{\kappa}_n$ denote the hemispherical kissing number in $n$-dimensional space, defined here to mean the maximum number of mutually disjoint $n$-dimensional unit hyperspheres tangentially adjacent to a $n$-dimensional unit hemihypersphere, then the corresponding chromatic number is at most $\hat{\kappa}_n+1$ (by Andrew's argument). Konrad mentioned that $\hat{\kappa}_3\le8$, giving an upper bound of $9$ for colouring sphere packings. For larger $n$, certainly the base in the Kabatiansky-Levenshtein bound on $\kappa_n$ can be beaten for $\hat{\kappa}_n$! Edit: My original post would have implied a nice short proof of the $5$-colour theorem, but alas there are easy counterexamples to the claim that the maximum minimum degree (a.k.a. degeneracy) is at most $\hat{\kappa}_n$. However, $\kappa_n$ is an upper bound for colouring $n$-dimensional hypersphere packings using Andrew's argument. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8845821619033813, "perplexity": 367.81506970009144}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462548.53/warc/CC-MAIN-20150226074102-00228-ip-10-28-5-156.ec2.internal.warc.gz"}
https://deepai.org/publication/convergent-adaptive-hybrid-higher-order-schemes-for-convex-minimization
# Convergent adaptive hybrid higher-order schemes for convex minimization This paper proposes two convergent adaptive mesh-refining algorithms for the hybrid high-order method in convex minimization problems with two-sided p-growth. Examples include the p-Laplacian, an optimal design problem in topology optimization, and the convexified double-well problem. The hybrid high-order method utilizes a gradient reconstruction in the space of piecewise Raviart-Thomas finite element functions without stabilization on triangulations into simplices or in the space of piecewise polynomials with stabilization on polytopal meshes. The main results imply the convergence of the energy and, under further convexity properties, of the approximations of the primal resp. dual variable. Numerical experiments illustrate an efficient approximation of singular minimizers and improved convergence rates for higher polynomial degrees. Computer simulations provide striking numerical evidence that an adopted adaptive HHO algorithm can overcome the Lavrentiev gap phenomenon even with empirical higher convergence rates. ## Authors • 9 publications • 2 publications 11/30/2020 ### Unstabilized Hybrid High-Order method for a class of degenerate convex minimization problems The relaxation in the calculus of variation motivates the numerical anal... 09/17/2020 ### A stabilizer free weak Galerkin finite element method on polytopal mesh: Part III A weak Galerkin (WG) finite element method without stabilizers was intro... 12/17/2020 ### A priori error analysis of high-order LL* (FOSLL*) finite element methods A number of non-standard finite element methods have been proposed in re... 06/17/2019 ### Gradient Flow Finite Element Discretizations with Energy-Based Adaptivity for the Gross-Pitaevskii Equation We present an effective adaptive procedure for the numerical approximati... 05/19/2021 ### High-Order Quadrature on Multi-Component Domains Implicitly Defined by Multivariate Polynomials A high-order quadrature algorithm is presented for computing integrals o... 09/16/2013 ### On Convergent Finite Difference Schemes for Variational - PDE Based Image Processing We study an adaptive anisotropic Huber functional based image restoratio... 07/08/2021 ### p-refined RBF-FD solution of a Poisson problem Local meshless methods obtain higher convergence rates when RBF approxim... ##### This week in AI Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035954833030701, "perplexity": 2298.5115229330654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00702.warc.gz"}
http://cp3-origins.dk/a/18557
The V-A structure of the weak interactions leads to definite amplitude hierarchies in exclusive heavy-to-light decays mediated by $b \to (d,s)\gamma$ and $b \to (d,s) \ell \bar{\ell}$. However, the extraction of right-handed currents beyond the Standard Model is contaminated by V-A long-distance contributions leaking into right-handed amplitudes. We propose that these quantum-number changing long-distance contributions can be controlled by considering the almost parity-degenerate vector meson final states by exploiting the opposite relative sign of left- versus right-handed amplitudes. For example, measuring the time-dependent rates of a pair of vector $V(J^P=1^-)$ and axial $A(1^+)$ mesons in $B \to (V,A) \gamma$, up to an order of magnitude is gained on the theory uncertainty prediction, controlled by long-distance ratios to the right-handed amplitude. This renders these decays clean probes to null tests, from the theory side.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9696950912475586, "perplexity": 1693.9613362581933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216051.80/warc/CC-MAIN-20180820082010-20180820102010-00691.warc.gz"}
http://math-mprf.org/journal/articles/id810/
Filling the Hypercube in the Supercritical Contact Process in Equilibrium #### A. Simonis 1998, v.4, №1, 113-130 ABSTRACT We consider the supercritical d-dimensional contact process, obtaining new results on the asymptotic distribution of the first occurrence time of an anomalous density of particles in a fixed region of the space, when the process starts from equilibrium. In particular, we get uniform sharp bounds for the rates of convergence in distribution to a mean 1 exponential random variable, when suitably rescaled, of the first time for which the hypercube is totally occupied. Keywords: contact process,occurrence time of a rare event,large deviations,infinite particle system
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462446331977844, "perplexity": 748.0608365530472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891976.74/warc/CC-MAIN-20180123131643-20180123151643-00607.warc.gz"}
https://research.nsu.ru/ru/publications/search-for-exclusive-higgs-and-z-boson-decays-to-%CF%95%CE%B3-and-%CF%81%CE%B3-with-t
# Search for exclusive Higgs and Z boson decays to ϕγ and ργ with the ATLAS detector The ATLAS collaboration Результат исследования: Научные публикации в периодических изданияхстатьярецензирование 14 Цитирования (Scopus) ## Аннотация A search for the exclusive decays of the Higgs and Z bosons to a ϕ or ρ meson and a photon is performed with a pp collision data sample corresponding to an integrated luminosity of up to 35.6 fb−1 collected at s=13 TeV with the ATLAS detector at the CERN Large Hadron Collider. These decays have been suggested as a probe of the Higgs boson couplings to light quarks. No significant excess of events is observed above the background, as expected from the Standard Model. Upper limits at 95% confidence level were obtained on the branching fractions of the Higgs boson decays to ϕγ and ργ of 4.8 × 10−4 and 8.8 × 10−4, respectively. The corresponding 95% confidence level upper limits for the Z boson decays are 0.9 × 10−6 and 25 × 10−6 for ϕγ and ργ, respectively. Язык оригинала английский 127 37 Journal of High Energy Physics 2018 7 https://doi.org/10.1007/JHEP07(2018)127 Опубликовано - 1 июл. 2018 ## Fingerprint Подробные сведения о темах исследования «Search for exclusive Higgs and Z boson decays to ϕγ and ργ with the ATLAS detector». Вместе они формируют уникальный семантический отпечаток (fingerprint).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9964582920074463, "perplexity": 4624.704360177876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00405.warc.gz"}
https://en.wikibooks.org/wiki/User:JMRyan/Test1
# User:JMRyan/Test1 ### Modus Tollens $\mathbf{T7.} \quad (\mathrm{P} \rightarrow \mathrm{Q}) \land \lnot \mathrm{Q} \rightarrow \lnot \mathrm{P}\,\!$ 1. $(\mathrm{P} \rightarrow \mathrm{Q}) \land \lnot \mathrm{Q}\,\!$ Assumption    $[(\mathrm{P} \rightarrow \mathrm{Q}) \land \lnot \mathrm{Q} \rightarrow \lnot \mathrm{P}]\,\!$ 2. $\mathrm{P}\,\!$ Assumption    $[\lnot \mathrm{P}]\,\!$ 3. $\mathrm{P} \rightarrow \mathrm{Q}\,\!$ 1 KE 4. $\mathrm{Q}\,\!$ 2, 3 CE 5. $\lnot \mathrm{Q}\,\!$ 1 KE 6. $\lnot \mathrm{P}\,\!$ 2–5 NI 7 $(\mathrm{P} \rightarrow \mathrm{Q}) \land \lnot \mathrm{Q} \rightarrow \lnot \mathrm{P}\,\!$ 1–6 CI Now we use T7 to justify the following rule. Modus Tollens (MT) $(\varphi \rightarrow \psi)\,\!$ $\underline{\lnot \psi \quad \quad \ }\,\!$ $\lnot \varphi\,\!$ Modus Tollens is also sometimes known as 'Denying the Consequent'. Note that the following is not an instance of Modus Tollens, at least as defined above. $\lnot \mathrm{P} \rightarrow \lnot \mathrm{Q}\,\!$ $\underline{\mathrm{Q} \quad \quad \quad \quad}\,\!$ $\mathrm{P}\,\!$ The premise lines of Modus Tollens are a conditional and the negation of its consequent. The premise lines of this inference are a conditional and the opposite of its consequent, but not the negation of its consequent. The desired inference here needs to be derived as below. 1 $\lnot \mathrm{P} \rightarrow \lnot \mathrm{Q} \,\!$ Premise 2 $\mathrm{Q}\,\!$ Premise 3 $\lnot \lnot \mathrm{Q}\,\!$ 2 DNI 4 $\lnot \lnot \mathrm{P}\,\!$ 1, 3 CE 5 $\mathrm{P}\,\!$ 4 DNE Of course, it is possible to prove as a theorem: $(\lnot \mathrm{P} \rightarrow \lnot \mathrm{Q}) \land \mathrm{Q} \rightarrow \mathrm{P}\ .\,\!$ Then you can add a new inference rule—or, more likeley, a new form of Modus Tollens—on the basis of this theorem. However, we won't do that here. Stuff {{Edit|User:JMRyan|stuff}} yeilds: Stuff stuff Stuff {{Edit2|User:JMRyan|stuff}} yeilds: Stuff edit stuff {{If|A|B|C}} yeilds: {{ safesubst:p{{ safesubst:#ifA:B|1|2}}|C|}} {{If||B|C}} yeilds: C $\phi\,\!$      $\psi\,\!$ ## Stages Wikibook Development Stages Sparse text Developing text Maturing text Developed text Comprehensive text Red: Yellow: Green:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944286108016968, "perplexity": 3859.1856078614383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274994.48/warc/CC-MAIN-20160524002114-00137-ip-10-185-217-139.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/46566/is-the-statement-that-every-field-has-an-algebraic-closure-known-to-be-equivalen
# Is the statement that every field has an algebraic closure known to be equivalent to the ultrafilter lemma? The existence and uniqueness of algebraic closures is generally proven using Zorn's lemma. A quick Google search leads to a 1992 paper of Banaschewski, which I don't have access to, asserting that the proof only requires the ultrafilter lemma. Questions: • Is it known whether the two are equivalent in ZF? • Would anyone like to give a quick sketch of the construction assuming the ultrafilter lemma? I dislike the usual construction and am looking for others. - Every field has an algebraic closure is "Form 69" in consequences.emich.edu/conseq.htm , and it doesn't list any equivalent forms to it. –  Willie Wong Nov 19 '10 at 1:28 (BTW: credit where credit's due: I learned of that website from Andres mathoverflow.net/questions/45928/… ) –  Willie Wong Nov 19 '10 at 1:29 Not exactly what you asked, but there was some discussion on the Foundations of Mathematics mailing list about whether the existence of an algebraic closure of Q is equivalent to the assumption that a countable union of finite sets is countable. See this post by Harvey Friedman cs.nyu.edu/pipermail/fom/2006-May/010541.html and this post by Andreas Blass cs.nyu.edu/pipermail/fom/2006-May/010551.html pointing out some difficulties. –  Timothy Chow Nov 19 '10 at 2:10 Hi Timothy. Thanks for finding these posts, I thought they were more recent and couldn't find them. –  Andres Caicedo Nov 19 '10 at 2:18 Qiaochu, using the link I provided in my answer to this question, you find that this question is still open (or was, as of the mid 2000s, and I haven't heard of any recent results in this direction). (According to the site's notation, the existence of algebraic closures is form 69, the ultrafilter theorem is form 14, uniqueness of the algebraic closure (in case they exist) is form 233; these numbers can be found by entering appropriate phrases in the last entry form in the page linked to above.) It is known that uniqueness implies neither existence nor the ultrafilter theorem. It is open whether existence implies uniqueness or the ultrafilter theorem, and also whether (existence and uniqueness) implies the ultrafilter theorem. (Enter 14, 69, 233 in Table 1 in the link above for these implications/non-implications.) Jech's book on the axiom of choice should provide the proofs of the known implications and references, and the book by Howard-Rubin (besides updates past the publication date of Jech's book) provides references for the known non-implications. Here are some details on Banaschewski's paper: 1. First, lets see that the ultrafilter theorem can be used to prove uniqueness of algebraic closures, in case they exist. Let $K$ be a field, and let $E$ and $F$ be algebraic closures. We need to show that there is an isomorphism from $E$ onto $F$ fixing $K$ (pointwise). Following Banaschewski, denote by $E_u$ (resp., $F_u$) the splitting field of $u\in K[x]$ inside $E$ (resp., $F$); we are not requiring that $u$ be irreducible. We then have that if $u|v$ then $E_u\subseteq E_v$ and $F_u\subseteq F_v$. Also, since $E$ is an algebraic closure of $K$, we have $E=\bigcup_u E_u$, and similarly for $F$. Denote by $H_u$ the set of all isomorphisms from $E_u$ onto $F_u$ that fix $K$; it is standard that $H_u$ is finite and non-empty (no choice is needed here). If $u|v$, let $\varphi_{uv}:H_v\to H_u$ denote the restriction map; these maps are onto. Now set $H=\prod_{u\in K[x]} H_u$ and for $v|w$, let $$H_{vw}=\{(h_u)\in H\mid h_v=h_w\upharpoonright E_v\}.$$ Then the Ultrafilter theorem ensures that $H$ and the sets $H_{vw}$ are non-empty. This is because, in fact, Tychonoff for compact Hausdorff spaces follows from the Ultrafilter theorem, see for example the exercises in Chapter 2 of Jech's "The axiom of choice." Also, the sets $H_{vw}$ have the finite intersection property. They are closed in the product topology of $H$, where each $H_u$ is discrete. It then follows that the intersection of the $H_{vw}$ is non-empty. But each $(h_u)$ in this intersection determines a unique embedding $h:\bigcup_uE_u\to\bigcup_u F_u$, i.e., $h:E\to F$, which is onto and fixes $K$. 2. Existence follows from modifying Artin's classical proof. For each monic $u\in K[x]$ of degree $n\ge 2$, consider $n$ "indeterminates" $z_{u,1},\dots,z_{u,n}$ (distinct from each other, and for different values of $u$), let $Z$ be the set of all these indeterminates, and consider the polynomial ring $K[Z]$. Let $J$ be the ideal generated by all polynomials of the form $$a_{n-k}-(-1)^k\sum_{i_1\lt\dots\lt i_k}z_{u,i_1}\dots z_{u,i_k}$$ for all $u=a_0+a_1x+\dots+a_{n-1}x^{n-1}+x^n$ and all $k$ with $1\le k\le n$. The point is that any polynomial has a splitting field over $K$, and so for any finitely many polynomials there is a (finite) extension of $K$ where all admit zeroes. From this it follows by classical (and choice-free) arguments that $J$ is a proper ideal. We can then invoke the ultrafilter theorem, and let $P$ be any prime ideal extending $J$. Then $K[Z]/P$ is an integral domain. Its field of quotients $\hat K$ is an extension of $K$, and we can verify that in fact, it is an algebraic closure. This requires to note that, obviously, $\hat K/K$ is algebraic, and that, by definition of $J$, every non-constant polynomial in $K[x]$ split into linear factors in $\hat K$. But this suffices to ensure that $\hat K$ is algebraically closed by classical arguments (see for example Theorem 8.1 in Garling's "A course in Galois theory"). 3. The paper closes with an observation that is worth making: It follows from the ultrafilter theorem, and it is strictly weaker than it, that countable unions of finite sets are countable. This suffices to prove uniqueness of algebraic closures of countable fields, in particular, to prove the uniqueness of $\bar{\mathbb Q}$. - Who'd have thunk that just mentioning your name would summon an expert? :) –  Willie Wong Nov 19 '10 at 1:36 Thanks, Andres! I would really appreciate any details about Banaschewski's paper. –  Qiaochu Yuan Nov 19 '10 at 9:27 @Qiaochu: Send me an email if you want a copy of the paper. –  Andres Caicedo Nov 19 '10 at 22:45 @Andres: thanks for the details; it's more than enough. –  Qiaochu Yuan Nov 19 '10 at 23:05 As I mentioned in a comment to Eivind Dahl's answer, it seems that there is also an easy argument directly from the Compactness theorem of first order logic. Since you said you are looking for alternative constructions, let me expand on the idea here. The Compactness theorem asserts that if every finite subset of a first order theory has a model, then the whole theory has a model. Andres mentioned that this is equivalent to the Ultrafilter lemma. Existence. Let $F$ be a field. Let $T$ be the theory consisting of the field axioms, the atomic diagram of $F$ (which asserts all the equations and negated equations true in $F$, using constants for elements of $F$), plus the assertions that every polynomial over $F$ has a root. This last assertion is made separately as an assertion about each particular polynomial over $F$, using the constants in the language added for the elements of $F$. This theory is easily seen to be finitely satisfiable, since any finite subtheory mentions only finitely many polynomials of $F$, and we can satisfy it in a finite extension of $F$. Thus, by the Compactness theorem, the whole theory has a model $K$. If we take the collection of elements of $K$ that are algebraic over $F$, this will be algebraically closed. Uniqueness. Let $F$ be a field and let $E$ and $K$ be algebraic closures of it. Let $T$ be the theory consisting of the union of the atomic diagrams of $E$ and $K$, plus the field axioms. (Note that we haven't added any axioms saying that elements of $E$ and $K$ are distinct, and in the end these constants will be in effect melded together, providing the isomorphism.) If $T_0$ is a finite subtheory, then only finitely many elements of $E$ and $K$ appear. Those elements of $E$ and $K$ appear in some $F[\vec a]\subset E$ and $F[\vec b]\subset K$. We can embed both of these extensions of $F$ into a single finite extension $F[\vec u]$, which will satisfy all assertions in $T_0$. (Note, this embedding effectively decides a little piece of the isomorphism, by mapping some of the $\vec b$'s to some of the $\vec a$'s.) Thus, by Compactness, the whole theory is satisfiable. If $G$ is a model of $T$, then since $G$ interprets the atomic diagrams of $E$ and $K$, we get isomorphisms of $E$ and $K$ into subfields of $G$. These maps agree on $F$ and have a common range, which is the set of elements of $G$ that are algebraic over $F$. Thus, the composition is an isomorphism of $E$ and $K$. - Thanks! This argument is quite enlightening. –  Qiaochu Yuan Nov 20 '10 at 12:27 Ok, the idea for uniqueness here is very natural. Pretty argument. –  Andres Caicedo Nov 20 '10 at 15:44 That is really quite beautiful. –  Mike Shulman Nov 20 '10 at 18:00 Paolo Aluffi's book "Algebra: Chapter 0" mentions that compactness of first order logic is sufficient to prove this (p. 403). I don't know the ultrafilter lemma. - Hi Elvind, Compactness is in fact equivalent to the ultrafilter theorem. Does Aluffi give a reference, other than Banaschewski's paper? –  Andres Caicedo Nov 19 '10 at 22:42 Interesting! No, he just mentions it in a footnote, sorry. –  Eivind Dahl Nov 19 '10 at 22:45 Can't you get it fairly easily from the Compactness theorem, since for any field $F$ you can write down the axioms of what it means to be a field extension of $F$ in which every polynomial over $F$ has a root. It is finitely consistent because of the finite extensions of $F$. If there is a model of the full theory, then the collection of all algebraic elements over $F$ will be algebraically closed, no? –  Joel David Hamkins Nov 20 '10 at 0:17 @Joel: And uniqueness? –  Andres Caicedo Nov 20 '10 at 2:06 Andres, I added an answer explaining it. –  Joel David Hamkins Nov 20 '10 at 12:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596141576766968, "perplexity": 195.23398305783832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00200-ip-10-180-212-252.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/265168/cohomological-dimension-dimension-of-modules-and-arithmetic-rank
# Cohomological dimension, dimension of modules and arithmetic rank Let $R$ be a noetherian ring, $I$ an ideal of $R$ and $M$ a finitely generated $R$- module. I know two facts: first, dimension of $M$ (i.e. Krull dimension of $R/{\rm ann}(M)$) is greater than or equal to cohomological dimension of $M$ with respect to $I$, and second, arithmetic rank of $I$ (i.e. $\inf\{r\in \mathbb{N}_0 \mid \exists x_1, \cdots ,x_r \in R~\mbox{such that}~ \sqrt{\langle x_1, \dots,x_r\rangle}=\sqrt{I}\}$) is greater then or equal to cohomological dimension of $M$ with respect to $I$. I wonder when equality holds... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429401159286499, "perplexity": 131.28807884321674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096773.65/warc/CC-MAIN-20150627031816-00119-ip-10-179-60-89.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/198772/how-to-find-integer-solutions-for-x3-y2-0
# How to find integer solutions for $x^3 - y^2 = 0$? How can I find integer solutions for $x^3 - y^2 = 0$ ? In case that there are infinite number of solutions .How can we prove that ? and how to generate first few solutions ? - Edited to match the corrected question. HINT: If $x^3=y^2$, and $x$ is an integer, then $x$ must be a perfect square. (Why?) Thus, $x^3$ must be a sixth power. Conversely, if $x^3$ is a sixth power, it’s a perfect square, and you get a solution. - oh sorry I meant $x^3 - y^2 = 0$ , I edited the post –  Loers Antario Sep 18 '12 at 22:48 @Loers: I suspected that you did and was in the process of addressing that possibility when you made the edit. –  Brian M. Scott Sep 18 '12 at 22:50 aha , thanks , but what if y^2 was a quadratic equation in its standard form $ay^2+by+c$ , what can we do then ? –  Loers Antario Sep 18 '12 at 23:02 @Loers: Cry. :-) I don’t offhand know of a general technique for such equations, but it’s not my field at all, so there may be one. –  Brian M. Scott Sep 18 '12 at 23:07 The highest power of any prime dividing $x^3$ must be divisible by 3 and the highest power of any prime dividing $y^2$ must be divisible by 2. So the highest power of any prime dividing $x^3=y^2$ must be divisible by 6. This implies that $x^3=y^2$ can be written as $u^6$ for some integer $u$. It then immediately follows that $x=u^2$ and $y=u^3$ and conversely each such pair $x,y$ is indeed a solution to that equation, so that is exactly the solution set. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480391144752502, "perplexity": 159.23136245083273}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009968.66/warc/CC-MAIN-20141125155649-00220-ip-10-235-23-156.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3239496/lln-interpretation-of-high-dimensional-unit-ball-mass-distribution
# LLN interpretation of high-dimensional unit ball mass distribution This answer on Math Overflow points out that For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable. I can hardly make any sense of it. Anyway, here is my attempt: instead of addressing an $$N$$-dimensional unit ball, let's consider an $$N$$-dimensional unit cube first. Each point on that cube can be expressed with a coordinate consisting of $$N$$ numbers, i.e. $$(X_1, X_2, ..., X_N)$$, with $$X_i \stackrel{iid}{\sim} \text{Uniform}[-1, 1]$$. This independence is a result of the orthogonality of axes. It is trivial to show that $$E(X_i^2) = \frac{1}{3}$$. Applying the Law of Large Numbers, the (euclidean) distance from a point $$P$$ to $$\mathbb{O}$$ is then \begin{aligned} d(P, \mathbb{O}) &\stackrel{\text{def}}{=} \sqrt{X_1^2 + X_2^2 + ... + X_N^2} \\ E\left(d(P, \mathbb{O})^2\right) &= E\left(X_1^2 + X_2^2 + ... + X_N^2\right) \\ &\rightarrow \frac{N}{3} \end{aligned} Recall that we are interested in the unit ball, so we have to somehow calculate the distribution of $$d(P, \mathbb{O})^2 | d(P, \mathbb{O})^2 \le 1$$, but I'm lost here. My question is, how do you interpret the mass distribution of an $$N$$-dimensional unit ball with LLN? • +1 for raising the question. But you cannot interchange the expectation with a nonlinear operator. $E\left(d(P, \mathbb{O})\right) < \sqrt{\frac{N}{3}}$. – Hans May 27 at 18:57 • @Hans Thanks for pointing that out! I've edited the post to use $d^2$ instead. – nalzok May 28 at 2:24 • It seems to me LLN is not the right metaphor/whatever. The point is that if you have a large number of iid random variables uniformly distributed in ${-1,1}$, the probability that at least one of the random variables is within $\varepsilon$ of either -1 or 1 increases towards 1 as N goes to infinity. – pseudocydonia May 28 at 4:07 • @pseudocydonia I see your point, but how is this related to a high-dimensional unit ball? – nalzok May 28 at 5:38 • It applies most directly to the unit n-cube. – pseudocydonia May 28 at 6:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.994596004486084, "perplexity": 249.05994784561955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526337.45/warc/CC-MAIN-20190719182214-20190719204214-00269.warc.gz"}
http://math.stackexchange.com/questions/770235/variance-of-the-number-of-coin-toss-to-get-n-heads-in-row
# variance of the number of coin toss to get N heads in row The expected value of the number of coin toss to get N heads in a row is discussed here: How many flips of a fair coin does it take until you get N heads in a row? How can we find the variance? - The underlying stochastic recursion might help. Let $X_N$ denote the number of tosses needed to get $N$ heads in a row. At the time when one first gets $N$ heads in a row, either one gets a new head, and this yields $N+1$ heads in a row, or one gets a tail and then everything starts anew. Thus, for every $N\geqslant0$, one gets the key-identity $$\color{red}{X_{N+1}=X_N+1+B\bar X_{N+1},\quad B\sim\text{Bernoulli},\quad \bar X_{N+1}\sim X_{N+1}},$$ where $(B,\bar X_{N+1},X_N)$ is independent, $P(B=0)=P(B=1)=\frac12$, $\bar X_{N+1}$ is distributed like $X_{N+1}$, and the correct initialization is $\color{red}{X_0=0}$. This stochastic recursion fully encodes the distribution of every $X_N$ and it allows to compute recursively their characteristics. 1. Expectations Taking expectations of both sides of the key-identity, one gets $$E(X_{N+1})=E(X_N)+1+\tfrac12E(X_{N+1}),$$ hence $$E(X_{N+1})=2E(X_N)+2.$$ This is solved easily since $$E(X_{N+1})+2=2(E(X_N)+2),$$ hence $$E(X_N)=2^{N}(E(X_0)+2)-2=2\cdot(2^N-1).$$ 2. Variances The same representation yields the variances since $$X_{N+1}-E(X_{N+1})=X_N-E(X_N)+B\bar X_{N+1}-\tfrac12E(X_{N+1}),$$ hence $$\mathrm{var}(X_{N+1})=\mathrm{var}(X_N)+E(Z_N),$$ where $$Z_N=B\bar X_{N+1}^2-BE(X_{N+1})\bar X_{N+1}+\tfrac14E(X_{N+1})^2,$$ hence $$E(Z_N)=\tfrac12E(X_{N+1}^2)-\tfrac12E(X_{N+1})^2+\tfrac14E(X_{N+1})^2,$$ and $$\mathrm{var}(X_{N+1})=2\mathrm{var}(X_N)+\tfrac12E(X_{N+1})^2,$$ from which (I believe that) one gets (something similar to) $$\mathrm{var}(X_N)=2\cdot(2\cdot2^{2N}-2N\cdot2^N-1).$$ 3. Full distributions The key-identity also yields the full distribution of every $X_N$ since, for every $|s|\leqslant1$, $$E(s^{X_{N+1}})=E(s^{X_N})\cdot s\cdot E(s^{B\bar X_{N+1}}),$$ that is, $$E(s^{X_{N+1}})=E(s^{X_N})\cdot s\cdot \tfrac12(1+E(s^{X_{N+1}})),$$ hence $$E(s^{X_{N+1}})=\frac{s\cdot E(s^{X_N})}{2-s\cdot E(s^{X_N})}.$$ This can be rewritten as $$\frac1{E(s^{X_{N+1}})}-\frac{s}{2-s}=\frac2s\left(\frac1{E(s^{X_{N}})}-\frac{s}{2-s}\right).$$ Furthermore, $E(s^{X_0})=1$. Finally, $$E(s^{X_N})=\frac{(2-s)s^N}{2^{N+1}(1-s)+s^{N+1}}.$$ From this point, it is relatively straightforward to show that, for every $t\geqslant0$, $$\lim_{N\to\infty}E(\mathrm e^{-tX_N/2^N})=\frac1{1+2t},$$ which shows that $2^{-N}X_N$ converges in distribution to an exponential random variable of parameter $\frac12$, in symbols, $$\color{red}{\frac{X_N}{2^N}\stackrel{\text{dist.}}{\longrightarrow}2\cdot\Xi},\qquad\color{red}{\Xi\sim\mathcal E(1)}.$$ And, to fully complete this circle of ideas, note that $\Theta=2\cdot\Xi$ solves the identity $$\Theta\stackrel{\text{dist.}}{=}\tfrac12\cdot\Theta+B\cdot\bar\Theta,$$ with the obvious notations, and that the nondegenerate solutions of this identity are the exponential distributions. - At the moment I am not in the opportunity to provide a complete answer. But I suggest to look in the following direction: Let $T$ stand for the number of tosses needed to arrive at the first tail, and let $X$ stand for the number of tosses needed to arrive at $N$ heads on a row. $\mathbb{E}X$ is allready known from answers on the question you quoted , so to find $\text{Var}X=\mathbb{E}X^{2}-\left(\mathbb{E}X\right)^{2}$ it is enough to find $\mathbb{E}X^{2}$. $$\mathbb{E}X^{2}=\sum_{i=1}^{N}\mathbb{E}\left(X^{2}\mid T=i\right)P\left(T=i\right)+\mathbb{E}\left(X^{2}\mid T>N\right)P\left(T>N\right)$$ $$=\sum_{i=1}^{N}\mathbb{E}\left(i+X\right)^{2}2^{-i}+N^{2}2^{-N}$$ Making use of the expression for $\mathbb{E}X$ this allows you to find an expression for $\mathbb{E}X^2$. - This seems like a pretty complete answer to me. What, the computations needed to finish it? Not really a problem... –  Did Apr 26 at 18:31 To confess: I am just too lazy for that at the time. Next to that it is off course a good thing for the OP to do some work as well. –  drhab Apr 26 at 18:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9654381275177002, "perplexity": 130.8436932036718}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889314.41/warc/CC-MAIN-20140722025809-00191-ip-10-33-131-23.ec2.internal.warc.gz"}
http://forecasting.svetunkov.ru/en/2017/11/20/smooth-package-for-r-common-ground-part-ii-estimators/
# «smooth» package for R. Common ground. Part II. Estimators ### A bit about estimates of parameters Hi everyone! Today I want to tell you about parameters estimation of smooth functions. But before going into details, there are several things that I want to note. In this post we will discuss bias, efficiency and consistency of estimates of parameters, so I will use phrases like “efficient estimator”, implying that we are talking about some optimisation mechanism that gives efficient estimates of parameters. It is probably not obvious for people without statistical background to understand what the hell is going on and why we should care, so I decided to give a brief explanation. Although there are strict statistical definitions of the aforementioned terms (you can easily find them in Wikipedia or anywhere else), I do not want to copy-paste them here, because there are only a couple of important points worth mentioning in our context. So, let’s get started. Bias refers to the expected difference between the estimated value of parameter (on a specific sample) and the “true” one. Having unbiased estimates of parameters is important because they should lead to more accurate forecasts (at least in theory). For example, if the estimated parameter is equal to zero, while in fact it should be 0.5, then the model would not take the provided information into account correctly and as a result will produce less accurate point forecasts and incorrect prediction intervals. In inventory this may mean that we constantly order 100 units less than needed only because the parameter is lower than it should be. Efficiency means that if the sample size increases, then the estimated parameters will not change substantially, they will vary in a narrow range (variance of estimates will be small). In the case with inefficient estimates the increase of sample size from 50 to 51 observations may lead to the change of a parameter from 0.1 to, let’s say, 10. This is bad because the values of parameters usually influence both point forecasts and prediction intervals. As a result the inventory decision may differ radically from day to day. For example, we may decide that we urgently need 1000 units of product on Monday, and order it just to realise on Tuesday that we only need 100. Obviously this is an exaggeration, but no one wants to deal with such an erratic stocking policy, so we need to have efficient estimates of parameters. Consistency means that our estimates of parameters will get closer to the stable values (what statisticians would refer to as “true” values) with the increase of the sample size. This is important because in the opposite case estimates of parameters will diverge and become less and less realistic. This once again influences both point forecasts and prediction intervals, which will be less meaningful than they should have been. In a way consistency means that with the increase of the sample size the parameters will become more efficient and less biased. This in turn means that the more observations we have, the better. There is a prejudice in the world of practitioners that the situation in the market changes so fast that the old observations become useless very fast. As a result many companies just through away the old data. Although, in general the statement about the market changes is true, the forecasters tend to work with the models that take this into account (e.g. Exponential smoothing, ARIMA). These models adapt to the potential changes. So, we may benefit from the old data because it allows us getting more consistent estimates of parameters. Just keep in mind, that you can always remove the annoying bits of data but you can never un-throw away the data. Having clarified these points, we can proceed to the topic of today’s post. ### One-step-ahead estimators of smooth functions We already know that the default estimator used for smooth functions is Mean Squared Error (for one step ahead forecast). If the residuals are distributed normally / log-normally, then the minimum of MSE gives estimates that also maximise the respective likelihood function. As a result the estimates of parameters become nice: consistent and efficient. It is also known in statistics that minimum of MSE gives mean estimates of the parameters, which means that MSE also produces unbiased estimates of parameters (if the model is specified correctly and bla-bla-bla). This works very well, when we deal with symmetric distributions of random variables. But it may perform poorly otherwise. In this post we will use the series N1823 for our examples: Plot the data in order to see what we have: N1823 series The data seems to have slight multiplicative seasonality, which changes over the time, but it is hard to say for sure. Anyway, in order to simplify things, we will apply an ETS(A,A,N) model to this data, so we can see how the different estimators behave. We will withhold 18 observations as it is usually done for monthly data in M3. N1823 and ETS(A,A,N) with MSE Here’s the output: It is hard to make any reasonable conclusions from the graph and the output, but it seems that we slightly overforecast the data. At least the prediction interval covers all the values in the holdout. Relative MAE is equal to 1.047, which means that the model produced forecasts less accurate than Naive. Let’s have a look at the residuals: QQ-plot of the residuals from ETS(A,A,N) with MSE The residuals of this model do not look normal, a lot of empirical quantiles a far from the theoretical ones. If we conduct Shapiro-Wilk test, then we will have to reject the hypothesis of normality for the residuals on 5%: This may indicate that other estimators may do a better job. And there is a magical parameter cfType in the smooth functions which allows to estimate models differently. It controls what to use and how to use it. You can select the following estimators instead of MSE: • MAE – Mean Absolute Error: \label{eq:MAE} \text{MAE} = \frac{1}{T} \sum_{t=1}^T |e_{t+1}| The minimum of MAE gives median estimates of the parameters. MAE is considered to be a more robust estimator than MSE. If you have asymmetric distribution, give MAE a try. It gives consistent, but not efficient estimates of parameters. Asymptotically, if the distribution of the residuals is normal, the estimators of MAE converge to the estimators of MSE (which follows from the connection between mean and median of normal distribution). Let’s see what happens with the same model, on the same data when we use MAE: N1823 and ETS(A,A,N) with MAE There are several things to note from the graph and the output. First, the smoothing parameter alpha is smaller than in the previous case. Second, Relative MAE is smaller than one, which means that model in this case outperformed Naive. Comparing this value with the one from the previous model, we can conclude that MAE worked well as an estimator of parameters for this data. Finally, the graph shows that point forecasts go through the middle of the holdout sample, which is reflected with lower values of error measures. The residuals are still not normally distributed, but this is expected, because they won’t become normal just because we used a different estimator: QQ-plot of the residuals from ETS(A,A,N) with MAE • HAM – Half Absolute Moment: \label{eq:HAM} \text{HAM} = \frac{1}{T} \sum_{t=1}^T \sqrt{|e_{t+1}|} This is even more robust estimator than MAE. On count data its minimum corresponds to the mode of data. In case of continuous data the minimum of this estimator corresponds to something not yet well studied, between mode and median. The paper about this thing is currently in a draft stage, but you can already give it a try, if you want. This is also consistent, but not efficient estimator. The same example, the same model, but HAM as an estimator: N1823 and ETS(A,A,N) with HAM This estimator produced even more accurate forecasts in this example, forcing smoothing parameters to become close to zero. Note that the residuals standard deviation in case of HAM is larger than in case of MAE which in its turn is larger than in case of MSE. This means that one-step-ahead parametric and semiparametric prediction intervals will be wider in case of HAM than in case of MAE, than in case of MSE. However, taking that the smoothing parameters in the last model are close to zero, the multiple steps ahead prediction intervals of HAM may be narrower than the ones of MSE. Finally, it is worth noting that the optimisation of models using different estimators takes different time. MSE is the slowest, while HAM is the fastest estimator. I don’t have any detailed explanation of this, but this obviously happens because of the form of the cost functions surfaces. So if you are in a hurry and need to estimate something somehow, you can give HAM a try. Just remember that information criteria may become inapplicable in this case. ### Multiple-steps-ahead estimators of smooth functions While these three estimators above are calculated based on the one-step-ahead forecast error, the next three are based on multiple steps ahead estimators. They can be useful if you want to have a more stable and “conservative” model (a paper on this topic is currently in the final stage). Prior to v2.2.1 these estimators had different names, be aware! • MSE$$_h$$ – Mean Squared Error for h-steps ahead forecast: \label{eq:MSEh} \text{MSE}_h = \frac{1}{T} \sum_{t=1}^T e_{t+h}^2 The idea of this estimator is very simple: if you are interested in 5 steps ahead forecasts, then optimise over this horizon, not one-step-ahead. However, by using this estimator, we shrink the smoothing parameters towards zero, forcing the model to become closer to the deterministic and robust to outliers. This applies both to ETS and ARIMA, but the models behave slightly differently. The effect of shrinkage increases with the increase of $$h$$. The forecasts accuracy may increase for that specific horizon, but it almost surely will decrease for all the other horizons. Keep in mind that this is in general not efficient and biased estimator with a much slower convergence to the true value than the one-step-ahead estimators. This estimator is eventually consistent, but it may need a very large sample to become one. This means that this estimator may result in values of parameters very close to zero even if they are not really needed for the data. I personally would advise using this thing on large samples (for instance, on high frequency data). By the way, Nikos Kourentzes, Rebecca Killick and I are working on a paper on that topic, so stay tuned. Here’s what happens when we use this estimator: N1823 and ETS(A,A,N) with MSEh As you can see, the smoothing parameters are now equal to zero, which gives us the straight line going through all the data. If we had 1008 observations instead of 108, the parameters would not be shrunk to zero, because the model would need to adapt to changes in order to minimise the respective cost function. • TMSE – Trace Mean Squared Error: The need for having a specific 5 steps ahead forecast is not common, so it makes sense to work with something that deals with one to h steps ahead: \label{TMSE} \text{TMSE} = \sum_{j=1}^h \frac{1}{T} \sum_{t=1}^T e_{t+j}^2 This estimator is more reasonable than MSE$$_h$$ because it takes into account all the errors from one to h-steps-ahead forecasts. This is a desired behaviour in inventory management, because we are not so much interested in how much we will sell tomorrow or next Monday, but rather how much we will sell starting from tomorrow till the next Monday. However, the variance of forecast errors h-steps-ahead is usually larger than the variance of one-step-ahead errors (because of the increasing uncertainty), which leads to the effect of “masking”: the latter is hidden behind the former. As a result if we use TMSE as the estimator, the final values are seriously influenced by the long term errors than the short term ones (see Taieb and Atiya, 2015 paper). This estimator is not recommended if short-term forecasts are more important than long term ones. Plus, this is still less efficient and more biased estimator than one-step-ahead estimators, with slow convergence to the true values, similar to MSE$$_h$$, but slightly better. This is what happens in our example: N1823 and ETS(A,N,N) with TMSE Comparing the model estimated using TMSE with the same one estimated using MSE and MSE$$_h$$, it is worth noting that the smoothing parameters in this model are greater than in case of MSE$$_h$$, but less than in case of MSE. This demonstrates that there is a shrinkage effect in TMSE, forcing the parameters towards zero, but the inclusion of one-step-ahead errors makes model slightly more flexible than in case of MSE$$_h$$. Still, it is advised to use this estimator on large samples, where the estimates of parameters become more efficient and less biased. • GTMSE – Geometric Trace Mean Squared Error: This is similar to TMSE, but derived from the so called Trace Forecast Likelihood (which I may discuss at some point in one of the future posts). The idea here is to take logarithms of each MSE$$_j$$ and then sum them up: \label{eq:GTMSE} \text{GTMSE} = \sum_{j=1}^h \log \left( \frac{1}{T} \sum_{t=1}^T e_{t+j}^2 \right) Logarithms make variances of errors on several steps ahead closer to each other. For example, if the variance of one-step-ahead error is equal to 100 and the variance of 10 steps ahead error is equal to 1000, then their logarithms will be 4.6 and 6.9 respectively. As a result when GTMSE is used as an estimator, the model will take into account both short and long term errors. So this is a more balanced estimator of parameters than MSE$$_h$$ and TMSE. This estimator is more efficient than both TMSE and MSE$$_j$$ because of the log-scale and converges to true values faster than the previous two, but still can be biased on small samples. N1823 and ETS(A,A,N) with GTMSE In our example GTMSE shrinks both smoothing parameters towards zero and makes the model deterministic, which corresponds to MSE$$_h$$. However the initial values are slightly different, which leads to slightly different forecasts. Once again, it is advised to use this estimator on large samples. Keep in mind that all those multiple steps ahead estimators take more time for the calculation, because the model needs to produce h-steps-ahead forecasts from each observation in sample. • Analytical multiple steps ahead estimators. There is also a non-documented feature in smooth functions (currently available only for pure additive models) – analytical versions of multiple steps ahead estimators. In order to use it, we need to add “a” in front of the desired estimator: aMSE$$_h$$, aTMSE, aGTMSE. In this case only one-step-ahead forecast error is produced and after that the structure of the applied state-space model is used in order to reconstruct multiple steps ahead estimators. This feature is useful if you want to use the multiple steps ahead estimator on small samples, where the multi-steps errors cannot be calculated appropriately. It is also useful in cases of large samples, when the time of estimation is important. These estimators have similar properties to their empirical counterparts, but work faster and are based on asymptotic properties. Here is an example of analytical MSE$$_h$$ estimator: N1823 and ETS(A,A,N) with aMSEh The resulting smoothing parameters are shrunk towards zero, similar to MSE$$_h$$, but the initial values are slightly different, which leads to different forecasts. Note that the time elapsed in this case is 0.11 seconds instead of 0.24 as in MSE$$_h$$. The difference in time may increase with the increase of sample size and forecasting horizon. • Similar to MSE, there are empirical multi-step MAE and HAM in smooth functions (e.g. MAE$$_h$$ and THAM). However, they are currently implemented mainly “because I can” and for fun, so I cannot give you any recommendations about them. • Starting from v2.4.0, a new estimator was introduced, “Mean Squared Cumulative Error” – MSCE, which may be useful in cases, when the cumulative demand is of the interest rather than point or trace ones. ### Conclusions Now that we discussed all the possible estimators that you can use with smooth, you are most probably confused and completely lost. The question that may naturally appear after you have read this post is “What should I do?” Frankly speaking, I cannot give you appropriate answer and a set of universal recommendations, because this is still under-researched problem. However, I have some advice. First, Nikos Kourentzes and Juan Ramon Trapero found that in case of high frequency data (they used solar irradiation data) using MSE$$_h$$ and TMSE leads to the increase in forecasting accuracy in comparison with the conventional MSE. However in order to achieve good accuracy in case of MSE$$_h$$, you need to estimate $$h$$ separate models, while with TMSE you need to estimate only one. So, TMSE is faster than MSE$$_h$$, but at the same time leads to at least as accurate forecasts as in case of MSE$$_h$$ for all the steps from 1 to h. Second, if you have asymmetrically distributed residuals in the model after using MSE, give MAE or HAM a try – they may improve your model and its accuracy. Third, analytical counterparts of multi-step estimators can be useful in one of the two situations: 1. When you deal with very large samples (e.g. high frequency data), want to use advanced estimation methods, but want them to work fast. 2. When you work with small sample, but want to use the properties of these estimators anyway. Finally, don’t use MSE$$_h$$, TMSE and GTMSE if you are interested in the values of parameters of models – they will almost surely be inefficient and biased. This applies to both ETS and ARIMA models, which will become close to their deterministic counterparts in this case. Use conventional MSE instead.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8405588269233704, "perplexity": 615.3913469745623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215222.74/warc/CC-MAIN-20180819145405-20180819165405-00452.warc.gz"}
http://arufu.net/alf-no-ikikata/50xuqdwz3jbzctv1prf8ir4kzmjq7h
Metabones vs Fotodiox Canon EF to NEX E adaptors Below you can see a quick RAW to JPEG conversion (but otherwise untouched) of the sample shots from my test with the Canon 135mm F2 L lens. (Unfortunately the Canon 70-200mm did not stop full open when changing lens like the 135mm lens did) Both shots where from a tripod, and the distance from the tripod to the subject did not move in between shots. The Metabones speedbooster show quite a bit of vingetting, as previously observed in this article here. But otherwise a nice image quality. With the Metabones Speedbooster which is a focal length reducer the lens in first reduced by 0.71 and then multiplied at the Nex crop sensor by 1.5 making the 135mm into a 143mm f1.4 (but shot at f2 for comparison with the other adaptor) Unfortunately the Fotodiox adaptor is passive, and so the Exif data is not carried over. But as it is passive it also ensures that the lens was shot wide open at f2. And with only the passive Fotodiox adaptor and the Nex crop sensor the lens becomes a 135mm x 1.5 = 202mm f2 Canon L lens. And with barely any vingetting.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285177946090698, "perplexity": 4737.114396666644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00489.warc.gz"}
http://self.gutenberg.org/articles/eng/Central_limit_theorem
#jsDisabledContent { display:none; } My Account | Register | Help # Central limit theorem Article Id: WHEBN0000039406 Reproduction Date: Title: Central limit theorem Author: World Heritage Encyclopedia Language: English Subject: Collection: Publisher: World Heritage Encyclopedia Publication Date: ### Central limit theorem In probability theory, the central limit theorem (CLT) states that, given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution.[1][2] That is, suppose that a sample is obtained containing a large number of observations, each observation being randomly generated in a way that does not depend on the values of the other observations, and that the arithmetic average of the observed values is computed. If this procedure is performed many times, the central limit theorem says that the computed values of the average will be distributed according to the normal distribution (commonly known as a "bell curve"). The central limit theorem has a number of variants. In its common form, the random variables must be identically distributed. In variants, convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations, given that they comply with certain conditions. In more general probability theory, a central limit theorem is any of a set of weak-convergence theorems. They all express the fact that a sum of many independent and identically distributed (i.i.d.) random variables, or alternatively, random variables with specific types of dependence, will tend to be distributed according to one of a small set of attractor distributions. When the variance of the i.i.d. variables is finite, the attractor distribution is the normal distribution. In contrast, the sum of a number of i.i.d. random variables with power law tail distributions decreasing as |x|−α−1 where 0 < α < 2 (and therefore having infinite variance) will tend to an alpha-stable distribution with stability parameter (or index of stability) of α as the number of variables grows.[3] ## Contents • Central limit theorems for independent sequences 1 • Classical CLT 1.1 • Lyapunov CLT 1.2 • Lindeberg CLT 1.3 • Multidimensional CLT 1.4 • Central limit theorems for dependent processes 2 • CLT under weak dependence 2.1 • Martingale difference CLT 2.2 • Remarks 3 • Proof of classical CLT 3.1 • Convergence to the limit 3.2 • Relation to the law of large numbers 3.3 • Alternative statements of the theorem 3.4 • Density functions 3.4.1 • Characteristic functions 3.4.2 • Extensions to the theorem 4 • Products of positive random variables 4.1 • Beyond the classical framework 5 • Convex body 5.1 • Lacunary trigonometric series 5.2 • Gaussian polytopes 5.3 • Linear functions of orthogonal matrices 5.4 • Subsequences 5.5 • Tsallis statistics 5.6 • Random walk on a crystal lattice 5.7 • Applications and examples 6 • Simple example 6.1 • Real applications 6.2 • Regression 7 • Other illustrations 7.1 • History 8 • Notes 10 • References 11 ## Central limit theorems for independent sequences A distribution being "smoothed out" by summation, showing original density of distribution and three subsequent summations; see Illustration of the central limit theorem for further details. ### Classical CLT Let {X1, ..., Xn} be a random sample of size n — that is, a sequence of independent and identically distributed random variables drawn from distributions of expected values given by µ and finite variances given by σ2. Suppose we are interested in the sample average S_n := \frac{X_1+\cdots+X_n}{n} of these random variables. By the law of large numbers, the sample averages converge in probability and almost surely to the expected value µ as n → ∞. The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number µ during this convergence. More precisely, it states that as n gets larger, the distribution of the difference between the sample average Sn and its limit µ, when multiplied by the factor n (that is n(Sn − µ)), approximates the normal distribution with mean 0 and variance σ2. For large enough n, the distribution of Sn is close to the normal distribution with mean µ and variance σ2/n. The usefulness of the theorem is that the distribution of n(Sn − µ) approaches normality regardless of the shape of the distribution of the individual Xi’s. Formally, the theorem can be stated as follows: Lindeberg–Lévy CLT. Suppose {X1, X2, ...} is a sequence of i.i.d. random variables with E[Xi] = µ and Var[Xi] = σ2 < ∞. Then as n approaches infinity, the random variables n(Sn − µ) converge in distribution to a normal N(0, σ2):[4] \sqrt{n}\bigg(\bigg(\frac{1}{n}\sum_{i=1}^n X_i\bigg) - \mu\bigg)\ \xrightarrow{d}\ N(0,\;\sigma^2). In the case σ > 0, convergence in distribution means that the cumulative distribution functions of n(Sn − µ) converge pointwise to the cdf of the N(0, σ2) distribution: for every real number z, \lim_{n\to\infty} \Pr[\sqrt{n}(S_n-\mu) \le z] = \Phi(z/\sigma), where Φ(x) is the standard normal cdf evaluated at x. Note that the convergence is uniform in z in the sense that \lim_{n\to\infty}\sup_{z\in{\mathbf R}}\bigl|\Pr[\sqrt{n}(S_n-\mu) \le z] - \Phi(z/\sigma)\bigr| = 0, where sup denotes the least upper bound (or supremum) of the set.[5] ### Lyapunov CLT The theorem is named after Russian mathematician Aleksandr Lyapunov. In this variant of the central limit theorem the random variables Xi have to be independent, but not necessarily identically distributed. The theorem also requires that random variables |Xi| have moments of some order (2 + δ), and that the rate of growth of these moments is limited by the Lyapunov condition given below. Lyapunov CLT.[6] Suppose {X1, X2, ...} is a sequence of independent random variables, each with finite expected value μi and variance σ 2 i . Define s_n^2 = \sum_{i=1}^n \sigma_i^2 If for some δ > 0, the Lyapunov’s condition \lim_{n\to\infty} \frac{1}{s_{n}^{2+\delta}} \sum_{i=1}^{n} \operatorname{E}\big[\,|X_{i} - \mu_{i}|^{2+\delta}\,\big] = 0 is satisfied, then a sum of (Xi − μi)/sn converges in distribution to a standard normal random variable, as n goes to infinity: \frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \ \xrightarrow{d}\ \mathcal{N}(0,\;1). In practice it is usually easiest to check the Lyapunov’s condition for δ = 1. If a sequence of random variables satisfies Lyapunov’s condition, then it also satisfies Lindeberg’s condition. The converse implication, however, does not hold. ### Lindeberg CLT In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920). Suppose that for every ε > 0 \lim_{n \to \infty} \frac{1}{s_n^2}\sum_{i = 1}^{n} \operatorname{E}\big[(X_i - \mu_i)^2 \cdot \mathbf{1}_{\{ | X_i - \mu_i | > \varepsilon s_n \}} \big] = 0 where 1{...} is the indicator function. Then the distribution of the standardized sums \frac{1}{s_n}\sum_{i = 1}^n \left( X_i - \mu_i \right) converges towards the standard normal distribution N(0,1). ### Multidimensional CLT Proofs that use characteristic functions can be extended to cases where each individual Xi is a random vector in Rk, with mean vector μ = E(Xi) and covariance matrix Σ (amongst the components of the vector), and these random vectors are independent and identically distributed. Summation of these vectors is being done componentwise. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution.[7] Let \mathbf{X_i}=\begin{bmatrix} X_{i(1)} \\ \vdots \\ X_{i(k)} \end{bmatrix} be the k-vector. The bold in Xi means that it is a random vector, not a random (univariate) variable. Then the sum of the random vectors will be \begin{bmatrix} X_{1(1)} \\ \vdots \\ X_{1(k)} \end{bmatrix}+\begin{bmatrix} X_{2(1)} \\ \vdots \\ X_{2(k)} \end{bmatrix}+\cdots+\begin{bmatrix} X_{n(1)} \\ \vdots \\ X_{n(k)} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} \left [ X_{i(1)} \right ] \\ \vdots \\ \sum_{i=1}^{n} \left [ X_{i(k)} \right ] \end{bmatrix} = \sum_{i=1}^{n} \mathbf{X_i} and the average is \frac{1}{n} \sum_{i=1}^{n} \mathbf{X_i}= \frac{1}{n}\begin{bmatrix} \sum_{i=1}^{n} X_{i(1)} \\ \vdots \\ \sum_{i=1}^{n} X_{i(k)} \end{bmatrix} = \begin{bmatrix} \bar X_{i(1)} \\ \vdots \\ \bar X_{i(k)} \end{bmatrix}=\mathbf{\bar X_n} and therefore \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left [\mathbf{X_i} - E\left ( X_i\right ) \right ]=\frac{1}{\sqrt{n}}\sum_{i=1}^{n} ( \mathbf{X_i} - \mu ) = \sqrt{n}\left(\mathbf{\overline{X}}_n - \mu\right) . The multivariate central limit theorem states that \sqrt{n}\left(\mathbf{\overline{X}}_n - \mu\right)\ \stackrel{D}{\rightarrow}\ \mathcal{N}_k(0,\Sigma) where the covariance matrix Σ is equal to \Sigma=\begin{bmatrix} {Var \left (X_{1(1)} \right)} & {Cov \left (X_{1(1)},X_{1(2)} \right)} & Cov \left (X_{1(1)},X_{1(3)} \right) & \cdots & Cov \left (X_{1(1)},X_{1(k)} \right) \\ {Cov \left (X_{1(2)},X_{1(1)} \right)} & {Var \left (X_{1(2)} \right)} & {Cov \left(X_{1(2)},X_{1(3)} \right)} & \cdots & Cov \left(X_{1(2)},X_{1(k)} \right) \\ Cov \left (X_{1(3)},X_{1(1)} \right) & {Cov \left (X_{1(3)},X_{1(2)} \right)} & Var \left (X_{1(3)} \right) & \cdots & Cov \left (X_{1(3)},X_{1(k)} \right) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ Cov \left (X_{1(k)},X_{1(1)} \right) & Cov \left (X_{1(k)},X_{1(2)} \right) & Cov \left (X_{1(k)},X_{1(3)} \right) & \cdots & Var \left (X_{1(k)} \right) \\ \end{bmatrix}. ## Central limit theorems for dependent processes ### CLT under weak dependence A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing) defined by α(n) → 0 where α(n) is so-called strong mixing coefficient. A simplified formulation of the central limit theorem under strong mixing is:[8] Theorem. Suppose that X1, X2, ... is stationary and α-mixing with αn = O(n−5) and that E(Xn) = 0 and E(Xn2) < ∞. Denote Sn = X1 + ... + Xn, then the limit \sigma^2 = \lim_n \frac{E(S_n^2)}{n} exists, and if σ ≠ 0 then S_n / (\sigma \sqrt n) converges in distribution to N(0, 1). In fact, \sigma^2 = E(X_1^2) + 2 \sum_{k=1}^{\infty} E(X_1 X_{1+k}), where the series converges absolutely. The assumption σ ≠ 0 cannot be omitted, since the asymptotic normality fails for Xn = YnYn−1 where Yn are another stationary sequence. There is a stronger version of the theorem:[9] the assumption E(Xn12) < ∞ is replaced with E(|Xn|2 + δ) < ∞, and the assumption αn = O(n−5) is replaced with \sum_n \alpha_n^{\frac\delta{2(2+\delta)}} < \infty. Existence of such δ > 0 ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see (Bradley 2005). ### Martingale difference CLT Theorem. Let a martingale Mn satisfy • \frac1n \sum_{k=1}^n \mathrm{E} ((M_k-M_{k-1})^2 | M_1,\dots,M_{k-1}) \to 1 in probability as n tends to infinity, • for every ε > 0, \frac1n \sum_{k=1}^n \mathrm{E} \Big( (M_k-M_{k-1})^2; |M_k-M_{k-1}| > \varepsilon \sqrt n \Big) \to 0 as n tends to infinity, then M_n / \sqrt n converges in distribution to N(0,1) as n → ∞.[10][11] Caution: The restricted expectation E(X; A) should not be confused with the conditional expectation E(X|A) = E(X; A)/P(A). ## Remarks ### Proof of classical CLT For a theorem of such fundamental importance to statistics and applied probability, the central limit theorem has a remarkably simple proof using characteristic functions. It is similar to the proof of a (weak) law of large numbers. For any random variable, Y, with zero mean and a unit variance (var(Y) = 1), the characteristic function of Y is, by Taylor's theorem, \varphi_Y(t) = 1 - {t^2 \over 2} + o(t^2), \quad t \rightarrow 0 where o (t2) is "little o notation" for some function of t that goes to zero more rapidly than t2. Letting Yi be (Xi − μ)/σ, the standardized value of Xi, it is easy to see that the standardized mean of the observations X1, X2, ..., Xn is Z_n = \frac{n\overline{X}_n-n\mu}{\sigma \sqrt{n}} =\sum_{i=1}^n {Y_i \over \sqrt{n}} By simple properties of characteristic functions, the characteristic function of the sum is: \varphi_{Z_n} =\varphi_{\sum_{i=1}^n {Y_i \over \sqrt{n}}}\left(t\right) = \varphi_{Y_1} \left(t / \sqrt{n} \right) \cdot \varphi_{Y_2} \left(t / \sqrt{n} \right)\cdots \varphi_{Y_n} \left(t / \sqrt{n} \right) = \left[\varphi_Y\left({t \over \sqrt{n}}\right)\right]^n so that, by the limit of the exponential function ( ex= lim(1+x/n)n) the characteristic function of Zn is \left[\varphi_Y\left({t \over \sqrt{n}}\right)\right]^n = \left[ 1 - {t^2 \over 2n} + o\left({t^2 \over n}\right) \right]^n \, \rightarrow \, e^{-t^2/2}, \quad n \rightarrow \infty. But this limit is just the characteristic function of a standard normal distribution N(0, 1), and the central limit theorem follows from the Lévy continuity theorem, which confirms that the convergence of characteristic functions implies convergence in distribution. ### Convergence to the limit The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. If the third central moment E((X1 − μ)3) exists and is finite, then the above convergence is uniform and the speed of convergence is at least on the order of 1/n1/2 (see Berry-Esseen theorem). Stein's method[12] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics.[13] The convergence to the normal distribution is monotonic, in the sense that the entropy of Zn increases monotonically to that of the normal distribution.[14] The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realisations of the sum of n independent identical discrete variables, the curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity, this relation is known as de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values. ### Relation to the law of large numbers The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behaviour of Sn as n approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions. Suppose we have an asymptotic expansion of f(n): f(n)= a_1 \varphi_{1}(n)+a_2 \varphi_{2}(n)+O(\varphi_{3}(n)) \qquad (n \rightarrow \infty). Dividing both parts by φ1(n) and taking the limit will produce a1, the coefficient of the highest-order term in the expansion, which represents the rate at which f(n) changes in its leading term. \lim_{n\to\infty}\frac{f(n)}{\varphi_{1}(n)}=a_1. Informally, one can say: "f(n) grows approximately as a1 φ1(n)". Taking the difference between f(n) and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about f(n): \lim_{n\to\infty}\frac{f(n)-a_1 \varphi_{1}(n)}{\varphi_{2}(n)}=a_2 . Here one can say that the difference between the function and its approximation grows approximately as a2 φ2(n). The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself. Informally, something along these lines is happening when the sum, Sn, of independent identically distributed random variables, X1, ..., Xn, is studied in classical probability theory. If each Xi has finite mean μ, then by the law of large numbers, Sn/n → μ.[15] If in addition each Xi has finite variance σ2, then by the central limit theorem, \frac{S_n-n\mu}{\sqrt{n}} \rightarrow \xi , where ξ is distributed as N(0, σ2). This provides values of the first two constants in the informal expansion S_n \approx \mu n+\xi \sqrt{n}. \, In the case where the Xi's do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors: \frac{S_n-a_n}{b_n} \rightarrow \Xi, or informally S_n \approx a_n+\Xi b_n. \, Distributions Ξ which can arise in this way are called stable.[16] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor bn may be proportional to nc, for any c ≥ 1/2; it may also be multiplied by a slowly varying function of n.[17][18] The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem. Specifically it says that the normalizing function \sqrt{n\log\log n} intermediate in size between n of the law of large numbers and √n of the central limit theorem provides a non-trivial limiting behavior. ### Alternative statements of the theorem #### Density functions The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See,[19] Chapter 7 for a particular local limit theorem for sums of i.i.d. random variables. #### Characteristic functions Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. However, to state this more precisely, an appropriate scaling factor needs to be applied to the argument of the characteristic function. An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform. ## Extensions to the theorem ### Products of positive random variables The logarithm of a product is simply the sum of the logarithms of the factors. Therefore when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable.[20] ## Beyond the classical framework Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now. ### Convex body Theorem. There exists a sequence εn ↓ 0 for which the following holds. Let n ≥ 1, and let random variables X1, ..., Xn have a log-concave joint density f such that f(x1, ..., xn) = f(|x1|, ..., |xn|) for all x1, ..., xn, and E(Xk2) = 1 for all k = 1, ..., n. Then the distribution of \frac{X_1+\cdots+X_n}{\sqrt n} is εn-close to N(0, 1) in the total variation distance.[21] These two εn-close distributions have densities (in fact, log-concave densities), thus, the total variance distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence. An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies". Another example: f(x1, …, xn) = const · exp( − (|x1|α + … + |xn|α)β) where α > 1 and αβ > 1. If β = 1 then f(x1, …, xn) factorizes into const · exp ( − |x1|α)…exp( − |xn|α), which means independence of X1, …, Xn. In general, however, they are dependent. The condition f(x1, …, xn) = f(|x1|, …, |xn|) ensures that X1, …, Xn are of zero mean and uncorrelated; still, they need not be independent, nor even pairwise independent. By the way, pairwise independence cannot replace independence in the classical central limit theorem.[22] Here is a Berry-Esseen type result. Theorem. Let X1, …, Xn satisfy the assumptions of the previous theorem, then [23] \bigg| \mathbb{P} \Big( a \le \frac{ X_1+\dots+X_n }{ \sqrt n } \le b \Big) - \frac1{\sqrt{2\pi}} \int_a^b \mathrm{e}^{-t^2/2} \, \mathrm{d} t \bigg| \le \frac C n for all a < b; here C is a universal (absolute) constant. Moreover, for every c1, …, cnR such that c12 + … + cn2 = 1, \bigg| \mathbb{P} ( a \le c_1 X_1+\dots+c_n X_n \le b ) - \frac1{\sqrt{2\pi}} \int_a^b \mathrm{e}^{-t^2/2} \, \mathrm{d} t \bigg| \le C ( c_1^4+\dots+c_n^4 ). The distribution of (X_1+\dots+X_n)/\sqrt n need not be approximately normal (in fact, it can be uniform).[24] However, the distribution of c1X1 + … + cnXn is close to N(0, 1) (in the total variation distance) for most of vectors (c1, …, cn) according to the uniform distribution on the sphere c12 + … + cn2 = 1. ### Lacunary trigonometric series Theorem (SalemZygmund). Let U be a random variable distributed uniformly on (0, 2π), and Xk = rk cos(nkU + ak), where • nk satisfy the lacunarity condition: there exists q > 1 such that nk+1qnk for all k, • rk are such that r_1^2 + r_2^2 + \cdots = \infty \text{ and } \frac{ r_k^2 }{ r_1^2+\cdots+r_k^2 } \to 0, • 0 ≤ ak < 2π. Then[25][26] \frac{ X_1+\cdots+X_k }{ \sqrt{r_1^2+\cdots+r_k^2} } converges in distribution to N(0, 1/2). ### Gaussian polytopes Theorem Let A1, ..., An be independent random points on the plane R2 each having the two-dimensional standard normal distribution. Let Kn be the convex hull of these points, and Xn the area of Kn Then[27] \frac{ X_n - \mathrm{E} X_n }{ \sqrt{\operatorname{Var} X_n} } converges in distribution to N(0, 1) as n tends to infinity. The same holds in all dimensions (2, 3, ...). The polytope Kn is called Gaussian random polytope. A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions.[28] ### Linear functions of orthogonal matrices A linear function of a matrix M is a linear combination of its elements (with given coefficients), M ↦ tr(AM) where A is the matrix of the coefficients; see Trace (linear algebra)#Inner product. A random orthogonal matrix is said to be distributed uniformly, if its distribution is the normalized Haar measure on the orthogonal group O(n, R); see Rotation matrix#Uniform random rotation matrices. Theorem. Let M be a random orthogonal n × n matrix distributed uniformly, and A a fixed n × n matrix such that tr(AA*) = n, and let X = tr(AM). Then[29] the distribution of X is close to N(0, 1) in the total variation metric up to 23/(n−1). ### Subsequences Theorem. Let random variables X1, X2, … ∈ L2(Ω) be such that Xn → 0 weakly in L2(Ω) and Xn2 → 1 weakly in L1(Ω). Then there exist integers n1 < n2 < … such that ( X_{n_1}+\cdots+X_{n_k} ) / \sqrt k converges in distribution to N(0, 1) as k tends to infinity.[30] ### Tsallis statistics A generalization of the classical central limit theorem to the context of Tsallis statistics has been described by Umarov, Tsallis and Steinberg[31] in which the independence constraint for the i.i.d. variables is relaxed to an extent defined by the q parameter, with independence being recovered as q->1. In analogy to the classical central limit theorem, such random variables with fixed mean and variance tend towards the q-Gaussian distribution, which maximizes the Tsallis entropy under these constraints. Umarov, Tsallis, Gell-Mann and Steinberg have defined similar generalizations of all symmetric alpha-stable distributions, and have formulated a number of conjectures regarding their relevance to an even more general Central limit theorem.[32] ### Random walk on a crystal lattice The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for design of crystal structures. [33][34] ## Applications and examples ### Simple example Comparison of probability density functions, p(k) for the sum of n fair 6-sided dice to show their convergence to a normal distribution with increasing n, in accordance to the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve). A simple example of the central limit theorem is rolling a large number of identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments. This figure demonstrates the central limit theorem. The sample means are generated using a random number generator, which draws numbers between 1 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the chi-squared values that quantify the goodness of the fit (the fit is good if the reduced chi-squared value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (~50) and the mean sample standard deviation divided by the square root of the sample size (~28.87/n), which is called the standard deviation of the mean (since it refers to the spread of sample means). ### Real applications A histogram plot of monthly accidental deaths in the US, between 1973 and 1978 exhibits normality, due to the central limit theorem Published literature contains a number of useful and interesting examples and applications relating to the central limit theorem.[35] One source[36] states the following examples: • The probability distribution for total distance covered in a random walk (biased or unbiased) will tend toward a normal distribution. • Flipping a large number of coins will result in a normal distribution for the total number of heads (or equivalently total number of tails). From another viewpoint, the central limit theorem explains the common appearance of the "Bell Curve" in density estimates applied to real world data. In cases like electronic noise, examination grades, and so on, we can often regard a single measured value as the weighted average of a large number of small effects. Using generalisations of the central limit theorem, we can then see that this would often (though not always) produce a final distribution that is approximately normal. In general, the more a measurement is like the sum of independent variables with equal influence on the result, the more normality it exhibits. This justifies the common use of this distribution to stand in for the effects of unobserved variables in models like the linear model. ## Regression Regression analysis and in particular ordinary least squares specifies that a dependent variable depends according to some function upon one or more independent variables, with an additive error term. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of a large number of independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be assumed to be normally distributed. ### Other illustrations Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.[37] ## History Tijms writes:[38] The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician Pierre-Simon Laplace rescued it from obscurity in his monumental work Théorie Analytique des Probabilités, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory. Sir Francis Galton described the Central Limit Theorem as:[39] I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshaled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along. The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by • Hazewinkel, Michiel, ed. (2001), "Central limit theorem", • Animated examples of the CLT • Central Limit Theorem interactive simulation to experiment with various parameters • CLT in NetLogo (Connected Probability — ProbLab) interactive simulation w/ a variety of modifiable parameters • General Central Limit Theorem Activity & corresponding SOCR CLT Applet (Select the Sampling Distribution CLT Experiment from the drop-down list of SOCR Experiments) • Generate sampling distributions in Excel Specify arbitrary population, sample size, and sample statistic. • MIT OpenCourseWare Lecture 18.440 Probability and Random Variables, Spring 2011, Scott Sheffield Another proof. Retrieved 2012-04-08. • CAUSEweb.org is a site with many resources for teaching statistics including the Central Limit Theorem • The Central Limit Theorem by Chris Boucher, Wolfram Demonstrations Project. • Weisstein, Eric W., "Central Limit Theorem", MathWorld. • Animations for the Central Limit Theorem by Yihui Xie using the R package animation • Teaching demonstrations of the CLT: clt.examp function in Greg Snow (2012). TeachingDemos: Demonstrations for teaching and learning. R package version 2.8..
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942505359649658, "perplexity": 815.985594966888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540504338.31/warc/CC-MAIN-20191208021121-20191208045121-00222.warc.gz"}
https://www.originlab.com/doc/en/Origin-Help/MSmooth-Dialog
# 18.9.1 The Matrix Smoothing Dialog Box Input Matrix Specify the matrix that contains the input data. The number of column cells to be averaged†. The number of row cells to be averaged†. Specify the destination matrix to output the smoothed data. †For information on methods, see these topics:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730265498161316, "perplexity": 2697.074508746705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00756.warc.gz"}
https://opencourseware.online/books/UG110/s14-correlation-and-regression.html
# Chapter 10 Correlation and Regression Our interest in this chapter is in situations in which we can associate to each element of a population or sample two measurements x and y, particularly in the case that it is of interest to use the value of x to predict the value of y. For example, the population could be the air in automobile garages, x could be the electrical current produced by an electrochemical reaction taking place in a carbon monoxide meter, and y the concentration of carbon monoxide in the air. In this chapter we will learn statistical methods for analyzing the relationship between variables x and y in this context. A list of all the formulas that appear anywhere in this chapter are collected in the last section for ease of reference. ### Learning Objective 1. To learn what it means for two variables to exhibit a relationship that is close to linear but which contains an element of randomness. The following table gives examples of the kinds of pairs of variables which could be of interest from a statistical point of view. x y Predictor or independent variable Response or dependent variable Temperature in degrees Celsius Temperature in degrees Fahrenheit Area of a house (sq.ft.) Value of the house Age of a particular make and model car Resale value of the car Height of a 25-year-old man Weight of the man The first line in the table is different from all the rest because in that case and no other the relationship between the variables is deterministic: once the value of x is known the value of y is completely determined. In fact there is a formula for y in terms of x: $y=95x+32.$ Choosing several values for x and computing the corresponding value for y for each one using the formula gives the table $x−40−1502050y−4053268122$ We can plot these data by choosing a pair of perpendicular lines in the plane, called the coordinate axes, as shown in Figure 10.1 "Plot of Celsius and Fahrenheit Temperature Pairs". Then to each pair of numbers in the table we associate a unique point in the plane, the point that lies x units to the right of the vertical axis (to the left if $x<0$) and y units above the horizontal axis (below if $y<0$). The relationship between x and y is called a linear relationship because the points so plotted all lie on a single straight line. The number $95$ in the equation $y=95x+32$ is the slope of the line, and measures its steepness. It describes how y changes in response to a change in x: if x increases by 1 unit then y increases (since $95$ is positive) by $95$ unit. If the slope had been negative then y would have decreased in response to an increase in x. The number 32 in the formula $y=95x+32$ is the y-intercept of the line; it identifies where the line crosses the y-axis. You may recall from an earlier course that every non-vertical line in the plane is described by an equation of the form $y=mx+b$, where m is the slope of the line and b is its y-intercept. Figure 10.1 Plot of Celsius and Fahrenheit Temperature Pairs The relationship between x and y in the temperature example is deterministic because once the value of x is known, the value of y is completely determined. In contrast, all the other relationships listed in the table above have an element of randomness in them. Consider the relationship described in the last line of the table, the height x of a man aged 25 and his weight y. If we were to randomly select several 25-year-old men and measure the height and weight of each one, we might obtain a collection of $(x,y)$ pairs something like this: $(68,151)(69,146)(70,157)(70,164)(71,171)(72,160)(72,163)(72,180)(73,170)(73,175)(74,178)(75,188)$ A plot of these data is shown in Figure 10.2 "Plot of Height and Weight Pairs". Such a plot is called a scatter diagram or scatter plot. Looking at the plot it is evident that there exists a linear relationship between height x and weight y, but not a perfect one. The points appear to be following a line, but not exactly. There is an element of randomness present. Figure 10.2 Plot of Height and Weight Pairs In this chapter we will analyze situations in which variables x and y exhibit such a linear relationship with randomness. The level of randomness will vary from situation to situation. In the introductory example connecting an electric current and the level of carbon monoxide in air, the relationship is almost perfect. In other situations, such as the height and weights of individuals, the connection between the two variables involves a high degree of randomness. In the next section we will see how to quantify the strength of the linear relationship between two variables. ### Key Takeaways • Two variables x and y have a deterministic linear relationship if points plotted from $(x,y)$ pairs lie exactly along a single straight line. • In practice it is common for two variables to exhibit a relationship that is close to linear but which contains an element, possibly large, of randomness. ### Basic 1. A line has equation $y=0.5x+2.$ 1. Pick five distinct x-values, use the equation to compute the corresponding y-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the y-intercept. 2. A line has equation $y=x−0.5.$ 1. Pick five distinct x-values, use the equation to compute the corresponding y-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the y-intercept. 3. A line has equation $y=−2x+4.$ 1. Pick five distinct x-values, use the equation to compute the corresponding y-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the y-intercept. 4. A line has equation $y=−1.5x+1.$ 1. Pick five distinct x-values, use the equation to compute the corresponding y-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the y-intercept. 5. Based on the information given about a line, determine how y will change (increase, decrease, or stay the same) when x is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The slope is positive. 2. The y-intercept is positive. 3. The slope is zero. 6. Based on the information given about a line, determine how y will change (increase, decrease, or stay the same) when x is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The y-intercept is negative. 2. The y-intercept is zero. 3. The slope is negative. 7. A data set consists of eight $(x,y)$ pairs of numbers: $(0,12)(4,16)(8,22)(15,28)(2,15)(5,14)(13,24)(20,30)$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between x and y appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between x and y appears to be linear or not linear. 8. A data set consists of ten $(x,y)$ pairs of numbers: $(3,20)(6,9)(11,0)(14,1)(18,9)(5,13)(8,4)(12,0)(17,6)(20,16)$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between x and y appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between x and y appears to be linear or not linear. 9. A data set consists of nine $(x,y)$ pairs of numbers: $(8,16)(10,4)(12,0)(14,4)(16,16)(9,9)(11,1)(13,1)(15,9)$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between x and y appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between x and y appears to be linear or not linear. 10. A data set consists of five $(x,y)$ pairs of numbers: $(0,1) (2,5) (3,7) (5,11) (8,17)$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between x and y appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between x and y appears to be linear or not linear. ### Applications 1. At 60°F a particular blend of automotive gasoline weights 6.17 lb/gal. The weight y of gasoline on a tank truck that is loaded with x gallons of gasoline is given by the linear equation $y=6.17x$ 1. Explain whether the relationship between the weight y and the amount x of gasoline is deterministic or contains an element of randomness. 2. Predict the weight of gasoline on a tank truck that has just been loaded with 6,750 gallons of gasoline. 2. The rate for renting a motor scooter for one day at a beach resort area is $25 plus 30 cents for each mile the scooter is driven. The total cost y in dollars for renting a scooter and driving it x miles is $y=0.30x+25$ 1. Explain whether the relationship between the cost y of renting the scooter for a day and the distance x that the scooter is driven that day is deterministic or contains an element of randomness. 2. A person intends to rent a scooter one day for a trip to an attraction 17 miles away. Assuming that the total distance the scooter is driven is 34 miles, predict the cost of the rental. 3. The pricing schedule for labor on a service call by an elevator repair company is$150 plus $50 per hour on site. 1. Write down the linear equation that relates the labor cost y to the number of hours x that the repairman is on site. 2. Calculate the labor cost for a service call that lasts 2.5 hours. 4. The cost of a telephone call made through a leased line service is 2.5 cents per minute. 1. Write down the linear equation that relates the cost y (in cents) of a call to its length x. 2. Calculate the cost of a call that lasts 23 minutes. ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. Plot the scatter diagram with SAT score as the independent variable (x) and GPA as the dependent variable (y). Comment on the appearance and strength of any linear trend. http://www.gone.books/sites/all/files/data1.xls 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Plot the scatter diagram with golf score using the original clubs as the independent variable (x) and golf score using the new clubs as the dependent variable (y). Comment on the appearance and strength of any linear trend. http://www.gone.books/sites/all/files/data12.xls 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. Plot the scatter diagram with the number of bidders at the auction as the independent variable (x) and the sales price as the dependent variable (y). Comment on the appearance and strength of any linear trend. http://www.gone.books/sites/all/files/data13.xls ### Answers 1. Answers vary. 2. Slope $m=0.5$; y-intercept $b=2.$ 1. Answers vary. 2. Slope $m=−2$; y-intercept $b=4.$ 1. y increases. 2. Impossible to tell. 3. y does not change. 1. Scatter diagram needed. 2. Involves randomness. 3. Linear. 1. Scatter diagram needed. 2. Deterministic. 3. Not linear. 1. Deterministic. 2. 41,647.5 pounds. 1. $y=50x+150.$ 2. b.$275. 1. There appears to a hint of some positive correlation. 2. There appears to be clear positive correlation. ### Learning Objective 1. To learn what the linear correlation coefficient is, how to compute it, and what it tells us about the relationship between two variables x and y. Figure 10.3 "Linear Relationships of Varying Strengths" illustrates linear relationships between two variables x and y of varying strengths. It is visually apparent that in the situation in panel (a), x could serve as a useful predictor of y, it would be less useful in the situation illustrated in panel (b), and in the situation of panel (c) the linear relationship is so weak as to be practically nonexistent. The linear correlation coefficient is a number computed directly from the data that measures the strength of the linear relationship between the two variables x and  y. Figure 10.3 Linear Relationships of Varying Strengths ### Definition The linear correlation coefficientA number computed directly from the data that measures the strength of the linear relationship between the two variables x and y. for a collection of n pairs $(x,y)$ of numbers in a sample is the number r given by the formula $r=SSxySSxx·SSyy$ where $SSxx=Σx2−1n(Σx)2, SSxy=Σxy−1n(Σx)(Σy), SSyy=Σy2−1n(Σy)2$ The linear correlation coefficient has the following properties, illustrated in Figure 10.4 "Linear Correlation Coefficient ": 1. The value of r lies between −1 and 1, inclusive. 2. The sign of r indicates the direction of the linear relationship between x and y: 1. If $r<0$ then y tends to decrease as x is increased. 2. If $r>0$ then y tends to increase as x is increased. 3. The size of |r| indicates the strength of the linear relationship between x and y: 1. If |r| is near 1 (that is, if r is near either 1 or −1) then the linear relationship between x and y is strong. 2. If |r| is near 0 (that is, if r is near 0 and of either sign) then the linear relationship between x and y is weak. Figure 10.4 Linear Correlation Coefficient R Pay particular attention to panel (f) in Figure 10.4 "Linear Correlation Coefficient ". It shows a perfectly deterministic relationship between x and y, but $r=0$ because the relationship is not linear. (In this particular case the points lie on the top half of a circle.) ### Example 1 Compute the linear correlation coefficient for the height and weight pairs plotted in Figure 10.2 "Plot of Height and Weight Pairs". Solution: Even for small data sets like this one computations are too long to do completely by hand. In actual practice the data are entered into a calculator or computer and a statistics program is used. In order to clarify the meaning of the formulas we will display the data and related quantities in tabular form. For each $(x,y)$ pair we compute three numbers: x2, $xy$, and y2, as shown in the table provided. In the last line of the table we have the sum of the numbers in each column. Using them we compute: x y x2 $xy$ y2 68 151 4624 10268 22801 69 146 4761 10074 21316 70 157 4900 10990 24649 70 164 4900 11480 26896 71 171 5041 12141 29241 72 160 5184 11520 25600 72 163 5184 11736 26569 72 180 5184 12960 32400 73 170 5329 12410 28900 73 175 5329 12775 30625 74 178 5476 13172 31684 75 188 5625 14100 35344 Σ 859 2003 61537 143626 336025 $SSxx=Σx2−1n(Σx)2=61537−112(859)2=46.916-SSxy=Σxy−1n(Σx)(Σy)=143626−112(859)(2003)=244.583-SSyy=Σy2−1n(Σy)2=336025−112(2003)2=1690.916-$ so that The number $r=0.868$ quantifies what is visually apparent from Figure 10.2 "Plot of Height and Weight Pairs": weights tends to increase linearly with height (r is positive) and although the relationship is not perfect, it is reasonably strong (r is near 1). ### Key Takeaways • The linear correlation coefficient measures the strength and direction of the linear relationship between two variables x and y. • The sign of the linear correlation coefficient indicates the direction of the linear relationship between x and y. • When r is near 1 or −1 the linear relationship is strong; when it is near 0 the linear relationship is weak. ### Basic With the exception of the exercises at the end of Section 10.3 "Modelling Linear Relationships with Randomness Present", the first Basic exercise in each of the following sections through Section 10.7 "Estimation and Prediction" uses the data from the first exercise here, the second Basic exercise uses the data from the second exercise here, and so on, and similarly for the Application exercises. Save your computations done on these exercises so that you do not need to repeat them later. 1. For the sample data $x01358y24659$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 2. For the sample data $x02369y03348$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 3. For the sample data $x13468y413−10$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 4. For the sample data $x12479y556−30$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 5. For the sample data $x11345y21534$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 6. For the sample data $x13558y5−22−1−3$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 7. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5Σ​​x=25Σ​​x2=165Σ​​y=24Σ​​y2=134Σ​​xy=1441≤x≤9$ 8. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5Σ​​x=31Σ​​x2=253Σ​​y=18Σ​​y2=90Σ​​xy=1482≤x≤12$ 9. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10Σ​​x=0Σ​​x2=60Σ​​y=24Σ​​y2=234Σ​​xy=−87−4≤x≤4$ 10. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10Σ​​x=−3Σ​​x2=263Σ​​y=55Σ​​y2=917Σ​​xy=−355−10≤x≤10$ ### Applications 1. The age x in months and vocabulary y were measured for six children, with the results shown in the table. $x131415161618y81015202730$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 2. The curb weight x in hundreds of pounds and braking distance y in feet, at 50 miles per hour on dry pavement, were measured for five vehicles, with the results shown in the table. $x2527.532.53545y105125140140150$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 3. The age x and resting heart rate y were measured for ten men, with the results shown in the table. $x2023303735y7271737474$ $x4551556063y7372797577$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 4. The wind speed x in miles per hour and wave height y in feet were measured under various conditions on an enclosed deep water sea, with the results shown in the table, $x00277y2.00.00.30.73.3$ $x913202231y4.94.93.06.95.9$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 5. The advertising expenditure x and sales y in thousands of dollars for a small retail business in its first eight years in operation are shown in the table. $x1.41.61.62.0y180184190220$ $x2.02.22.42.6y186215205240$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 6. The height x at age 2 and y at age 20, both in inches, for ten women are tabulated in the table. $x31.331.732.533.534.4y60.761.063.164.265.9$ $x35.235.832.733.634.8y68.267.662.364.966.8$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 7. The course average x just before a final exam and the score y on the final exam were recorded for 15 randomly selected students in a large physics class, with the results shown in the table. $x69.387.750.551.982.7y5689554961$ $x70.572.491.783.386.5y6672837382$ $x79.378.575.752.362.2y9280641876$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 8. The table shows the acres x of corn planted and acres y of corn harvested, in millions of acres, in a particular country in ten successive years. $x75.778.978.680.981.8y68.869.370.973.675.1$ $x78.393.585.986.488.2y70.686.578.679.581.4$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 9. Fifty male subjects drank a measured amount x (in ounces) of a medication and the concentration y (in percent) in their blood of the active ingredient was measured 30 minutes later. The sample data are summarized by the following information. $n=50Σ​x=112.5Σ​y=4.83Σxy=15.2550≤x≤4.5Σx2=356.25Σy2=0.667$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 10. In an effort to produce a formula for estimating the age of large free-standing oak trees non-invasively, the girth x (in inches) five feet off the ground of 15 such trees of known age y (in years) was measured. The sample data are summarized by the following information. $n=15Σ​x=3368Σ​y=6496Σ​xy=1,933,219Σ​x2=917,780Σ​y2=4,260,66674≤x≤395$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 11. Construction standards specify the strength of concrete 28 days after it is poured. For 30 samples of various types of concrete the strength x after 3 days and the strength y after 28 days (both in hundreds of pounds per square inch) were measured. The sample data are summarized by the following information. $n=30Σ​x=501.6Σ​y=1338.8Σ​xy=23,246.55Σ​x2=8724.74Σ​y2=61,980.1411≤x≤22$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 12. Power-generating facilities used forecasts of temperature to forecast energy demand. The average temperature x (degrees Fahrenheit) and the day’s energy demand y (million watt-hours) were recorded on 40 randomly selected winter days in the region served by a power company. The sample data are summarized by the following information. $n=40Σ​x=2000Σ​y=2969Σ​xy=143,042Σ​x2=101,340Σ​y2=243,02740≤x≤60$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 1. In each case state whether you expect the two variables x and y indicated to have positive, negative, or zero correlation. 1. the number x of pages in a book and the age y of the author 2. the number x of pages in a book and the age y of the intended reader 3. the weight x of an automobile and the fuel economy y in miles per gallon 4. the weight x of an automobile and the reading y on its odometer 5. the amount x of a sedative a person took an hour ago and the time y it takes him to respond to a stimulus 2. In each case state whether you expect the two variables x and y indicated to have positive, negative, or zero correlation. 1. the length x of time an emergency flare will burn and the length y of time the match used to light it burned 2. the average length x of time that calls to a retail call center are on hold one day and the number y of calls received that day 3. the length x of a regularly scheduled commercial flight between two cities and the headwind y encountered by the aircraft 4. the value x of a house and the its size y in square feet 5. the average temperature x on a winter day and the energy consumption y of the furnace 3. Changing the units of measurement on two variables x and y should not change the linear correlation coefficient. Moreover, most change of units amount to simply multiplying one unit by the other (for example, 1 foot = 12 inches). Multiply each x value in the table in Exercise 1 by two and compute the linear correlation coefficient for the new data set. Compare the new value of r to the one for the original data. 4. Refer to the previous exercise. Multiply each x value in the table in Exercise 2 by two, multiply each y value by three, and compute the linear correlation coefficient for the new data set. Compare the new value of r to the one for the original data. 5. Reversing the roles of x and y in the data set of Exercise 1 produces the data set $x24659y01358$ Compute the linear correlation coefficient of the new set of data and compare it to what you got in Exercise 1. 6. In the context of the previous problem, look at the formula for r and see if you can tell why what you observed there must be true for every data set. ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. Compute the linear correlation coefficient r. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the first large data set problem for Section 10.1 "Linear Relationships Between Variables". http://www.gone.books/sites/all/files/data1.xls 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the linear correlation coefficient r. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the second large data set problem for Section 10.1 "Linear Relationships Between Variables". http://www.gone.books/sites/all/files/data12.xls 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. Compute the linear correlation coefficient r. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the third large data set problem for Section 10.1 "Linear Relationships Between Variables". http://www.gone.books/sites/all/files/data13.xls 1. $r=0.921$ 2. $r=−0.794$ 3. $r=0.707$ 4. 0.875 5. −0.846 1. 0.948 2. 0.709 3. 0.832 4. 0.751 5. 0.965 6. 0.992 1. zero 2. positive 3. negative 4. zero 5. positive 1. same value 2. same value 1. $r=0.4601$ 2. $r=0.9002$ ### Learning Objective 1. To learn the framework in which the statistical analysis of the linear relationship between two variables x and y will be done. In this chapter we are dealing with a population for which we can associate to each element two measurements, x and y. We are interested in situations in which the value of x can be used to draw conclusions about the value of y, such as predicting the resale value y of a residential house based on its size x. Since the relationship between x and y is not deterministic, statistical procedures must be applied. For any statistical procedures, given in this book or elsewhere, the associated formulas are valid only under specific assumptions. The set of assumptions in simple linear regression are a mathematical description of the relationship between x and y. Such a set of assumptions is known as a model. For each fixed value of x a sub-population of the full population is determined, such as the collection of all houses with 2,100 square feet of living space. For each element of that sub-population there is a measurement y, such as the value of any 2,100-square-foot house. Let $E(y)$ denote the mean of all the y-values for each particular value of x. $E(y)$ can change from x-value to x-value, such as the mean value of all 2,100-square-foot houses, the (different) mean value for all 2,500-square foot-houses, and so on. Our first assumption is that the relationship between x and the mean of the y-values in the sub-population determined by x is linear. This means that there exist numbers $β1$ and $β0$ such that $E(y)=β1x+β0$ This linear relationship is the reason for the word “linear” in “simple linear regression” below. (The word “simple” means that y depends on only one other variable and not two or more.) Our next assumption is that for each value of x the y-values scatter about the mean $E(y)$ according to a normal distribution centered at $E(y)$ and with a standard deviation σ that is the same for every value of x. This is the same as saying that there exists a normally distributed random variable ε with mean 0 and standard deviation σ so that the relationship between x and y in the whole population is $y=β1x+β0+ε$ Our last assumption is that the random deviations associated with different observations are independent. In summary, the model is: ### Simple Linear Regression Model For each point $(x,y)$ in data set the y-value is an independent observation of $y=β1x+β0+ε$ where $β1$ and $β0$ are fixed parameters and ε is a normally distributed random variable with mean 0 and an unknown standard deviation σ. The line with equation $y=β1x+β0$ is called the population regression lineThe line with equation $y=β1x+β0$ that gives the mean of the variable y over the sub-population determined by x.. Figure 10.5 "The Simple Linear Model Concept" illustrates the model. The symbols $N(μ,σ2)$ denote a normal distribution with mean μ and variance $σ2$, hence standard deviation σ. Figure 10.5 The Simple Linear Model Concept It is conceptually important to view the model as a sum of two parts: $y=β1x+β0+ε$ 1. Deterministic Part. The first part $β1x+β0$ is the equation that describes the trend in y as x increases. The line that we seem to see when we look at the scatter diagram is an approximation of the line $y=β1x+β0.$ There is nothing random in this part, and therefore it is called the deterministic part of the model. 2. Random Part. The second part ε is a random variable, often called the error term or the noise. This part explains why the actual observed values of y are not exactly on but fluctuate near a line. Information about this term is important since only when one knows how much noise there is in the data can one know how trustworthy the detected trend is. There are three parameters in this model: $β0$, $β1$, and σ. Each has an important interpretation, particularly $β1$ and σ. The slope parameter $β1$ represents the expected change in y brought about by a unit increase in x. The standard deviation σ represents the magnitude of the noise in the data. There are procedures for checking the validity of the three assumptions, but for us it will be sufficient to visually verify the linear trend in the data. If the data set is large then the points in the scatter diagram will form a band about an apparent straight line. The normality of ε with a constant standard deviation corresponds graphically to the band being of roughly constant width, and with most points concentrated near the middle of the band. Fortunately, the three assumptions do not need to hold exactly in order for the procedures and analysis developed in this chapter to be useful. ### Exercises 1. State the three assumptions that are the basis for the Simple Linear Regression Model. 2. The Simple Linear Regression Model is summarized by the equation $y=β1x+β0+ε$ Identify the deterministic part and the random part. 3. Is the number $β1$ in the equation $y=β1x+β0$ a statistic or a population parameter? Explain. 4. Is the number σ in the Simple Linear Regression Model a statistic or a population parameter? Explain. 5. Describe what to look for in a scatter diagram in order to check that the assumptions of the Simple Linear Regression Model are true. 6. True or false: the assumptions of the Simple Linear Regression Model must hold exactly in order for the procedures and analysis developed in this chapter to be useful. 1. The mean of y is linearly related to x. 2. For each given x, y is a normal random variable with mean $β1x+β0$ and standard deviation σ. 3. All the observations of y in the sample are independent. 1. $β1$ is a population parameter. 2. A linear trend. ### Learning Objectives 1. To learn how to measure how well a straight line fits a collection of data. 2. To learn how to construct the least squares regression line, the straight line that best fits a collection of data. 3. To learn the meaning of the slope of the least squares regression line. 4. To learn how to use the least squares regression line to estimate the response variable y in terms of the predictor variable x. ## Goodness of Fit of a Straight Line to Data Once the scatter diagram of the data has been drawn and the model assumptions described in the previous sections at least visually verified (and perhaps the correlation coefficient r computed to quantitatively verify the linear trend), the next step in the analysis is to find the straight line that best fits the data. We will explain how to measure how well a straight line fits a collection of points by examining how well the line $y=12x−1$ fits the data set $x226810y01233$ (which will be used as a running example for the next three sections). We will write the equation of this line as $y^=12x−1$ with an accent on the y to indicate that the y-values computed using this equation are not from the data. We will do this with all lines approximating data sets. The line $y^=12x−1$ was selected as one that seems to fit the data reasonably well. The idea for measuring the goodness of fit of a straight line to data is illustrated in Figure 10.6 "Plot of the Five-Point Data and the Line ", in which the graph of the line $y^=12x−1$ has been superimposed on the scatter plot for the sample data set. Figure 10.6 Plot of the Five-Point Data and the Line $y^=12x−1$ To each point in the data set there is associated an “errorUsing $y−y^$, the actual y-value of a data point minus the y-value that is computed from the equation of the line fitting the data.,” the positive or negative vertical distance from the point to the line: positive if the point is above the line and negative if it is below the line. The error can be computed as the actual y-value of the point minus the y-value $y^$ that is “predicted” by inserting the x-value of the data point into the formula for the line: $error at data point (x,y)=(true y)−(predicted y)=y−y^$ The computation of the error for each of the five points in the data set is shown in Table 10.1 "The Errors in Fitting Data with a Straight Line". Table 10.1 The Errors in Fitting Data with a Straight Line x y $y^=12x−1$ $y−y^$ $(y−y^)2$ 2 0 0 0 0 2 1 0 1 1 6 2 2 0 0 8 3 3 0 0 10 3 4 −1 1 Σ - - - 0 2 A first thought for a measure of the goodness of fit of the line to the data would be simply to add the errors at every point, but the example shows that this cannot work well in general. The line does not fit the data perfectly (no line can), yet because of cancellation of positive and negative errors the sum of the errors (the fourth column of numbers) is zero. Instead goodness of fit is measured by the sum of the squares of the errors. Squaring eliminates the minus signs, so no cancellation can occur. For the data and line in Figure 10.6 "Plot of the Five-Point Data and the Line " the sum of the squared errors (the last column of numbers) is 2. This number measures the goodness of fit of the line to the data. ### Definition The goodness of fit of a line $y^=mx+b$ to a set of n pairs $(x,y)$ of numbers in a sample is the sum of the squared errors $Σ(y−y^)2$ (n terms in the sum, one for each data pair). ## The Least Squares Regression Line Given any collection of pairs of numbers (except when all the x-values are the same) and the corresponding scatter diagram, there always exists exactly one straight line that fits the data better than any other, in the sense of minimizing the sum of the squared errors. It is called the least squares regression line. Moreover there are formulas for its slope and y-intercept. ### Definition Given a collection of pairs $(x,y)$ of numbers (in which not all the x-values are the same), there is a line $y^=β^1x+β^0$ that best fits the data in the sense of minimizing the sum of the squared errors. It is called the least squares regression lineThe line that best fits a set of sample data in the sense of minimizing the sum of the squared errors.. Its slope $β^1$ and y-intercept $β^0$ are computed using the formulas $β^1=SSxySSxx and β^0=y-−β^1x-$ where $SSxx=Σx2−1n(Σx)2, SSxy=Σxy−1n(Σx)(Σy)$ $x-$ is the mean of all the x-values, $y-$ is the mean of all the y-values, and n is the number of pairs in the data set. The equation $y^=β^1x+β^0$ specifying the least squares regression line is called the least squares regression equationThe equation $y^=β^1x+β^0$ of the least squares regression line.. Remember from Section 10.3 "Modelling Linear Relationships with Randomness Present" that the line with the equation $y=β1x+β0$ is called the population regression line. The numbers $β^1$ and $β^0$ are statistics that estimate the population parameters $β1$ and $β0.$ We will compute the least squares regression line for the five-point data set, then for a more practical example that will be another running example for the introduction of new concepts in this and the next three sections. ### Example 2 Find the least squares regression line for the five-point data set $x226810y01233$ and verify that it fits the data better than the line $y^=12x−1$ considered in Section 10.4.1 "Goodness of Fit of a Straight Line to Data". Solution: In actual practice computation of the regression line is done using a statistical computation package. In order to clarify the meaning of the formulas we display the computations in tabular form. x y x2 $xy$ 2 0 4 0 2 1 4 2 6 2 36 12 8 3 64 24 10 3 100 30 Σ 28 9 208 68 In the last line of the table we have the sum of the numbers in each column. Using them we compute: $SSxx=Σ​x2−1n(Σ​x)2=208−15(28)2=51.2SSxy=Σ​xy−1n(Σ​x)(Σ​y)=68−15(28)(9)=17.6x-=Σ​xn=285=5.6y-=Σ​yn=95=1.8$ so that $β^1=SSxySSxx=17.651.2=0.34375 and β^0=y-−β^1x-=1.8−(0.34375)(5.6)=−0.125$ The least squares regression line for these data is $y^=0.34375x−0.125$ The computations for measuring how well it fits the sample data are given in Table 10.2 "The Errors in Fitting Data with the Least Squares Regression Line". The sum of the squared errors is the sum of the numbers in the last column, which is 0.75. It is less than 2, the sum of the squared errors for the fit of the line $y^=12x−1$ to this data set. Table 10.2 The Errors in Fitting Data with the Least Squares Regression Line x y $y^=0.34375x−0.125$ $y−y^$ $(y−y^)2$ 2 0 0.5625 −0.5625 0.31640625 2 1 0.5625 0.4375 0.19140625 6 2 1.9375 0.0625 0.00390625 8 3 2.6250 0.3750 0.14062500 10 3 3.3125 −0.3125 0.09765625 ### Example 3 Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model" shows the age in years and the retail value in thousands of dollars of a random sample of ten automobiles of the same make and model. 1. Construct the scatter diagram. 2. Compute the linear correlation coefficient r. Interpret its value in the context of the problem. 3. Compute the least squares regression line. Plot it on the scatter diagram. 4. Interpret the meaning of the slope of the least squares regression line in the context of the problem. 5. Suppose a four-year-old automobile of this make and model is selected at random. Use the regression equation to predict its retail value. 6. Suppose a 20-year-old automobile of this make and model is selected at random. Use the regression equation to predict its retail value. Interpret the result. 7. Comment on the validity of using the regression equation to predict the price of a brand new automobile of this make and model. Table 10.3 Data on Age and Value of Used Automobiles of a Specific Make and Model x 2 3 3 3 4 4 5 5 5 6 y 28.7 24.8 26 30.5 23.8 24.6 23.8 20.4 21.6 22.1 Solution: 1. The scatter diagram is shown in Figure 10.7 "Scatter Diagram for Age and Value of Used Automobiles". Figure 10.7 Scatter Diagram for Age and Value of Used Automobiles 1. We must first compute $SSxx$, $SSxy$, $SSyy$, which means computing $Σx$, $Σy$, $Σx2$, $Σy2$, and $Σxy.$ Using a computing device we obtain $Σ​x=40 Σ​y=246.3 Σ​x2=174 Σ​y2=6154.15 Σ​xy=956.5$ Thus $SSxx=Σ​x2−1n(Σ​x)2=174−110(40)2=14SSxy=Σ​xy−1n(Σ​x)(Σ​y)=956.5−110(40)(246.3)=−28.7SSyy=Σ​y2−1n(Σ​y)2=6154.15−110(246.3)2=87.781$ so that $r=SSxySSxx·SSyy=−28.7(14)(87.781)=−0.819$ The age and value of this make and model automobile are moderately strongly negatively correlated. As the age increases, the value of the automobile tends to decrease. 2. Using the values of $Σx$ and $Σy$ computed in part (b), $x-=Σxn=4010=4 and y-=Σyn=246.310=24.63$ Thus using the values of $SSxx$ and $SSxy$ from part (b), $β^1=SSxySSxx=−28.714=−2.05 and β^0=y-−β^1x-=24.63−(−2.05)(4)=32.83$ The equation $y^=β^1x+β^0$ of the least squares regression line for these sample data is $y^=−2.05x+32.83$ Figure 10.8 "Scatter Diagram and Regression Line for Age and Value of Used Automobiles" shows the scatter diagram with the graph of the least squares regression line superimposed. Figure 10.8 Scatter Diagram and Regression Line for Age and Value of Used Automobiles 1. The slope −2.05 means that for each unit increase in x (additional year of age) the average value of this make and model vehicle decreases by about 2.05 units (about $2,050). 2. Since we know nothing about the automobile other than its age, we assume that it is of about average value and use the average value of all four-year-old vehicles of this make and model as our estimate. The average value is simply the value of $y^$ obtained when the number 4 is inserted for x in the least squares regression equation: $y^=−2.05(4)+32.83=24.63$ which corresponds to$24,630. 3. Now we insert $x=20$ into the least squares regression equation, to obtain $y^=−2.05(20)+32.83=−8.17$ which corresponds to −$8,170. Something is wrong here, since a negative makes no sense. The error arose from applying the regression equation to a value of x not in the range of x-values in the original data, from two to six years. Applying the regression equation $y^=β^1x+β^0$ to a value of x outside the range of x-values in the data set is called extrapolation. It is an invalid use of the regression equation and should be avoided. 4. The price of a brand new vehicle of this make and model is the value of the automobile at age 0. If the value $x=0$ is inserted into the regression equation the result is always $β^0$, the y-intercept, in this case 32.83, which corresponds to$32,830. But this is a case of extrapolation, just as part (f) was, hence this result is invalid, although not obviously so. In the context of the problem, since automobiles tend to lose value much more quickly immediately after they are purchased than they do after they are several years old, the number $32,830 is probably an underestimate of the price of a new automobile of this make and model. For emphasis we highlight the points raised by parts (f) and (g) of the example. ### Definition The process of using the least squares regression equation to estimate the value of y at a value of x that does not lie in the range of the x-values in the data set that was used to form the regression line is called extrapolationThe process of using the least squares regression equation to estimate the value of y at an x value not in the proper range.. It is an invalid use of the regression equation that can lead to errors, hence should be avoided. ## The Sum of the Squared Errors $SSE$ In general, in order to measure the goodness of fit of a line to a set of data, we must compute the predicted y-value $y^$ at every point in the data set, compute each error, square it, and then add up all the squares. In the case of the least squares regression line, however, the line that best fits the data, the sum of the squared errors can be computed directly from the data using the following formula. The sum of the squared errors for the least squares regression line is denoted by $SSE.$ It can be computed using the formula $SSE=SSyy−β^1SSxy$ ### Example 4 Find the sum of the squared errors $SSE$ for the least squares regression line for the five-point data set $x226810y01233$ Do so in two ways: 1. using the definition $Σ(y−y^)2$; 2. using the formula $SSE=SSyy−β^1SSxy.$ Solution: 1. The least squares regression line was computed in Note 10.18 "Example 2" and is $y^=0.34375x−0.125.$ $SSE$ was found at the end of that example using the definition $Σ(y−y^)2.$ The computations were tabulated in Table 10.2 "The Errors in Fitting Data with the Least Squares Regression Line". $SSE$ is the sum of the numbers in the last column, which is 0.75. 2. The numbers $SSxy$ and $β^1$ were already computed in Note 10.18 "Example 2" in the process of finding the least squares regression line. So was the number $Σy=9.$ We must compute $SSyy.$ To do so it is necessary to first compute $Σy2=0+12+22+32+32=23.$ Then $SSyy=Σy2−1n(Σy)2=23−15(9)2=6.8$ so that $SSE=SSyy−β^1SSxy=6.8−(0.34375)(17.6)=0.75$ ### Example 5 Find the sum of the squared errors $SSE$ for the least squares regression line for the data set, presented in Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model", on age and values of used vehicles in Note 10.19 "Example 3". Solution: From Note 10.19 "Example 3" we already know that $SSxy=−28.7, β^1=−2.05, and Σy=246.3$ To compute $SSyy$ we first compute $Σy2=28.72+24.82+26.02+30.52+23.82+24.62+23.82+20.42+21.62+22.12=6154.15$ Then $SSyy=Σy2−1n(Σy)2=6154.15−110(246.3)2=87.781$ Therefore $SSE=SSyy−β^1SSxy=87.781−(−2.05)(−28.7)=28.946$ ### Key Takeaways • How well a straight line fits a data set is measured by the sum of the squared errors. • The least squares regression line is the line that best fits the data. Its slope and y-intercept are computed from the data using formulas. • The slope $β^1$ of the least squares regression line estimates the size and direction of the mean change in the dependent variable y when the independent variable x is increased by one unit. • The sum of the squared errors $SSE$ of the least squares regression line can be computed using a formula, without having to compute all the individual errors. ### Exercises ### Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 "The Linear Correlation Coefficient". 1. Compute the least squares regression line for the data in Exercise 1 of Section 10.2 "The Linear Correlation Coefficient". 2. Compute the least squares regression line for the data in Exercise 2 of Section 10.2 "The Linear Correlation Coefficient". 3. Compute the least squares regression line for the data in Exercise 3 of Section 10.2 "The Linear Correlation Coefficient". 4. Compute the least squares regression line for the data in Exercise 4 of Section 10.2 "The Linear Correlation Coefficient". 5. For the data in Exercise 5 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $SSE$ using the definition $Σ(y−y^)2.$ 3. Compute the sum of the squared errors $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ 6. For the data in Exercise 6 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $SSE$ using the definition $Σ(y−y^)2.$ 3. Compute the sum of the squared errors $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ 7. Compute the least squares regression line for the data in Exercise 7 of Section 10.2 "The Linear Correlation Coefficient". 8. Compute the least squares regression line for the data in Exercise 8 of Section 10.2 "The Linear Correlation Coefficient". 9. For the data in Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $SSE$ using the definition $Σ(y−y^)2$? Explain. 3. Compute the sum of the squared errors $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ 10. For the data in Exercise 10 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $SSE$ using the definition $Σ(y−y^)2$? Explain. 3. Compute the sum of the squared errors $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ ### Applications 1. For the data in Exercise 11 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. On average, how many new words does a child from 13 to 18 months old learn each month? Explain. 3. Estimate the average vocabulary of all 16-month-old children. 2. For the data in Exercise 12 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. On average, how many additional feet are added to the braking distance for each additional 100 pounds of weight? Explain. 3. Estimate the average braking distance of all cars weighing 3,000 pounds. 3. For the data in Exercise 13 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Estimate the average resting heart rate of all 40-year-old men. 3. Estimate the average resting heart rate of all newborn baby boys. Comment on the validity of the estimate. 4. For the data in Exercise 14 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Estimate the average wave height when the wind is blowing at 10 miles per hour. 3. Estimate the average wave height when there is no wind blowing. Comment on the validity of the estimate. 5. For the data in Exercise 15 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. On average, for each additional thousand dollars spent on advertising, how does revenue change? Explain. 3. Estimate the revenue if$2,500 is spent on advertising next year. 6. For the data in Exercise 16 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. On average, for each additional inch of height of two-year-old girl, what is the change in the adult height? Explain. 3. Predict the adult height of a two-year-old girl who is 33 inches tall. 7. For the data in Exercise 17 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Compute $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ 3. Estimate the average final exam score of all students whose course average just before the exam is 85. 8. For the data in Exercise 18 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Compute $SSE$ using the formula $SSE=SSyy−β^1SSxy.$ 3. Estimate the number of acres that would be harvested if 90 million acres of corn were planted. 9. For the data in Exercise 19 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the average concentration of the active ingredient in the blood in men after consuming 1 ounce of the medication. 10. For the data in Exercise 20 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the age of an oak tree whose girth five feet off the ground is 92 inches. 11. For the data in Exercise 21 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. The 28-day strength of concrete used on a certain job must be at least 3,200 psi. If the 3-day strength is 1,300 psi, would we anticipate that the concrete will be sufficiently strong on the 28th day? Explain fully. 12. For the data in Exercise 22 of Section 10.2 "The Linear Correlation Coefficient" 1. Compute the least squares regression line. 2. If the power facility is called upon to provide more than 95 million watt-hours tomorrow then energy will have to be purchased from elsewhere at a premium. The forecast is for an average temperature of 42 degrees. Should the company plan on purchasing power at a premium? 1. Verify that no matter what the data are, the least squares regression line always passes through the point with coordinates $(x-,y-).$ Hint: Find the predicted value of y when $x=x-.$ 2. In Exercise 1 you computed the least squares regression line for the data in Exercise 1 of Section 10.2 "The Linear Correlation Coefficient". 1. Reverse the roles of x and y and compute the least squares regression line for the new data set $x24659y01358$ 2. Interchanging x and y corresponds geometrically to reflecting the scatter plot in a 45-degree line. Reflecting the regression line for the original data the same way gives a line with the equation $y^=1.346x−3.600.$ Is this the equation that you got in part (a)? Can you figure out why not? Hint: Think about how x and y are treated differently geometrically in the computation of the goodness of fit. 3. Compute $SSE$ for each line and see if they fit the same, or if one fits the data better than the other. ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. http://www.gone.books/sites/all/files/data1.xls 1. Compute the least squares regression line with SAT score as the independent variable (x) and GPA as the dependent variable (y). 2. Interpret the meaning of the slope $β^1$ of regression line in the context of problem. 3. Compute $SSE$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the GPA of a student whose SAT score is 1350. 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). http://www.gone.books/sites/all/files/data12.xls 1. Compute the least squares regression line with scores using the original clubs as the independent variable (x) and scores using the new clubs as the dependent variable (y). 2. Interpret the meaning of the slope $β^1$ of regression line in the context of problem. 3. Compute $SSE$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the score with the new clubs of a golfer whose score with the old clubs is 73. 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. http://www.gone.books/sites/all/files/data13.xls 1. Compute the least squares regression line with the number of bidders present at the auction as the independent variable (x) and sales price as the dependent variable (y). 2. Interpret the meaning of the slope $β^1$ of regression line in the context of problem. 3. Compute $SSE$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the sales price of a clock at an auction at which the number of bidders is seven. 1. $y^=0.743x+2.675$ 2. $y^=−0.610x+4.082$ 3. $y^=0.625x+1.25$, $SSE=5$ 4. $y^=0.6x+1.8$ 5. $y^=−1.45x+2.4$, $SSE=50.25$ (cannot use the definition to compute) 1. $y^=4.848x−56$, 2. 4.8, 3. 21.6 1. $y^=0.114x+69.222$, 2. 73.8, 3. 69.2, invalid extrapolation 1. $y^=42.024x+119.502$, 2. increases by $42,024, 3.$224,562 1. $y^=1.045x−8.527$, 2. 2151.93367, 3. 80.3 1. $y^=0.043x+0.001$, 2. For each additional ounce of medication consumed blood concentration of the active ingredient increases by 0.043 %, 3. 0.044% 1. $y^=2.550x+1.993$, 2. Predicted 28-day strength is 3,514 psi; sufficiently strong 1. $y^=0.0016x+0.022$ 2. On average, every 100 point increase in SAT score adds 0.16 point to the GPA. 3. $SSE=432.10$ 4. $y^=2.182$ 1. $y^=116.62x+6955.1$ 2. On average, every 1 additional bidder at an auction raises the price by 116.62 dollars. 3. $SSE=1850314.08$ 4. $y^=7771.44$ ### Learning Objectives 1. To learn how to construct a confidence interval for $β1$, the slope of the population regression line. 2. To learn how to test hypotheses regarding $β1.$ The parameter $β1$, the slope of the population regression line, is of primary importance in regression analysis because it gives the true rate of change in the mean $E(y)$ in response to a unit increase in the predictor variable x. For every unit increase in x the mean of the response variable y changes by $β1$ units, increasing if $β1>0$ and decreasing if $β1<0.$ We wish to construct confidence intervals for $β1$ and test hypotheses about it. ## Confidence Intervals for β1 The slope $β^1$ of the least squares regression line is a point estimate of $β1.$ A confidence interval for $β1$ is given by the following formula. ### $100(1−α)%$ Confidence Interval for the Slope $β1$ of the Population Regression Line $β^1±tα∕2sεSSxx$ where $sε=SSEn−2$ and the number of degrees of freedom is $df=n−2.$ The assumptions listed in Section 10.3 "Modelling Linear Relationships with Randomness Present" must hold. ### Definition The statistic $sε$ is called the sample standard deviation of errorsThe statistic $sε$.. It estimates the standard deviation σ of the errors in the population of y-values for each fixed value of x (see Figure 10.5 "The Simple Linear Model Concept" in Section 10.3 "Modelling Linear Relationships with Randomness Present"). ### Example 6 Construct the 95% confidence interval for the slope $β1$ of the population regression line based on the five-point sample data set $x226810y01233$ Solution: The point estimate $β^1$ of $β1$ was computed in Note 10.18 "Example 2" in Section 10.4 "The Least Squares Regression Line" as $β^1=0.34375.$ In the same example $SSxx$ was found to be $SSxx=51.2.$ The sum of the squared errors $SSE$ was computed in Note 10.23 "Example 4" in Section 10.4 "The Least Squares Regression Line" as $SSE=0.75.$ Thus $sε=SSEn−2=0.753=0.50$ Confidence level 95% means $α=1−0.95=0.05$ so $α∕2=0.025.$ From the row labeled $df=3$ in Figure 12.3 "Critical Values of " we obtain $t0.025=3.182.$ Therefore $β^1±tα∕2sεSSxx=0.34375±3.1820.5051.2=0.34375±0.2223$ which gives the interval $(0.1215,0.5661).$ We are 95% confident that the slope $β1$ of the population regression line is between 0.1215 and 0.5661. ### Example 7 Using the sample data in Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model" construct a 90% confidence interval for the slope $β1$ of the population regression line relating age and value of the automobiles of Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line". Interpret the result in the context of the problem. Solution: The point estimate $β^1$ of $β1$ was computed in Note 10.19 "Example 3", as was $SSxx.$ Their values are $β^1=−2.05$ and $SSxx=14.$ The sum of the squared errors $SSE$ was computed in Note 10.24 "Example 5" in Section 10.4 "The Least Squares Regression Line" as $SSE=28.946.$ Thus $sε=SSEn−2=28.9468=1.902169814$ Confidence level 90% means $α=1−0.90=0.10$ so $α∕2=0.05.$ From the row labeled $df=8$ in Figure 12.3 "Critical Values of " we obtain $t0.05=1.860.$ Therefore $β^1±tα∕2sεSSxx=−2.05±1.8601.90216981414=−2.05±0.95$ which gives the interval $(−3.00,−1.10).$ We are 90% confident that the slope $β1$ of the population regression line is between −3.00 and −1.10. In the context of the problem this means that for vehicles of this make and model between two and six years old we are 90% confident that for each additional year of age the average value of such a vehicle decreases by between $1,100 and$3,000. Hypotheses regarding $β1$ can be tested using the same five-step procedures, either the critical value approach or the p-value approach, that were introduced in Section 8.1 "The Elements of Hypothesis Testing" and Section 8.3 "The Observed Significance of a Test" of Chapter 8 "Testing Hypotheses". The null hypothesis always has the form $H0:β1=B0$ where B0 is a number determined from the statement of the problem. The three forms of the alternative hypothesis, with the terminology for each case, are: Form of Ha Terminology $Ha:β1 Left-tailed $Ha:β1>B0$ Right-tailed $Ha:β1≠B0$ Two-tailed The value zero for B0 is of particular importance since in that case the null hypothesis is $H0:β1=0$, which corresponds to the situation in which x is not useful for predicting y. For if $β1=0$ then the population regression line is horizontal, so the mean $E(y)$ is the same for every value of x and we are just as well off in ignoring x completely and approximating y by its average value. Given two variables x and y, the burden of proof is that x is useful for predicting y, not that it is not. Thus the phrase “test whether x is useful for prediction of y,” or words to that effect, means to perform the test $H0:β1=0 vs. Ha:β1≠0$ ### Standardized Test Statistic for Hypothesis Tests Concerning the Slope $β1$ of the Population Regression Line $T=β^1−B0sε∕SSxx$ The test statistic has Student’s t-distribution with $df=n−2$ degrees of freedom. The assumptions listed in Section 10.3 "Modelling Linear Relationships with Randomness Present" must hold. ### Example 8 Test, at the 2% level of significance, whether the variable x is useful for predicting y based on the information in the five-point data set $x226810y01233$ Solution: We will perform the test using the critical value approach. • Step 1. Since x is useful for prediction of y precisely when the slope $β1$ of the population regression line is nonzero, the relevant test is $H0:β1=0vs. Ha:β1≠0 @ α=0.02$ • Step 2. The test statistic is $T=β^1sε∕SSxx$ and has Student’s t-distribution with $n−2=5−2=3$ degrees of freedom. • Step 3. From Note 10.18 "Example 2", $β^1=0.34375$ and $SSxx=51.2.$ From Note 10.30 "Example 6", $sε=0.50.$ The value of the test statistic is therefore $T=β^1−B0sε∕SSxx=0.343750.50∕51.2=4.919$ • Step 4. Since the symbol in Ha is “≠” this is a two-tailed test, so there are two critical values $±tα∕2=±t0.01.$ Reading from the line in Figure 12.3 "Critical Values of " labeled $df=3$, $t0.01=4.541.$ The rejection region is $(−∞,−4.541]∪[4.541,∞).$ • Step 5. As shown in Figure 10.9 "Rejection Region and Test Statistic for " the test statistic falls in the rejection region. The decision is to reject H0. In the context of the problem our conclusion is: The data provide sufficient evidence, at the 2% level of significance, to conclude that the slope of the population regression line is nonzero, so that x is useful as a predictor of y. Figure 10.9 Rejection Region and Test Statistic for Note 10.33 "Example 8" ### Example 9 A car salesman claims that automobiles between two and six years old of the make and model discussed in Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line" lose more than $1,100 in value each year. Test this claim at the 5% level of significance. Solution: We will perform the test using the critical value approach. • Step 1. In terms of the variables x and y, the salesman’s claim is that if x is increased by 1 unit (one additional year in age), then y decreases by more than 1.1 units (more than$1,100). Thus his assertion is that the slope of the population regression line is negative, and that it is more negative than −1.1. In symbols, $β1<−1.1.$ Since it contains an inequality, this has to be the alternative hypotheses. The null hypothesis has to be an equality and have the same number on the right hand side, so the relevant test is $H0:β1=−1.1vs. Hα:β1<−1.1 @ α=0.05$ • Step 2. The test statistic is $T=β^1−B0sε∕SSxx$ and has Student’s t-distribution with 8 degrees of freedom. • Step 3. From Note 10.19 "Example 3", $β^1=−2.05$ and $SSxx=14.$ From Note 10.31 "Example 7", $sε=1.902169814.$ The value of the test statistic is therefore $T=β^1−B0sε∕SSxx=−2.05−(−1.1)1.902169814∕14=−1.869$ • Step 4. Since the symbol in Ha is “<” this is a left-tailed test, so there is a single critical value $−tα=−t0.05.$ Reading from the line in Figure 12.3 "Critical Values of " labeled $df=8$, $t0.05=1.860.$ The rejection region is $(−∞,−1.860].$ • Step 5. As shown in Figure 10.10 "Rejection Region and Test Statistic for " the test statistic falls in the rejection region. The decision is to reject H0. In the context of the problem our conclusion is: The data provide sufficient evidence, at the 5% level of significance, to conclude that vehicles of this make and model and in this age range lose more than $1,100 per year in value, on average. Figure 10.10 Rejection Region and Test Statistic for Note 10.34 "Example 9" ### Key Takeaways • The parameter $β1$, the slope of the population regression line, is of primary interest because it describes the average change in y with respect to unit increase in x. • The statistic $β^1$, the slope of the least squares regression line, is a point estimate of $β1.$ Confidence intervals for $β1$ can be computed using a formula. • Hypotheses regarding $β1$ are tested using the same five-step procedures introduced in Chapter 8 "Testing Hypotheses". ### Exercises ### Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 "The Linear Correlation Coefficient" and Section 10.4 "The Least Squares Regression Line". 1. Construct the 95% confidence interval for the slope $β1$ of the population regression line based on the sample data set of Exercise 1 of Section 10.2 "The Linear Correlation Coefficient". 2. Construct the 90% confidence interval for the slope $β1$ of the population regression line based on the sample data set of Exercise 2 of Section 10.2 "The Linear Correlation Coefficient". 3. Construct the 90% confidence interval for the slope $β1$ of the population regression line based on the sample data set of Exercise 3 of Section 10.2 "The Linear Correlation Coefficient". 4. Construct the 99% confidence interval for the slope $β1$ of the population regression Exercise 4 of Section 10.2 "The Linear Correlation Coefficient". 5. For the data in Exercise 5 of Section 10.2 "The Linear Correlation Coefficient" test, at the 10% level of significance, whether x is useful for predicting y (that is, whether $β1≠0$). 6. For the data in Exercise 6 of Section 10.2 "The Linear Correlation Coefficient" test, at the 5% level of significance, whether x is useful for predicting y (that is, whether $β1≠0$). 7. Construct the 90% confidence interval for the slope $β1$ of the population regression line based on the sample data set of Exercise 7 of Section 10.2 "The Linear Correlation Coefficient". 8. Construct the 95% confidence interval for the slope $β1$ of the population regression line based on the sample data set of Exercise 8 of Section 10.2 "The Linear Correlation Coefficient". 9. For the data in Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" test, at the 1% level of significance, whether x is useful for predicting y (that is, whether $β1≠0$). 10. For the data in Exercise 10 of Section 10.2 "The Linear Correlation Coefficient" test, at the 1% level of significance, whether x is useful for predicting y (that is, whether $β1≠0$). ### Applications 1. For the data in Exercise 11 of Section 10.2 "The Linear Correlation Coefficient" construct a 90% confidence interval for the mean number of new words acquired per month by children between 13 and 18 months of age. 2. For the data in Exercise 12 of Section 10.2 "The Linear Correlation Coefficient" construct a 90% confidence interval for the mean increased braking distance for each additional 100 pounds of vehicle weight. 3. For the data in Exercise 13 of Section 10.2 "The Linear Correlation Coefficient" test, at the 10% level of significance, whether age is useful for predicting resting heart rate. 4. For the data in Exercise 14 of Section 10.2 "The Linear Correlation Coefficient" test, at the 10% level of significance, whether wind speed is useful for predicting wave height. 5. For the situation described in Exercise 15 of Section 10.2 "The Linear Correlation Coefficient" 1. Construct the 95% confidence interval for the mean increase in revenue per additional thousand dollars spent on advertising. 2. An advertising agency tells the business owner that for every additional thousand dollars spent on advertising, revenue will increase by over$25,000. Test this claim (which is the alternative hypothesis) at the 5% level of significance. 3. Perform the test of part (b) at the 10% level of significance. 4. Based on the results in (b) and (c), how believable is the ad agency’s claim? (This is a subjective judgement.) 6. For the situation described in Exercise 16 of Section 10.2 "The Linear Correlation Coefficient" 1. Construct the 90% confidence interval for the mean increase in height per additional inch of length at age two. 2. It is claimed that for girls each additional inch of length at age two means more than an additional inch of height at maturity. Test this claim (which is the alternative hypothesis) at the 10% level of significance. 7. For the data in Exercise 17 of Section 10.2 "The Linear Correlation Coefficient" test, at the 10% level of significance, whether course average before the final exam is useful for predicting the final exam grade. 8. For the situation described in Exercise 18 of Section 10.2 "The Linear Correlation Coefficient", an agronomist claims that each additional million acres planted results in more than 750,000 additional acres harvested. Test this claim at the 1% level of significance. 9. For the data in Exercise 19 of Section 10.2 "The Linear Correlation Coefficient" test, at the 1/10th of 1% level of significance, whether, ignoring all other facts such as age and body mass, the amount of the medication consumed is a useful predictor of blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 "The Linear Correlation Coefficient" test, at the 1% level of significance, whether for each additional inch of girth the age of the tree increases by at least two and one-half years. 11. For the data in Exercise 21 of Section 10.2 "The Linear Correlation Coefficient" 1. Construct the 95% confidence interval for the mean increase in strength at 28 days for each additional hundred psi increase in strength at 3 days. 2. Test, at the 1/10th of 1% level of significance, whether the 3-day strength is useful for predicting 28-day strength. 12. For the situation described in Exercise 22 of Section 10.2 "The Linear Correlation Coefficient" 1. Construct the 99% confidence interval for the mean decrease in energy demand for each one-degree drop in temperature. 2. An engineer with the power company believes that for each one-degree increase in temperature, daily energy demand will decrease by more than 3.6 million watt-hours. Test this claim at the 1% level of significance. ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. http://www.gone.books/sites/all/files/data1.xls 1. Compute the 90% confidence interval for the slope $β1$ of the population regression line with SAT score as the independent variable (x) and GPA as the dependent variable (y). 2. Test, at the 10% level of significance, the hypothesis that the slope of the population regression line is greater than 0.001, against the null hypothesis that it is exactly 0.001. 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). http://www.gone.books/sites/all/files/data12.xls 1. Compute the 95% confidence interval for the slope $β1$ of the population regression line with scores using the original clubs as the independent variable (x) and scores using the new clubs as the dependent variable (y). 2. Test, at the 10% level of significance, the hypothesis that the slope of the population regression line is different from 1, against the null hypothesis that it is exactly 1. 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. http://www.gone.books/sites/all/files/data13.xls 1. Compute the 95% confidence interval for the slope $β1$ of the population regression line with the number of bidders present at the auction as the independent variable (x) and sales price as the dependent variable (y). 2. Test, at the 10% level of significance, the hypothesis that the average sales price increases by more than $90 for each additional bidder at an auction, against the default that it increases by exactly$90. 1. $0.743±0.578$ 2. $−0.610±0.633$ 3. $T=1.732$, $±t0.05=±2.353$, do not reject H0 4. $0.6±0.451$ 5. $T=−4.481$, $±t0.005=±3.355$, reject H0 1. $4.8±1.7$ words 2. $T=2.843$, $±t0.05=±1.860$, reject H0 1. $42.024±28.011$ thousand dollars, 2. $T=1.487$, $t0.05=1.943$, do not reject H0; 3. $t0.10=1.440$, reject H0 3. $T=4.096$, $±t0.05=±1.771$, reject H0 4. $T=25.524$, $±t0.0005=±3.505$, reject H0 1. $2.550±0.127$ hundred psi, 2. $T=41.072$, $±t0.005=±3.674$, reject H0 1. $(0.0014,0.0018)$ 2. $H0:β1=0.001$ vs. $Ha:β1>0.001.$ Test Statistic: $Z=6.1625.$ Rejection Region: $[1.28,+∞).$ Decision: Reject H0. 1. $(101.789,131.4435)$ 2. $H0:β1=90$ vs. $Ha:β1>90.$ Test Statistic: $T=3.5938.$ $d.f.=58.$ Rejection Region: $[1.296,+∞).$ Decision: Reject H0. ### Learning Objective 1. To learn what the coefficient of determination is, how to compute it, and what it tells us about the relationship between two variables x and y. If the scatter diagram of a set of $(x,y)$ pairs shows neither an upward or downward trend, then the horizontal line $y^=y-$ fits it well, as illustrated in Figure 10.11. The lack of any upward or downward trend means that when an element of the population is selected at random, knowing the value of the measurement x for that element is not helpful in predicting the value of the measurement y. Figure 10.11 The line $y^=y-$ fits the scatter diagram well. If the scatter diagram shows a linear trend upward or downward then it is useful to compute the least squares regression line $y^=β^1x+β^0$ and use it in predicting y. Figure 10.12 "Same Scatter Diagram with Two Approximating Lines" illustrates this. In each panel we have plotted the height and weight data of Section 10.1 "Linear Relationships Between Variables". This is the same scatter plot as Figure 10.2 "Plot of Height and Weight Pairs", with the average value line $y^=y-$ superimposed on it in the left panel and the least squares regression line imposed on it in the right panel. The errors are indicated graphically by the vertical line segments. Figure 10.12 Same Scatter Diagram with Two Approximating Lines The sum of the squared errors computed for the regression line, $SSE$, is smaller than the sum of the squared errors computed for any other line. In particular it is less than the sum of the squared errors computed using the line $y^=y-$, which sum is actually the number $SSyy$ that we have seen several times already. A measure of how useful it is to use the regression equation for prediction of y is how much smaller $SSE$ is than $SSyy.$ In particular, the proportion of the sum of the squared errors for the line $y^=y-$ that is eliminated by going over to the least squares regression line is $SSyy−SSESSyy=SSyySSyy−SSESSyy=1−SSESSyy$ We can think of $SSE∕SSyy$ as the proportion of the variability in y that cannot be accounted for by the linear relationship between x and y, since it is still there even when x is taken into account in the best way possible (using the least squares regression line; remember that $SSE$ is the smallest the sum of the squared errors can be for any line). Seen in this light, the coefficient of determination, the complementary proportion of the variability in y, is the proportion of the variability in all the y measurements that is accounted for by the linear relationship between x and y. In the context of linear regression the coefficient of determination is always the square of the correlation coefficient r discussed in Section 10.2 "The Linear Correlation Coefficient". Thus the coefficient of determination is denoted r2, and we have two additional formulas for computing it. ### Definition The coefficient of determinationA number that measures the proportion of the variability in y that is explained by x. of a collection of $(x,y)$ pairs is the number r2 computed by any of the following three expressions: $r2=SSyy−SSESSyy=SSxy2SSxxSSyy=β^1SSxySSyy$ It measures the proportion of the variability in y that is accounted for by the linear relationship between x and y. If the correlation coefficient r is already known then the coefficient of determination can be computed simply by squaring r, as the notation indicates, $r2=(r)2.$ ### Example 10 The value of used vehicles of the make and model discussed in Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line" varies widely. The most expensive automobile in the sample in Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model" has value $30,500, which is nearly half again as much as the least expensive one, which is worth$20,400. Find the proportion of the variability in value that is accounted for by the linear relationship between age and value. Solution: The proportion of the variability in value y that is accounted for by the linear relationship between it and age x is given by the coefficient of determination, r2. Since the correlation coefficient r was already computed in Note 10.19 "Example 3" as $r=−0.819$, $r2=(−0.819)2=0.671.$ About 67% of the variability in the value of this vehicle can be explained by its age. ### Example 11 Use each of the three formulas for the coefficient of determination to compute its value for the example of ages and values of vehicles. Solution: In Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line" we computed the exact values $SSxx=14 SSxy=−28.7 SSyy=87.781 β^1=−2.05$ In Note 10.24 "Example 5" in Section 10.4 "The Least Squares Regression Line" we computed the exact value $SSE=28.946$ Inserting these values into the formulas in the definition, one after the other, gives $r2=SSyy−SSESSyy=87.781−28.94687.781=0.6702475479r2=SSxy2SSxxSSyy=(−28.7)2(14)(87.781)=0.6702475479r2=β^1SSxySSyy=−2.05−28.787.781=0.6702475479$ which rounds to 0.670. The discrepancy between the value here and in the previous example is because a rounded value of r from Note 10.19 "Example 3" was used there. The actual value of r before rounding is 0.8186864772, which when squared gives the value for r2 obtained here. The coefficient of determination r2 can always be computed by squaring the correlation coefficient r if it is known. Any one of the defining formulas can also be used. Typically one would make the choice based on which quantities have already been computed. What should be avoided is trying to compute r by taking the square root of r2, if it is already known, since it is easy to make a sign error this way. To see what can go wrong, suppose $r2=0.64.$ Taking the square root of a positive number with any calculating device will always return a positive result. The square root of 0.64 is 0.8. However, the actual value of r might be the negative number −0.8. ### Key Takeaways • The coefficient of determination r2 estimates the proportion of the variability in the variable y that is explained by the linear relationship between y and the variable x. • There are several formulas for computing r2. The choice of which one to use can be based on which quantities have already been computed so far. ### Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 "The Linear Correlation Coefficient", Section 10.4 "The Least Squares Regression Line", and Section 10.5 "Statistical Inferences About ". 1. For the sample data set of Exercise 1 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 2. For the sample data set of Exercise 2 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 3. For the sample data set of Exercise 3 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 4. For the sample data set of Exercise 4 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 5. For the sample data set of Exercise 5 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 6. For the sample data set of Exercise 6 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=β^1SSxy∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 7. For the sample data set of Exercise 7 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=(SSyy−SSE)∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 8. For the sample data set of Exercise 8 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=(SSyy−SSE)∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 9. For the sample data set of Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=(SSyy−SSE)∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. 10. For the sample data set of Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" find the coefficient of determination using the formula $r2=(SSyy−SSE)∕SSyy.$ Confirm your answer by squaring r as computed in that exercise. ### Applications 1. For the data in Exercise 11 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of age and vocabulary. 2. For the data in Exercise 12 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of vehicle weight and braking distance. 3. For the data in Exercise 13 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of age and resting heart rate. In the age range of the data, does age seem to be a very important factor with regard to heart rate? 4. For the data in Exercise 14 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of wind speed and wave height. Does wind speed seem to be a very important factor with regard to wave height? 5. For the data in Exercise 15 of Section 10.2 "The Linear Correlation Coefficient" find the proportion of the variability in revenue that is explained by level of advertising. 6. For the data in Exercise 16 of Section 10.2 "The Linear Correlation Coefficient" find the proportion of the variability in adult height that is explained by the variation in length at age two. 7. For the data in Exercise 17 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of course average before the final exam and score on the final exam. 8. For the data in Exercise 18 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of acres planted and acres harvested. 9. For the data in Exercise 19 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of the amount of the medication consumed and blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 "The Linear Correlation Coefficient" compute the coefficient of determination and interpret its value in the context of tree size and age. 11. For the data in Exercise 21 of Section 10.2 "The Linear Correlation Coefficient" find the proportion of the variability in 28-day strength of concrete that is accounted for by variation in 3-day strength. 12. For the data in Exercise 22 of Section 10.2 "The Linear Correlation Coefficient" find the proportion of the variability in energy demand that is accounted for by variation in average temperature. ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. Compute the coefficient of determination and interpret its value in the context of SAT scores and GPAs. http://www.gone.books/sites/all/files/data1.xls 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the coefficient of determination and interpret its value in the context of golf scores with the two kinds of golf clubs. http://www.gone.books/sites/all/files/data12.xls 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. Compute the coefficient of determination and interpret its value in the context of the number of bidders at an auction and the price of this type of antique grandfather clock. http://www.gone.books/sites/all/files/data13.xls 1. 0.848 2. 0.631 3. 0.5 4. 0.766 5. 0.715 1. 0.898; about 90% of the variability in vocabulary is explained by age 2. 0.503; about 50% of the variability in heart rate is explained by age. Age is a significant but not dominant factor in explaining heart rate. 3. The proportion is r2 = 0.692. 4. 0.563; about 56% of the variability in final exam scores is explained by course average before the final exam 5. 0.931; about 93% of the variability in the blood concentration of the active ingredient is explained by the amount of the medication consumed 6. The proportion is r2 = 0.984. 1. $r2=21.17%.$ 2. $r2=81.04%.$ ### Learning Objectives 1. To learn the distinction between estimation and prediction. 2. To learn the distinction between a confidence interval and a prediction interval. 3. To learn how to implement formulas for computing confidence intervals and prediction intervals. Consider the following pairs of problems, in the context of Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line", the automobile age and value example. 1. 1. Estimate the average value of all four-year-old automobiles of this make and model. 2. Construct a 95% confidence interval for the average value of all four-year-old automobiles of this make and model. 2. 1. Shylock intends to buy a four-year-old automobile of this make and model next week. Predict the value of the first such automobile that he encounters. 2. Construct a 95% confidence interval for the value of the first such automobile that he encounters. The method of solution and answer to the first question in each pair, (1a) and (2a), are the same. When we set x equal to 4 in the least squares regression equation $y^=−2.05x+32.83$ that was computed in part (c) of Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line", the number returned, $y^=−2.05(4)+32.83=24.63$ which corresponds to value $24,630, is an estimate of precisely the number sought in question (1a): the mean $E(y)$ of all y values when x = 4. Since nothing is known about the first four-year-old automobile of this make and model that Shylock will encounter, our best guess as to its value is the mean value $E(y)$ of all such automobiles, the number 24.63 or$24,630, computed in the same way. The answers to the second part of each question differ. In question (1b) we are trying to estimate a population parameter: the mean of the all the y-values in the sub-population picked out by the value x = 4, that is, the average value of all four-year-old automobiles. In question (2b), however, we are not trying to capture a fixed parameter, but the value of the random variable y in one trial of an experiment: examine the first four-year-old car Shylock encounters. In the first case we seek to construct a confidence interval in the same sense that we have done before. In the second case the situation is different, and the interval constructed has a different name, prediction interval. In the second case we are trying to “predict” where a the value of a random variable will take its value. ### $100(1−α)%$ Confidence Interval for the Mean Value of y at $x=xp$ $y^p±tα∕2 sε 1n+(xp−x-)2SSxx$ where 1. xp is a particular value of x that lies in the range of x-values in the sample data set used to construct the least squares regression line; 2. $y^p$ is the numerical value obtained when the least square regression equation is evaluated at $x=xp$; and 3. the number of degrees of freedom for $tα∕2$ is $df=n−2.$ The assumptions listed in Section 10.3 "Modelling Linear Relationships with Randomness Present" must hold. The formula for the prediction interval is identical except for the presence of the number 1 underneath the square root sign. This means that the prediction interval is always wider than the confidence interval at the same confidence level and value of x. In practice the presence of the number 1 tends to make it much wider. ### $100(1−α)%$ Prediction Interval for an Individual New Value of y at $x=xp$ $y^p±tα∕2 sε 1+1n+(xp−x-)2SSxx$ where 1. xp is a particular value of x that lies in the range of x-values in the data set used to construct the least squares regression line; 2. $y^p$ is the numerical value obtained when the least square regression equation is evaluated at $x=xp$; and 3. the number of degrees of freedom for $tα∕2$ is $df=n−2.$ The assumptions listed in Section 10.3 "Modelling Linear Relationships with Randomness Present" must hold. ### Example 12 Using the sample data of Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line", recorded in Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model", construct a 95% confidence interval for the average value of all three-and-one-half-year-old automobiles of this make and model. Solution: Solving this problem is merely a matter of finding the values of $y^p$, $α$ and $tα∕2$, $sε$, $x-$, and $SSxx$ and inserting them into the confidence interval formula given just above. Most of these quantities are already known. From Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line", $SSxx=14$ and $x-=4.$ From Note 10.31 "Example 7" in Section 10.5 "Statistical Inferences About ", $sε=1.902169814.$ From the statement of the problem $xp=3.5$, the value of x of interest. The value of $y^p$ is the number given by the regression equation, which by Note 10.19 "Example 3" is $y^=−2.05x+32.83$, when $x=xp$, that is, when x = 3.5. Thus here $y^p=−2.05(3.5)+32.83=25.655.$ Lastly, confidence level 95% means that $α=1−0.95=0.05$ so $α∕2=0.025.$ Since the sample size is n = 10, there are $n−2=8$ degrees of freedom. By Figure 12.3 "Critical Values of ", $t0.025=2.306.$ Thus $y^p±tα∕2 sε 1n+(xp−x-)2SSxx=25.655±(2.306)(1.902169814)110+(3.5−4)214=25.655±4.3864035910.1178571429=25.655±1.506$ which gives the interval $(24.149,27.161).$ We are 95% confident that the average value of all three-and-one-half-year-old vehicles of this make and model is between $24,149 and$27,161. ### Example 13 Using the sample data of Note 10.19 "Example 3" in Section 10.4 "The Least Squares Regression Line", recorded in Table 10.3 "Data on Age and Value of Used Automobiles of a Specific Make and Model", construct a 95% prediction interval for the predicted value of a randomly selected three-and-one-half-year-old automobile of this make and model. Solution: The computations for this example are identical to those of the previous example, except that now there is the extra number 1 beneath the square root sign. Since we were careful to record the intermediate results of that computation, we have immediately that the 95% prediction interval is $y^p±tα∕2 sε 1+1n+(xp−x-)2SSxx=25.655±4.3864035911.1178571429=25.655±4.638$ which gives the interval $(21.017,30.293).$ We are 95% confident that the value of a randomly selected three-and-one-half-year-old vehicle of this make and model is between $21,017 and$30,293. Note what an enormous difference the presence of the extra number 1 under the square root sign made. The prediction interval is about two-and-one-half times wider than the confidence interval at the same level of confidence. ### Key Takeaways • A confidence interval is used to estimate the mean value of y in the sub-population determined by the condition that x have some specific value xp. • The prediction interval is used to predict the value that the random variable y will take when x has some specific value xp. ### Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in previous sections. 1. For the sample data set of Exercise 1 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 4. 2. Construct the 90% confidence interval for that mean value. 2. For the sample data set of Exercise 2 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 4. 2. Construct the 90% confidence interval for that mean value. 3. For the sample data set of Exercise 3 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 7. 2. Construct the 95% confidence interval for that mean value. 4. For the sample data set of Exercise 4 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 2. 2. Construct the 80% confidence interval for that mean value. 5. For the sample data set of Exercise 5 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 1. 2. Construct the 80% confidence interval for that mean value. 6. For the sample data set of Exercise 6 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 5. 2. Construct the 95% confidence interval for that mean value. 7. For the sample data set of Exercise 7 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 6. 2. Construct the 99% confidence interval for that mean value. 3. Is it valid to make the same estimates for x = 12? Explain. 8. For the sample data set of Exercise 8 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 12. 2. Construct the 80% confidence interval for that mean value. 3. Is it valid to make the same estimates for x = 0? Explain. 9. For the sample data set of Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 0. 2. Construct the 90% confidence interval for that mean value. 3. Is it valid to make the same estimates for $x=−1$? Explain. 10. For the sample data set of Exercise 9 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the mean value of y in the sub-population determined by the condition x = 8. 2. Construct the 95% confidence interval for that mean value. 3. Is it valid to make the same estimates for x = 0? Explain. ### Applications 1. For the data in Exercise 11 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the average number of words in the vocabulary of 18-month-old children. 2. Construct the 95% confidence interval for that mean value. 3. Is it valid to make the same estimates for two-year-olds? Explain. 2. For the data in Exercise 12 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the average braking distance of automobiles that weigh 3,250 pounds. 2. Construct the 80% confidence interval for that mean value. 3. Is it valid to make the same estimates for 5,000-pound automobiles? Explain. 3. For the data in Exercise 13 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the resting heart rate of a man who is 35 years old. 2. One of the men in the sample is 35 years old, but his resting heart rate is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the 90% confidence interval for the mean resting heart rate of all 35-year-old men. 4. For the data in Exercise 14 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the wave height when the wind speed is 13 miles per hour. 2. One of the wind speeds in the sample is 13 miles per hour, but the height of waves that day is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the 90% confidence interval for the mean wave height on days when the wind speed is 13 miles per hour. 5. For the data in Exercise 15 of Section 10.2 "The Linear Correlation Coefficient" 1. The business owner intends to spend $2,500 on advertising next year. Give an estimate of next year’s revenue based on this fact. 2. Construct the 90% prediction interval for next year’s revenue, based on the intent to spend$2,500 on advertising. 6. For the data in Exercise 16 of Section 10.2 "The Linear Correlation Coefficient" 1. A two-year-old girl is 32.3 inches long. Predict her adult height. 2. Construct the 95% prediction interval for the girl’s adult height. 7. For the data in Exercise 17 of Section 10.2 "The Linear Correlation Coefficient" 1. Lodovico has a 78.6 average in his physics class just before the final. Give a point estimate of what his final exam grade will be. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Lodovico’s final exam grade at the 90% level of confidence. 8. For the data in Exercise 18 of Section 10.2 "The Linear Correlation Coefficient" 1. This year 86.2 million acres of corn were planted. Give a point estimate of the number of acres that will be harvested this year. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the number of acres that will be harvested this year, at the 99% level of confidence. 9. For the data in Exercise 19 of Section 10.2 "The Linear Correlation Coefficient" 1. Give a point estimate for the blood concentration of the active ingredient of this medication in a man who has consumed 1.5 ounces of the medication just recently. 2. Gratiano just consumed 1.5 ounces of this medication 30 minutes ago. Construct a 95% prediction interval for the concentration of the active ingredient in his blood right now. 10. For the data in Exercise 20 of Section 10.2 "The Linear Correlation Coefficient" 1. You measure the girth of a free-standing oak tree five feet off the ground and obtain the value 127 inches. How old do you estimate the tree to be? 2. Construct a 90% prediction interval for the age of this tree. 11. For the data in Exercise 21 of Section 10.2 "The Linear Correlation Coefficient" 1. A test cylinder of concrete three days old fails at 1,750 psi. Predict what the 28-day strength of the concrete will be. 2. Construct a 99% prediction interval for the 28-day strength of this concrete. 3. Based on your answer to (b), what would be the minimum 28-day strength you could expect this concrete to exhibit? 12. For the data in Exercise 22 of Section 10.2 "The Linear Correlation Coefficient" 1. Tomorrow’s average temperature is forecast to be 53 degrees. Estimate the energy demand tomorrow. 2. Construct a 99% prediction interval for the energy demand tomorrow. 3. Based on your answer to (b), what would be the minimum demand you could expect? ### Large Data Set Exercises 1. Large Data Set 1 lists the SAT scores and GPAs of 1,000 students. http://www.gone.books/sites/all/files/data1.xls 1. Give a point estimate of the mean GPA of all students who score 1350 on the SAT. 2. Construct a 90% confidence interval for the mean GPA of all students who score 1350 on the SAT. 2. Large Data Set 12 lists the golf scores on one round of golf for 75 golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). http://www.gone.books/sites/all/files/data12.xls 1. Thurio averages 72 strokes per round with his own clubs. Give a point estimate for his score on one round if he switches to the new clubs. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Thurio’s score on one round if he switches to the new clubs, at 90% confidence. 3. Large Data Set 13 records the number of bidders and sales price of a particular type of antique grandfather clock at 60 auctions. http://www.gone.books/sites/all/files/data13.xls 1. There are seven likely bidders at the Verona auction today. Give a point estimate for the price of such a clock at today’s auction. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the likely sale price of such a clock at today’s sale, at 95% confidence. 1. 5.647, 2. $5.647±1.253$ 1. −0.188, 2. $−0.188±3.041$ 1. 1.875, 2. $1.875±1.423$ 1. 5.4, 2. $5.4±3.355$, 3. invalid (extrapolation) 1. 2.4, 2. $2.4±1.474$, 3. valid (−1 is in the range of the x-values in the data set) 1. 31.3 words, 2. $31.3±7.1$ words, 3. not valid, since two years is 24 months, hence this is extrapolation 1. 73.2 beats/min, 2. The man’s heart rate is not the predicted average for all men his age. c. $73.2±1.2$ beats/min 1. $224,562, 2.$224,562 ± \$28,699 1. 74, 2. Prediction (one person, not an average for all who have average 78.6 before the final exam), 3. $74±24$ 1. 0.066%, 2. $0.066±0.034%$ 1. 4,656 psi, 2. $4,656±321$ psi, 3. $4,656−321=4,335$ psi 1. 2.19 2. $(2.1421,2.2316)$ 1. 7771.39 2. A prediction interval. 3. $(7410.41,8132.38)$ ### Learning Objective 1. To see a complete linear correlation and regression analysis, in a practical setting, as a cohesive whole. In the preceding sections numerous concepts were introduced and illustrated, but the analysis was broken into disjoint pieces by sections. In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence. In general educators are convinced that, all other factors being equal, class attendance has a significant bearing on course performance. To investigate the relationship between attendance and performance, an education researcher selects for study a multiple section introductory statistics course at a large university. Instructors in the course agree to keep an accurate record of attendance throughout one semester. At the end of the semester 26 students are selected a random. For each student in the sample two measurements are taken: x, the number of days the student was absent, and y, the student’s score on the common final exam in the course. The data are summarized in Table 10.4 "Absence and Score Data". Table 10.4 Absence and Score Data Absences Score Absences Score x y x y 2 76 4 41 7 29 5 63 2 96 4 88 7 63 0 98 2 79 1 99 7 71 0 89 0 88 1 96 0 92 3 90 6 55 1 90 6 70 3 68 2 80 1 84 2 75 3 80 1 63 1 78 A scatter plot of the data is given in Figure 10.13 "Plot of the Absence and Exam Score Pairs". There is a downward trend in the plot which indicates that on average students with more absences tend to do worse on the final examination. Figure 10.13 Plot of the Absence and Exam Score Pairs The trend observed in Figure 10.13 "Plot of the Absence and Exam Score Pairs" as well as the fairly constant width of the apparent band of points in the plot makes it reasonable to assume a relationship between x and y of the form $y=β1x+β0+ε$ where $β1$ and $β0$ are unknown parameters and ε is a normal random variable with mean zero and unknown standard deviation σ. Note carefully that this model is being proposed for the population of all students taking this course, not just those taking it this semester, and certainly not just those in the sample. The numbers $β1$, $β0$, and σ are parameters relating to this large population. First we perform preliminary computations that will be needed later. The data are processed in Table 10.5 "Processed Absence and Score Data". Table 10.5 Processed Absence and Score Data x y x2 xy y2 x y x2 xy y2 2 76 4 152 5776 4 41 16 164 1681 7 29 49 203 841 5 63 25 315 3969 2 96 4 192 9216 4 88 16 352 7744 7 63 49 441 3969 0 98 0 0 9604 2 79 4 158 6241 1 99 1 99 9801 7 71 49 497 5041 0 89 0 0 7921 0 88 0 0 7744 1 96 1 96 9216 0 92 0 0 8464 3 90 9 270 8100 6 55 36 330 3025 1 90 1 90 8100 6 70 36 420 4900 3 68 9 204 4624 2 80 4 160 6400 1 84 1 84 7056 2 75 4 150 5625 3 80 9 240 6400 1 63 1 63 3969 1 78 1 78 6084 Adding up the numbers in each column in Table 10.5 "Processed Absence and Score Data" gives $Σx=71, Σy=2001, Σx2=329, Σxy=4758, and Σy2=161511.$ Then $SSxx=Σx2−1n(Σx)2=329−126(71)2=135.1153846SSxy=Σxy−1n(Σx)(Σy)=4758−126(71)(2001)=−706.2692308SSyy=Σy2−1n(Σy)2=161511−126(2001)2=7510.961538$ and $x-=Σxn=7126=2.730769231 and y-=Σyn=200126=76.96153846$ We begin the actual modelling by finding the least squares regression line, the line that best fits the data. Its slope and y-intercept are $β^1=SSxySSxx=−706.2692308135.1153846=−5.227156278β^0=y-−β^1x-=76.96153846−(−5.227156278)(2.730769231)=91.23569553$ Rounding these numbers to two decimal places, the least squares regression line for these data is $y^=−5.23 x+91.24.$ The goodness of fit of this line to the scatter plot, the sum of its squared errors, is $SSE=SSyy−β^1SSxy=7510.961538−(−5.227156278)(−706.2692308)=3819.181894$ This number is not particularly informative in itself, but we use it to compute the important statistic $sε=SSEn−2=3819.18189424=12.11988495$ The statistic $sε$ estimates the standard deviation σ of the normal random variable ε in the model. Its meaning is that among all students with the same number of absences, the standard deviation of their scores on the final exam is about 12.1 points. Such a large value on a 100-point exam means that the final exam scores of each sub-population of students, based on the number of absences, are highly variable. The size and sign of the slope $β^1=−5.23$ indicate that, for every class missed, students tend to score about 5.23 fewer points lower on the final exam on average. Similarly for every two classes missed students tend to score on average $2×5.23=10.46$ fewer points on the final exam, or about a letter grade worse on average. Since 0 is in the range of x-values in the data set, the y-intercept also has meaning in this problem. It is an estimate of the average grade on the final exam of all students who have perfect attendance. The predicted average of such students is $β^0=91.24.$ Before we use the regression equation further, or perform other analyses, it would be a good idea to examine the utility of the linear regression model. We can do this in two ways: 1) by computing the correlation coefficient r to see how strongly the number of absences x and the score y on the final exam are correlated, and 2) by testing the null hypothesis $H0:β1=0$ (the slope of the population regression line is zero, so x is not a good predictor of y) against the natural alternative $Ha:β1<0$ (the slope of the population regression line is negative, so final exam scores y go down as absences x go up). The correlation coefficient r is $r=SSxySSxxSSyy=−706.2692308(135.1153846)(7510.961538)=−0.7010840977$ a moderate negative correlation. Turning to the test of hypotheses, let us test at the commonly used 5% level of significance. The test is $H0:β1=0vs. Ha:β1<0 @ α=0.05$ From Figure 12.3 "Critical Values of ", with $df=26−2=24$ degrees of freedom $t0.05=1.711$, so the rejection region is $(−∞,−1.711].$ The value of the standardized test statistic is $t=β^1−B0sε∕SSxx=−5.227156278−012.11988495∕135.1153846=−5.013$ which falls in the rejection region. We reject H0 in favor of Ha. The data provide sufficient evidence, at the 5% level of significance, to conclude that $β1$ is negative, meaning that as the number of absences increases average score on the final exam decreases. As already noted, the value $β1=−5.23$ gives a point estimate of how much one additional absence is reflected in the average score on the final exam. For each additional absence the average drops by about 5.23 points. We can widen this point estimate to a confidence interval for $β1.$ At the 95% confidence level, from Figure 12.3 "Critical Values of " with $df=26−2=24$ degrees of freedom, $tα∕2=t0.025=2.064.$ The 95% confidence interval for $β1$ based on our sample data is $β^1±tα∕2sεSSxx=−5.23±2.064 12.11988495135.1153846=−5.23±2.15$ or $(−7.38,−3.08).$ We are 95% confident that, among all students who ever take this course, for each additional class missed the average score on the final exam goes down by between 3.08 and 7.38 points. If we restrict attention to the sub-population of all students who have exactly five absences, say, then using the least squares regression equation $y^=−5.23x+91.24$ we estimate that the average score on the final exam for those students is $y^=−5.23(5)+91.24=65.09$ This is also our best guess as to the score on the final exam of any particular student who is absent five times. A 95% confidence interval for the average score on the final exam for all students with five absences is $y^p±tα∕2sε1n+(xp−x-)2SSxx=65.09±(2.064)(12.11988495)126+(5−2.730769231)2135.1153846=65.09±25.015442540.0765727299=65.09±6.92$ which is the interval $(58.17,72.01).$ This confidence interval suggests that the true mean score on the final exam for all students who are absent from class exactly five times during the semester is likely to be between 58.17 and 72.01. If a particular student misses exactly five classes during the semester, his score on the final exam is predicted with 95% confidence to be in the interval $y^p±tα∕2sε1+1n+(xp−x-)2SSxx=65.09±25.015442541.0765727299=65.09±25.96$ which is the interval $(39.13,91.05).$ This prediction interval suggests that this individual student’s final exam score is likely to be between 39.13 and 91.05. Whereas the 95% confidence interval for the average score of all student with five absences gave real information, this interval is so wide that it says practically nothing about what the individual student’s final exam score might be. This is an example of the dramatic effect that the presence of the extra summand 1 under the square sign in the prediction interval can have. Finally, the proportion of the variability in the scores of students on the final exam that is explained by the linear relationship between that score and the number of absences is estimated by the coefficient of determination, r2. Since we have already computed r above we easily find that $r2=(−0.7010840977)2=0.491518912$ or about 49%. Thus although there is a significant correlation between attendance and performance on the final exam, and we can estimate with fair accuracy the average score of students who miss a certain number of classes, nevertheless less than half the total variation of the exam scores in the sample is explained by the number of absences. This should not come as a surprise, since there are many factors besides attendance that bear on student performance on exams. ### Key Takeaway • It is a good idea to attend class. ### Exercises The exercises in this section are unrelated to those in previous sections. 1. The data give the amount x of silicofluoride in the water (mg/L) and the amount y of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE$, $sε$, and r, and so on). In the hypothesis test use as the alternative hypothesis $β1>0$, and test at the 5% level of significance. Use confidence level 95% for the confidence interval for $β1.$ Construct 95% confidence and predictions intervals at $xp=2$ at the end. $x0.00.01.11.41.6y0.30.14.73.25.1$ $x1.72.02.02.22.2y7.05.06.18.69.5$ 2. The table gives the weight x (thousands of pounds) and available heat energy y (million BTU) of a standard cord of various species of wood typically used for heating. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE$, $sε$, and r, and so on). In the hypothesis test use as the alternative hypothesis $β1>0$, and test at the 5% level of significance. Use confidence level 95% for the confidence interval for $β1.$ Construct 95% confidence and predictions intervals at $xp=5$ at the end. $x3.373.504.294.004.64y23.617.520.121.628.1$ $x4.994.945.483.264.16y25.327.030.718.920.7$ ### Large Data Set Exercises 1. Large Data Sets 3 and 3A list the shoe sizes and heights of 174 customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men’s and women’s shoes are not measured on the same scale; for example, a size 8 shoe for men is not the same size as a size 8 shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable (x) and height as the dependent variable (y), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two? http://www.gone.books/sites/all/files/data3.xls http://www.gone.books/sites/all/files/data3A.xls 2. Separate out from Large Data Set 3A just the data on men and do a complete analysis, with shoe size as the independent variable (x) and height as the dependent variable (y). Use $α=0.05$ and $xp=10$ whenever appropriate. http://www.gone.books/sites/all/files/data3A.xls 3. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable (x) and height as the dependent variable (y). Use $α=0.05$ and $xp=10$ whenever appropriate. http://www.gone.books/sites/all/files/data3A.xls 1. $Σx=14.2$, $Σy=49.6$, $Σxy=91.73$, $Σx2=26.3$, $Σy2=333.86.$ $SSxx=6.136$, $SSxy=21.298$, $SSyy=87.844.$ $x-=1.42$, $y-=4.96.$ $β^1=3.47$, $β^0=0.03.$ $SSE=13.92.$ $sε=1.32.$ r = 0.9174, r2 = 0.8416. $df=8$, T = 6.518. The 95% confidence interval for $β1$ is: $(2.24,4.70).$ At $xp=2$, the 95% confidence interval for $E(y)$ is $(5.77,8.17).$ At $xp=2$, the 95% prediction interval for y is $(3.73,10.21).$ 1. The positively correlated trend seems less profound than that in each of the previous plots. 2. The regression line: $y^=3.3426x+138.7692.$ Coefficient of Correlation: r = 0.9431. Coefficient of Determination: r2 = 0.8894. $SSE=283.2473.$ $se=1.9305.$ A 95% confidence interval for $β1$: $(3.0733,3.6120).$ Test Statistic for $H0:β1=0$: T = 24.7209. At $xp=10$, $y^=172.1956$; a 95% confidence interval for the mean value of y is: $(171.5577,172.8335)$; and a 95% prediction interval for an individual value of y is: $(168.2974,176.0938).$ $SSxx=Σx2−1n(Σx)2 SSxy=Σxy−1n(Σx)(Σy) SSyy=Σy2−1n(Σy)2$ Correlation coefficient: $r=SSxySSxx·SSyy$ Least squares regression equation (equation of the least squares regression line): $y^=β^1x+β^0 where β^1=SSxySSxx and β^0=y-−β^1x-$ Sum of the squared errors for the least squares regression line: $SSE=SSyy−β^1SSxy.$ Sample standard deviation of errors: $sε=SSEn−2$ $100(1−α)%$ confidence interval for $β1$: $β^1±tα∕2 sεSSxx (df=n−2)$ Standardized test statistic for hypothesis tests concerning $β1$: $T=β^1−B0sε∕SSxx (df=n−2)$ Coefficient of determination: $r2=SSyy−SSESSyy=SSxy2SSxxSSyy=β^1SSxySSyy$ $100(1−α)%$ confidence interval for the mean value of y at $x=xp$: $y^p±tα∕2 sε 1n+(xp−x-)2SSxx (df=n−2)$ $100(1−α)%$ prediction interval for an individual new value of y at $x=xp$: $y^p±tα∕2 sε 1+1n+(xp−x-)2SSxx (df=n−2)$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 581, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084429502487183, "perplexity": 403.56517749740084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00618.warc.gz"}
https://brilliant.org/discussions/thread/have-you-tried/
× # Have you tried What will be the moment of inertia of a ring of mass m and radius r in xy plane about an axis making angle $$\theta$$ with the z-axis given that moment of inertia about z axis is $$m r^2$$. Note by Kushal Patankar 1 year, 11 months ago Sort by: i am getting 2m(r^2)(ln(R)/(pi) - m((cos(theta))^2)(R^2)/(2pi) i am not sure too..............are getting the same??? · 1 year, 11 months ago This is wrong, putting $$\theta = \pi/2$$ does not yield $$\frac {mR^2} {2}$$ · 1 year, 11 months ago You can check by putting θ=0 · 1 year, 11 months ago what are you getting ? · 1 year, 11 months ago I have a guess $I= mr^2 \cos^2 \theta +\frac{mr^2}{2} \sin ^2 \theta$ · 1 year, 11 months ago did you solved it ... ? · 1 year, 11 months ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649230599403381, "perplexity": 2634.28183046726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00609-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.open.edu/openlearn/science-maths-technology/engineering-and-technology/technology/structural-devices/content-section-6.5
Structural devices This free course is available to start right now. Review the full course description and key learning outcomes and create an account and enrol if you want a free statement of participation. Free course # 6.5 Q-value The rate at which the mass–spring system loses energy to its surroundings is referred to as the Q-value for the oscillator. The Q-value is defined as: ΔE/E is the fractional energy loss per cycle of the oscillation. This can also be expressed in terms of angular frequency as: or frequency as: where Δω and Δf are the width of the peak at its halfway point. This energy loss is referred to as damping and is due to the frictional losses I have already mentioned. A large value of Q equates to a very small energy loss. Q stands for quality; in systems where oscillations are desired, such as the design of a bell, the larger the value of Q the longer the bell will ring. Figure 30 shows two typical plots of amplitude vs angular frequency (A0 vs ω) for two driven resonators, one with a high Q and the other with a lower Q-value. Figure 30 Plots of amplitude vs frequency for two resonators. Plot (a) is for a high Q-value, plot (b) is for a lower Q You can see that the amplitude of plot (a) is far greater than that of (b), but also that it falls away much more rapidly either side of the peak. This sharpness is characteristic of an undamped oscillator with a high Q and it exactly mirrors the amplitude vs frequency behaviour shown in Figure 28. ## SAQ 15 Should I look for high or low Q-values in the following cases? Why? • (a) A clock pendulum; • (b) car suspension; • (c) a bell. • (a) We want the clock to run for as long as possible and at an extremely precise frequency; so we look for the highest possible Q-value. • (b) Car suspension is damped by shock absorbers; these reduce resonance effects which might otherwise prove disastrous. A very low Q-value is appropriate here. • (c) As I have already mentioned we need a high Q-value in order to ensure that the bell will ring for some time. However, a high Q-value would lead to a quiet bell and so a compromise has to be sought between volume and longevity. T356_1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8280919790267944, "perplexity": 964.0634421389827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256215.47/warc/CC-MAIN-20190521022141-20190521044141-00188.warc.gz"}
http://dlinares.org/mlepsychotrands.html
Let’s suppose that in each trial we present one stimulus to the left or right of the fixation point and the observer needs to decide in which location the stimulus was presented. We use five contrast levels and for each level we present the stimulus 100 times. Here some hypothetical data library('ggplot2') n<-100 x<-c(0.05,0.10,0.15,0.20,0.25) # contrast k<-c(59,56,69,90,96) # number of times that the observer reports that can see the stimulus y<-k/n dat<-data.frame(x,k,y) p<-ggplot()+ geom_point(data=dat,aes(x=x,y=y)) p Instead of chosing a shape for the psychometric function, we choose a shape for the transduction of contrast to perceived contrast. From the transducer function and assuming constant noise (additive), we will recover the shape of the psychometric function. First, let’s choose a general expression for the transduction of contrast $$\mu$$ $\mu(x)=\frac{p_1 x^{p_3}}{p_2+x^{p_3}}$ mu<-function(x,p){ f<-function(x) { #naka-rushton shape num<-p[1]*x^p[3]; den<-x^p[3]+p[2] return(num/den) } sapply(x,function(x) f(x)) } Let’s plot $$\mu$$ for some arbitrary parameters as examples xSeq<-seq(0,1,.01) yTransdEx1<-mu(xSeq,c(1,1,1)) example1<-data.frame(xSeq,yTransdEx1) qplot(xSeq,yTransdEx1,example1,geom='line') ## Warning: Ignoring unknown parameters: NA yTransdEx2<-mu(xSeq,c(1,1,2)) example2<-data.frame(xSeq,yTransdEx2) qplot(xSeq,yTransdEx2,example2,geom='line') ## Warning: Ignoring unknown parameters: NA We assume that the perceived contrast elicited by the stimulus containing zero contrast is described by a random variable $$R_p$$ distributed normally with mean $$\mu(0)$$ and variance $$\sigma^2$$ and that the perceived contrast response elicited by the stimulus containing non-zero contrast is described by a random variable $$R_v$$ distributed normally with mean $$\mu(x)$$ and the same variance $$\sigma^2$$. We construct the random variable $$D=R_v-R_p$$, which is distributed normally with mean $$\mu(x)-\mu(0)$$ and variance $$2\sigma^2$$. The observer will choose the non-zero contrast stimulus when $$D>0$$, which is $$1-F(0)$$ where $$F$$ is the cumulative distribution function. That is, $P(D>0)=1 - \int_{-\infty}^{0}N(D;\mu(x)-\mu(0),2\sigma^2) dD$ where N is the normal distribution. Given that the D is just a function of the contrast $$x$$, the above expression is just an expression of the psychometric function. psychoFromTransd0<-function(x,p) { # let's allow p to contain the parameters of the transducer and the noise pNoise<-tail(p,1) sapply(mu(x,pTransd)-mu(0,pTransd), function(z) 1 - pnorm(0,mean=z,sd=sqrt(2)*pNoise)) } Let’s plot two psychometric functions for some arbitrary parameters as examples ySeqEx1<-psychoFromTransd0(xSeq,c(1,1,1,.1)) examplePsycho1<-data.frame(xSeq,yTransdEx1) qplot(xSeq,ySeqEx1,examplePsycho1,geom='line') ## Warning: Ignoring unknown parameters: NA ySeqEx2<-psychoFromTransd0(xSeq,c(1,1,2,.3)) examplePsycho2<-data.frame(xSeq,yTransdEx2) qplot(xSeq,ySeqEx2,examplePsycho2,geom='line') ## Warning: Ignoring unknown parameters: NA Let’s find the best parameters using maximum likelihood estimation negLogL<-function(p,d,fun) { phi<-fun(d$x,p) -sum( d$k*log(phi)+(n-d$k)*log(1-phi) ) } MLEparameters<-optim(c(1,1,1,.1), negLogL,d=dat,fun=psychoFromTransd0)$par MLEparameters ## [1] 2.13567206 0.05649665 2.74941451 0.23028307 and plot the psychometric function ySeq<-psychoFromTransd0(xSeq,MLEparameters) curve<-data.frame(xSeq,ySeq) p<-p+geom_line(data=curve,aes(x=xSeq,ySeq)) p and the estimated transducer yTransd<-mu(xSeq,MLEparameters) qplot(xSeq,yTransd,data.frame(xSeq,yTransd),geom='line') ## Warning: Ignoring unknown parameters: NA We might have used a simpler transduction of contrast, such as a linear transduction. In this case, it could be easily demonstrated that the shape of the psychometric function is a cumulative normal (Kingdom and Prins, 2009). ## Fitting several psychometric functions for discrimination Typically, we fit psychometric functions that have only two parameters. Here, instead we used four (the three parameters of the transducer and the noise parameter). Maybe too many parameters to fit a psychometric function. So, this approach to fit psychometric functions, maybe makes more sense when we don’t just measure detection, but discrimination of contrast for several contrasts (pedestals). Let’s suppose that in each trial we present one stimulus to the left and one to the right of the fixation point and the observer needs to decide which stimulus have higher contrast. We fix the contrast of one of them (pedestal) and change the contrast of the other one (variable) in each trial. For the variable stimulus, we use five contrast levels and for each level we present the stimulus 100 times. Here some hypothetical data for 4 pedestal contrast library('ggplot2') n<-100 pedestal<-c(0,.25,.5,.75) x1<-pedestal[1]+c(0.05,0.10,0.15,0.20,0.25) # contrast of the variable k1<-c(59,62,70,82,90) # number of times that the observer reports that the variable has higher contrast y1<-k1/n dat1<-data.frame(x=x1,k=k1,y=y1,pedestal=pedestal[1]) x2<-pedestal[2]+c(0.05,0.10,0.15,0.20,0.25) k2<-c(58,80,90,95,97) y2<-k2/n dat2<-data.frame(x=x2,k=k2,y=y2,pedestal=pedestal[2]) x3<-pedestal[3]+c(0.05,0.10,0.15,0.20,0.25) k3<-c(50,62,70,79,86) y3<-k3/n dat3<-data.frame(x=x3,k=k3,y=y3,pedestal=pedestal[3]) x4<-pedestal[4]+c(0.05,0.10,0.15,0.20,0.25) k4<-c(50,62,65,67,75) y4<-k4/n dat4<-data.frame(x=x4,k=k4,y=y4,pedestal=pedestal[4]) datD<-rbind(dat1,dat2,dat3,dat4) pD<-ggplot()+ geom_point(data=datD,aes(x=x,y=y,color=factor(pedestal))) pD We rewrite the function that builds the psychometric function from the transducer to replace the 0 contrast for a general pedestal psychoFromTransd<-function(x,pedestal,p) { pNoise<-tail(p,1) sapply(mu(x,pTransd)-mu(pedestal,pTransd), function(z) 1- pnorm(0,mean=z,sd=sqrt(2)*pNoise)) } and change the likelihood function to sum the log likelihood for each pedestal. library('plyr') negLogLD<-function(p,d,fun) { negLogForEachPedestal<-ddply(d,.(pedestal),function(d2){ phi<-fun(d2$x,unique(d2$pedestal),p) negLog<- -sum( d2$k*log(phi)+(n-d2$k)*log(1-phi)) data.frame(negLog) }) sum(negLogForEachPedestal$negLog) } MLEparametersD<-optim(c(1,1,1,.1), negLogLD,d=datD,fun=psychoFromTransd)$par MLEparametersD ## [1] 1.1557587 0.1952639 2.0045113 0.1479516 These are the psychometric functions (we needed just 4 parameters to fit 4 psychometric functions) curves<-ddply(data.frame(pedestal),.(pedestal),function(d){ xSeq<-seq(d$pedestal,d$pedestal+.25,by=.01) ySeq<-psychoFromTransd(xSeq,d\$pedestal,MLEparametersD) data.frame(xSeq,ySeq) }) pD<-pD+geom_line(data=curves,aes(x=xSeq,ySeq,color=factor(pedestal))) pD and the estimated transducer. yTransdD<-mu(xSeq,MLEparametersD) qplot(xSeq,yTransdD,data.frame(xSeq,yTransdD),geom='line') ## Warning: Ignoring unknown parameters: NA ## References García-Pérez, M. A., & Alcalá-Quintana, R. (2007). The transducer model for contrast detection and discrimination: formal relations, implications, and an empirical test. Spatial Vision, 20(1-2), 5–43. Linares, D., & Nishida, S. (2013). A synchronous surround increases the motion strength gain of motion. Journal of Vision, 13(13), 12–12. Prins, N., & Kingdom, F. A. A. (2010). Psychophysics: a practical introduction. London: Academic Press.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359456062316895, "perplexity": 1577.5959632032047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496080.57/warc/CC-MAIN-20190220190527-20190220212527-00001.warc.gz"}
http://mathoverflow.net/feeds/question/20968
rules for operator commutativity? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-18T23:54:48Z http://mathoverflow.net/feeds/question/20968 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/20968/rules-for-operator-commutativity rules for operator commutativity? crippledlambda 2010-04-10T23:44:18Z 2010-04-12T17:27:51Z <p>Hi, my apologies for a rather non-specific question. I wonder if there is a general set of conditions under which operators are commutative in functional analysis. Most that I've found is that "operators are, in general, not commutative". Is there any reference someone could point me to for some kind of review or special cases in which commutativity is established (or forbidden)? Thanks!</p> http://mathoverflow.net/questions/20968/rules-for-operator-commutativity/20994#20994 Answer by Soheil Malekzadeh for rules for operator commutativity? Soheil Malekzadeh 2010-04-11T09:39:59Z 2010-04-11T09:39:59Z <p>Hi, I suppose this book will be useful for you:</p> <p>Banach Algebra Techniques in Operator Theory (R. Douglas)</p> http://mathoverflow.net/questions/20968/rules-for-operator-commutativity/21007#21007 Answer by Pete L. Clark for rules for operator commutativity? Pete L. Clark 2010-04-11T13:38:17Z 2010-04-12T17:09:22Z <p>One obvious but important observation is that, for operators on a $n$-dimensional vector space over a field, if $1 &lt; n &lt; \infty$, we have $AB \neq BA$ <em>generically</em>. In other words, consider the commutativity locus $\mathcal{C}_n$ of all pairs of $n \times n$ matrices $A,B$ such that $AB = BA$ as a subset of $\mathbb{A}^{n^2}$. This is clearly a Zariski closed set -- i.e., defined by the vanshing of polynomial equations. It is also proper: take e.g. <code>$A = \left[ \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 1 \end{array} \right] \oplus 0_{n-2}$</code> and <code>$B = \left[ \begin{array}{cc} 0 &amp; 1 \\ 0 &amp; 1 \end{array} \right] \oplus 0_{n-2}$</code>. Since $\mathbb{A}^{n^2}$ is an irreducible variety, $\mathcal{C}_N$ therefore has dimension less than $N^2$. This implies that over a field like $\mathbb{R}$ or $\mathbb{C}$ where such things make sense, $\mathcal{C}_N$ has measure zero, thus giving a precise meaning to the idea that two matrices, taken at random, will not commute.</p> <p>One could ask for more information about the subvariety $\mathcal{C}_N$: what is its dimension? is it irreducible? and so forth. (Surely someone here knows the answers.)</p> <p>I would guess it is also true that for a Banach space $E$ (over any locally compact, nondiscrete field $k$, say) of dimension $> 1$, the locus $\mathcal{C}_E$ of all commuting pairs of bounded linear operators is <strong>meager</strong> (in the sense of Baire category) in the space $B(E,E) \times B(E,E)$ of all pairs of bounded linear operators on $E$. </p> <p>Kevin Buzzard has enunciated a principle that without further constraints, the optimal answer to a question "What is a necessary and sufficient condition for $X$ to hold?" is simply "X". This seems quite applicable here: I don't think you'll find a necessary and sufficient condition for two linear operators to commute which is nearly as simple and transparent as the beautiful identity $AB = BA$.</p> <p>Still, you could ask for useful sufficient conditions. Diagonalizable operators with the same eigenspaces, as mentioned by Jonas Meyer above, is one. Another is that if $A$ and $B$ are both polynomials in the same operator $C$: this shows up for instance in the Jordan decomposition. </p> http://mathoverflow.net/questions/20968/rules-for-operator-commutativity/21129#21129 Answer by Gerald Edgar for rules for operator commutativity? Gerald Edgar 2010-04-12T17:27:51Z 2010-04-12T17:27:51Z <p>Let $P,Q$ be operators on complex Hilbert space. If there is an operator $T$ and polynomials $p,q$ so that $P = p(T)$ and $Q = q(T)$, then $P,Q$ commute. </p> <p>More generally, in a setting where the functional calculus works, if there are any two functions $p,q$ so that $P = p(T)$ and $Q = q(T)$, then $P,Q$ commute. For example, if $T$ is Hermitian (or more generally, normal), and $p,q$ are just measurable functions on the complex plane, then $p(T)$ and $q(T)$ are defined and commute. </p> <p>As an aside, you have to decide whether you want BOUNDED operators or not, and proceed accordingly.</p>
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968790590763092, "perplexity": 403.3466767087687}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382989/warc/CC-MAIN-20130516092622-00039-ip-10-60-113-184.ec2.internal.warc.gz"}
http://opentradingsystem.com/quantNotes/Preservation_of_closeness_under_linear_transformation_.html
I. Basic math. II. Pricing and Hedging. III. Explicit techniques. IV. Data Analysis. V. Implementation tools. 1 Finite differences. 2 Gauss-Hermite Integration. 3 Asymptotic expansions. 4 Monte-Carlo. 5 Convex Analysis. A. Basic concepts of convex analysis. B. Caratheodory's theorem. C. Relative interior. D. Recession cone. E. Intersection of nested convex sets. F. Preservation of closeness under linear transformation. G. Weierstrass Theorem. H. Local minima of convex function. I. Projection on convex set. J. Existence of solution of convex optimization problem. K. Partial minimization of convex functions. L. Hyperplanes and separation. M. Nonvertical separation. N. Minimal common and maximal crossing points. O. Minimax theory. Q. Polar cones. R. Polyhedral cones. S. Extreme points. T. Directional derivative and subdifferential. U. Feasible direction cone, tangent cone and normal cone. V. Optimality conditions. W. Lagrange multipliers for equality constraints. X. Fritz John optimality conditions. Y. Pseudonormality. Z. Lagrangian duality. [. Conjugate duality. VI. Basic Math II. VII. Implementation tools II. VIII. Bibliography Notation. Index. Contents. ## Preservation of closeness under linear transformation. he set is a closed convex set. The projection on the -axis is a linear transformation. The image of under such transformation is open. Proposition (Preservation of closeness result). Let be a nonempty subset of and let be an matrix. 1. If then the set is closed. 2. Let be a nonempty subset of given by linear constraints If then the set is closed. 3. Let is given by the quadratic constraints where the are positive semidefinite matrices. Then the set is closed. Proof (1). Let , where the is the ball around of radius . The sets are nested if . It is suffice to prove that is not empty for any sequence We have Therefore, by the proposition ( Recession cone of inverse image ), Consequently, in the context of the proposition ( Principal intersection result ) for , Since, generally to accomplish the condition of the ( Principal intersection result ) it is enough to have as required by the theorem. Proof (2). Let . We introduce the sets for and aim to prove that the intersection is not empty. We have By the propositions ( Recession cone of inverse image ) and ( Recession cone of intersection ) Consequently, in the context of the proposition ( Principal intersection result ) for , Since, generally to accomplish the condition of the ( Principal intersection result ) it is enough to have Proof (3). Let . We introduce the sets for and aim to prove that the intersection is not empty. We have We now apply the proposition ( Quadratic intersection result ) to conclude the proof. Notation. Index. Contents.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9801372289657593, "perplexity": 1769.6555069378446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00359.warc.gz"}
https://math.stackexchange.com/questions/597320/simplify-cube-root-of-a-fraction
# simplify cube root of a fraction I asking this question here to just check the work and that I have simplified it correctly. Given the formula $\Large \sqrt[3]{\frac{12m^4n^8}{5p^4}}$ We need to rationalize the denominator by multiplying by $\Large \sqrt[3]{\frac{25p^4}{25p^4}}$ Doing so would leave $\Large \frac{\sqrt[3]{300m^4n^8p^4}}{\sqrt[3]{125p^8}}$ which would further simplify to $\Large \frac{n^2\sqrt[3]{300m^4p^4}}{5p^2}$ At this point the formula would simplified as far as it can be, would this be correct? • Not quite, You can pull out part of $m^3\cdot m$ and $p^3\cdot p$, then do a little cancelling. Additionally, you seem to think that $\sqrt[3]{x^8}=x^2$, which isn't right. – Ian Coley Dec 7 '13 at 20:46 • So it would get changed to $\sqrt[3]{\frac{25p^2}{25p^2}}$ so that $\sqrt[3]{\frac{300m^4n^8p^2}{125p^6}}$ further to $\frac{mn^2\sqrt[3]{300mn^2p^2}}{5p^2}$ – James J Dec 7 '13 at 21:01 No, if you multiply this with $\sqrt[3]{\frac{25p^\color{red}2}{25p^\color{red}2}}$, you get $$\frac{\sqrt[3]{300m^4n^8p^2}}{\sqrt[3]{125p^6}}=\frac{\sqrt[3]{300m^4n^8p^2}}{5p^2}.$$ Given the formula $\sqrt[3]{\frac{12m^4n^8}{5p^4}}$ anohter thing, reasonable to me, you can get is $\frac{mn^2}{p} \sqrt[3]{\frac{12mn^2}{5p}}$. Use $c=\frac{mn^2}{p}$ to compactify this to $c \sqrt[3]{\frac{12}{5}c}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9184762239456177, "perplexity": 386.5243035412939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145742.20/warc/CC-MAIN-20200223001555-20200223031555-00053.warc.gz"}
http://math.stackexchange.com/questions/486429/zeroes-of-s-sum-limits-n-2-infty-frac-1n1ns-ln-n
# Zeroes of $s+\sum\limits_{n=2}^\infty \frac{(-1)^{n+1}}{n^s\ln n}$? Where are the solutions of the equations $$s+\sum\limits_{n=2}^\infty \dfrac{1}{n^s\ln n}=0\quad \text{and}\quad s+\sum\limits_{n=2}^\infty \dfrac{(-1)^{n+1}}{n^s\ln n}=0 ?$$ Since the functions on the LHS are the integral of the zeta function and the alternating zeta function, it might make a difference if one assumes RH or not. I am considering zeroes in the entire complex plane , hence after analytic continuation. Although I am mainly interested in the zeross with $\operatorname{Re} s>-1$. Is it true that all non real zeroes are in the critical strip $0<\operatorname{Re} s<1$ apart from at most a finite amount ? I wondered if integral representations are insightful. It is clear that for $\operatorname{Re} s>\alpha$ there is no solution when the positive real $\alpha$ is sufficiently large. I assume these integrals of these zeta functions cannot be expressed as an infinite product over primes, is that true ? - Please do not use titles consisting only of math expressions; these are discouraged for technical reasons -- see meta. –  Lord_Farin Sep 7 '13 at 11:26 Roots $\approx$ zeroes $\ne$ solutions. –  Did Sep 7 '13 at 11:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556384682655334, "perplexity": 372.8452837514822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122691893.34/warc/CC-MAIN-20150124180451-00151-ip-10-180-212-252.ec2.internal.warc.gz"}
https://theoryapp.com/combinatorial-design-and-pseudorandom-generator/
# Combinatorial Design and Pseudorandom Generator For a set of size $$l$$, choose $$m$$ subsets each of size $$n$$ such that each pair of subsets has intersection size at most $$d$$. This gives a combinatorial design with parameters $$(m,l,n,d)$$. That is, we have $$I_1,\ldots, I_m \subseteq [l]$$ such that $$|I_i| = n$$ for all $$i$$ and $$|I_i\cap I_j| \leq d$$ for all distinct $$i,j$$. Given an $$(m,l,n,d)$$ combinatorial design and a function $$f\colon \{0,1\}^n \to \{0,1\}$$, define a mapping $$G\colon \{0,1\}^l \to \{0,1\}^m$$ such that $$G(x_1,\ldots, x_l) = (y_1,\ldots, y_m)$$ where $$y_i = f(x_{I_i})$$. The mapping $$G$$ stretches $$l$$ bits into $$m$$ bits; it can be viewed as a pseudorandom generator. ### Existence and Construction of Combinatorial Design We first use the probabilistic method to show that, for a good parameter setting, there exists a combinatorial design satisfying the “small intersection property”. We independently generate $$m$$ random subsets. When generating each subset, we independently choose each element in $$[l]$$ with probability $$2n/l$$. For each random subset $$I$$, by Chernoff bounds, its size is at least $$n$$ with high probability. ${\bf E}[|I|] = \frac{2n}{l} \cdot l = 2n,$ ${\bf Pr}[|I| < n ] < (\frac{2}{e})^n.$ For each pair of random subsets $I, J$, by Chernoff bounds, its intersection is at most $d = 8n^2/l$ with high probability. ${\bf E}[|I\cap J|] = (\frac{2n}{l})^2 \cdot l = \frac{4n^2}{l},$ ${\bf Pr}[|I| > d = \frac{8n^2}{l} ] < (\frac{e}{4})^{4n^2/l} =(\frac{e}{4})^{d/2}.$ Let the number of subsets $m = 2^{d/8}$. Then by the union bound (on the $m$ subsets and about $m \choose 2$ pairs), the probability of the following is less than 1: • there exists a subset of size less than $$n$$, or • there exists a pair of subsets of intersection size less than $$n$$. Removing elements from the subsets will not violate the small intersection property. The above shows there exists a combinatorial design (with a good choice of parameters). Instead of using the random strategy, we can use greedy search to generate subsets one by one, such that each subset has small intersections with the previously generated subsets. This greedy search runs in time $$2^{O(l)}$$. ### From Hardness to Randomness With an $$(m,l,n,d)$$ combinatorial design and a function $$f\colon \{0,1\}^n \to \{0,1\}$$, define a mapping $$G\colon \{0,1\}^l \to \{0,1\}^m$$ such that $$G(x_1,\ldots, x_l) = (y_1,\ldots, y_m)$$ where $$y_i = f(x_{I_i})$$. This mapping $$G$$ stretches $$l$$ bits into $$m$$ bits. If we choose $$f$$ to be a “hard-to-compute” function, then $$G$$ is a good pseudorandom generator. For a circuit size $$S$$ and $$\delta \in [0,1]$$, say a function $$f\colon \{0,1\}^n \to \{0,1\}$$ is $$(S,\delta)$$-hard if, for any circuit of size at most $$S$$, ${\bf Pr}[ C(x) = f(x) ] \leq 1-\delta,$ where $$x$$ is chosen uniformly from $$\{0,1\}^n$$. This defines the average-case hardness, which means a hard function is hard to be approximated by any circuit with the size bound. Note that $$f$$ can always be approximated with $$\delta \leq 1/2$$ by a constant circuit either outputting 0 or outputting 1. A mapping $$G\colon \{0,1\}^l \to \{0,1\}^m$$ is an $$(S, \epsilon)$$-pseudorandom generator if, for any circuit of size at most $$S$$, ${\bf Pr}[ C(x) = 1] – {\bf Pr}[ C(G(y)) = 1 ] \leq \epsilon,$ where $$x$$ is uniform in $$\{0,1\}^m$$ and $$y$$ is uniform in $$\{0,1\}^l$$. Note that, here $$S$$ and $$\epsilon$$ are usually parameters of $$m$$. Theorem. (Nisan-Widgerson Generator) Given an $$(m,l,n,d)$$ combinatorial design and a function $$f\colon \{0,1\}^n \to \{0,1\}$$, let $$G\colon \{0,1\}^l \to \{0,1\}^m$$ be such that $$G(x_1,\ldots, x_l) = (y_1,\ldots, y_m)$$ where $$y_i = f(x_{I_i})$$. If $$f$$ is $$(S, \frac{1}{2}-\frac{\epsilon}{m})$$-hard, then $$G$$ is a $$(S’, \epsilon)$$-pseudorandom generator for $$S’ = S – m 2^d$$. The proof of this theorem needs the following lemma. Lemma. (Pseudorandomness = Unpredictability) For $$G\colon \{0,1\}^l \to \{0,1\}^m$$, define a distribution $$Y = G(X)$$ where $$X$$ is uniform in $$\{0,1\}^l$$. If, for any circuit $$C$$ of size at most $$S$$ and any $$i\in [m]$$ ${\bf Pr}[ C (y_1,\ldots, y_{i-1}) = y_i ] \leq \frac{1}{2} + \frac{\epsilon}{m},$ then $$G$$ is a $$(S, \epsilon)$$-pseudorandom generator. Proof. The proof is by contradiction. Suppose that $$G$$ is not a $$(S’, \epsilon)$$-pseudorandom generator. Then by the lemma above, we get a predict circuit. That is, there is a circuit $$C$$ of size $$S’$$ such that for some $$i\in [m]$$, ${\bf Pr}[ C (y_1,\ldots, y_{i-1}) = y_i ] \geq \frac{1}{2} + \frac{\epsilon}{m},$ where each $$y_j = f(x_{I_j})$$. Now we use $$C$$ to construct a circuit that approximates the function $$f$$. The above probability distribution is uniform over $$\{0,1\}^l$$. By the average argument, we can fix a partial assignment $$a$$ for the bits indexed by $$[l]\setminus I_{i}$$ and uniformly choose $$x_{I_i}$$ such that ${\bf Pr}[ C (y_1,\ldots, y_{i-1}) = y_i ] \geq \frac{1}{2} + \frac{\epsilon}{m},$ where each $$y_j = f(a_{I_j\setminus I_i}, x_{I_i\setminus I_j})$$. Here $$y_i = f(x_{I_i})$$ still depends on all $$x_{I_i}$$, but each $$y_j$$ depends on at most $$d$$ bits from $$x_{I_i}$$ by the definition of the combinatorial design. Each restriction of $$f$$ by the partial assignment $$a$$ depends on at most $$d$$ inputs; then it can be computed with a circuit of size $$2^d$$. Then by composing the circuit $$C$$ of size $$S’$$ with at most $$m$$ circuits each of size $$2^d$$, we get a circuit $$D$$ of size $$S = S’ + m \cdot 2^d$$ which approximates $$f$$ well, that is, ${\bf Pr}[ D (x_{I_i} = f(x_{I_i}) ] \geq \frac{1}{2} + \frac{\epsilon}{m}.$ This contradicts the hardness assumption of $$f$$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9733811616897583, "perplexity": 169.31910086090159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00584.warc.gz"}
https://planetmath.org/compactoperator
# compact operator Let $X$ and $Y$ be two Banach spaces. A (completely continuous operator) is a linear operator $T\colon X\to Y$ that maps the unit ball in $X$ to a set in $Y$ with compact closure. It can be shown that a compact operator is necessarily a bounded operator. The set of all compact operators on $X$, commonly denoted by $\mathbb{K}(X)$, is a closed two-sided ideal of the set of all bounded operators on $X$, $\mathbb{B}(X)$. Any bounded operator which is the norm limit of a sequence of finite rank operators is compact. In the case of Hilbert spaces, the converse is also true. That is, any compact operator on a Hilbert space is a norm limit of finite rank operators. ###### Example 1 (Integral operators) Let $k(x,y)$, with $x,y\in[0,1]$, be a continuous function. The operator defined by $(T\psi)(x)=\int_{0}^{1}k(x,y)\psi(y)\,\mathrm{d}y,\qquad\psi\in C([0,1])$ is compact. Title compact operator CompactOperator 2013-03-22 14:26:59 2013-03-22 14:26:59 mhale (572) mhale (572) 8 mhale (572) Definition msc 46B99 completely continuous
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 12, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774280190467834, "perplexity": 206.25375476518008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038468066.58/warc/CC-MAIN-20210418043500-20210418073500-00030.warc.gz"}
https://www.maplesoft.com/support/help/errors/view.aspx?path=networks(deprecated)%2Frandom
networks(deprecated)/random - Maple Help networks random creates a random graph on a given number of vertices Calling Sequence G:=random(n) G:=random(n, m) G:=random(n, 'prob'=x) Parameters G - graph or network n - number of vertices required m - number of edges (optional) x - real number in [0,1] (optional, specify only if m not used) Description • Important: The networks package has been deprecated.Use the superseding command GraphTheory[RandomGraphs] instead. • This procedure generates a variety of different types of random graphs. If only one argument n is specified, this is taken to be the number of vertices, and each edge of the complete graph on n vertices is assumed to be present with a probability of 1/2. • Extra arguments can be used to specify the number of edges m or the specific independent probability x with which a given edges occurs. • If m is specified, a random n-vertex m-edge simple undirected graph is constructed. • If x is specified, each undirected edge is chosen for inclusion "independently" with probability x. • This routine is normally loaded by using the command with(networks), but it may also be referenced by using the full name networks[random](...). Examples Important: The networks package has been deprecated.Use the superseding command GraphTheory[RandomGraphs] instead. > $\mathrm{with}\left(\mathrm{networks}\right):$ > $G≔\mathrm{random}\left(4\right):$ > $\mathrm{ends}\left(G\right)$ $\left\{\left\{{1}{,}{2}\right\}{,}\left\{{1}{,}{3}\right\}{,}\left\{{1}{,}{4}\right\}{,}\left\{{2}{,}{4}\right\}\right\}$ (1) > $G≔\mathrm{random}\left(4,\mathrm{prob}=1\right):$ > $\mathrm{ends}\left(G\right)$ $\left\{\left\{{1}{,}{2}\right\}{,}\left\{{1}{,}{3}\right\}{,}\left\{{1}{,}{4}\right\}{,}\left\{{2}{,}{3}\right\}{,}\left\{{2}{,}{4}\right\}{,}\left\{{3}{,}{4}\right\}\right\}$ (2) > $G≔\mathrm{random}\left(5,8\right):$ > $\mathrm{ends}\left(G\right)$ $\left\{\left\{{1}{,}{2}\right\}{,}\left\{{1}{,}{3}\right\}{,}\left\{{1}{,}{4}\right\}{,}\left\{{1}{,}{5}\right\}{,}\left\{{2}{,}{4}\right\}{,}\left\{{2}{,}{5}\right\}{,}\left\{{3}{,}{4}\right\}{,}\left\{{3}{,}{5}\right\}\right\}$ (3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556840062141418, "perplexity": 1162.6380546907506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00519.warc.gz"}
http://tex.stackexchange.com/questions/104453/in-math-mode-line-non-math-type
# In “math-mode line” non-math type I'd like to type non-math text inside of the bracket math text lines, or know of a better way to do what I'm trying to do. In the following, I don't literally type \nomath. I don't know what I need to type there in order to make this work. This is what I type (the dashes are disappearing, I don't know why): \begin{document} $K=L$ $A = B \nomath or \nomath A_1 = B_1$ \end{document} This is what I get: K=L A=B or A_1=B_1 This is what I want: K=L A=B or A_1=B_1 What I'd like for the or word to be in non-math text mode, with everything around it following the normal rules according to the dashes and brackets. Can anyone help me? - Welcome to TeX.sx! A tip: If you indent lines by 4 spaces, they'll be marked as a code sample. You can also highlight the code and click the "code" button (with "{}" on it). –  Jubobs Mar 26 '13 at 20:43 Thanks for editing the post so it looks nicer, and where I can find more posting tips. –  rod Mar 26 '13 at 22:19 If you use the amsmath package, you should be able to write something like $A = B \text{ or } A_1 = B_1$ You'll have to manually add space around the or. You can do it as I did or you can do it as: $A = B \quad \text{or} \quad A_1 = B_1$ EDIT 1 Here's a complete MWE: \documentclass{article} \usepackage{amsmath} \pagestyle{empty} \begin{document} $A = B \text{ or } A_1 = B_1$ $A = B \quad \text{or} \quad A_1 = B_1$ \end{document} EDIT 2 A comparison of \mbox{...} vs \text{...} $A_{\mbox{Hi}} \text{ vs } A_{\text{Hi}}$ If you want to know more about the differences between these, you should probably post another question. - Thanks for the answer. Although I couldn't get any of your suggestions to work, you gave me the terminology I needed to find what I could get to work, which was using "\mbox{ or }". –  rod Mar 26 '13 at 22:03 Did you use \usepackage{amsmath} in the preamble to your document? –  A.Ellett Mar 26 '13 at 22:30 Maybe? I guess the preamble isn't the part before the \documentclass{article}, but after. When I put \usepackage{amsmath} before \documentclass{article}, \text{or} doesn't work as I desired. After, and it does. On a related note, is there any effective difference between \mbox{or} and \text{or}? –  rod Mar 27 '13 at 4:42 Usually, the \documentclass{...} declaration should come at the beginning of the document. There are exceptions to this. But if you are new to LaTeX, that might be a good rule of thumb to initially follow. Yes, there is a difference between \mbox{or} and \text{or}. In brief, \text{...} is sensitive to its context and chooses the correct font size. See 2nd edit above. –  A.Ellett Mar 27 '13 at 5:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.842109203338623, "perplexity": 1381.491722195629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862207.44/warc/CC-MAIN-20150124161102-00160-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/100517-inequalities-q.html
# Math Help - Inequalities Q 1. ## Inequalities Q hey, can you guys give me a hand on this question: Prove $\sqrt[n]{n!}<\sqrt[n+1]{(n+1)!}$ thanks 2. Originally Posted by vuze88 hey, can you guys give me a hand on this question: Prove $\sqrt[n]{n!}<\sqrt[n+1]{(n+1)!}$ thanks $\sqrt[n]{n!}<\sqrt[n+1]{(n+1)!}\Leftrightarrow\frac{\ln 1+...+\ln n}{n}<\frac{\ln 1+...+\ln n+1}{n+1}$ $\Leftrightarrow \ln 1+...+\ln n.The last inequality is obvious.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392723441123962, "perplexity": 1892.2550293102497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824995.51/warc/CC-MAIN-20160723071024-00280-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/104786/contest-problem-on-domain-and-range-of-square-root-function?answertab=oldest
# Contest problem on domain and range of square root function I have no clue how to do this problem: Let $f(x)=\sqrt{ax^2+bx}$. For how many real values $a$ is there at least one positive real value of $b$ for which the domain of $f$ and the range of $f$ are the same set? The answer is two but what is the complete solution? - Have you tried just computing all the cases? Note that $a=0$ is one of the cases that works ; in that case since $b > 0$ the image and the range are the non-negative real numbers. –  Patrick Da Silva Feb 2 '12 at 1:25 @PatrickDaSilva - just try to figure out the cases cause me headache –  Victor Feb 2 '12 at 1:29 There is a smart way to figure out all the cases but to get rid of some very quickly ; look at Lopsy's answer. –  Patrick Da Silva Feb 2 '12 at 1:32 First off, $ax^2+bx$ is a quadratic, and the domain of $f$ is the region where the quadratic is non-negative. If this set includes any negative numbers, it's game over: $f(x)$ can never be negative, so the range won't match the domain. Therefore, $ax^2 + bx < 0$ when $x < 0$. It immediately follows that $a$ is negative or zero. If $a$ is zero, we win: setting $b=1$ (or your favorite positive number) works. So $a=0$ is one solution to the problem. Otherwise, $a$ is negative. Solving the quadratic, the solutions are $0$ and $-b/a$, so the domain of $f$ is $[0, -b/a]$. This has to include only positive numbers, so b must be positive. Now, the quadratic ranges from $f(0)$ to its maximum $f(-b/2a)$, since $-b/2a$ is right between the zeroes, so the range of the function $f$ is $[0, b\sqrt{-1/4a}]$. Therefore, since the range and domain are equal, $-b/a = b\sqrt{-1/4a} \rightarrow -1/a = \sqrt{-1/4a}$, and the unique solution of this latter equation is $a=-4$. In summary, there are two solutions, $a=0$ and $a=-4$. - I just added some math typesetting. Very good answer ; you thought it through properly. –  Patrick Da Silva Feb 2 '12 at 1:34 lopsy- i think you make some mistake on the sentence that after the first paragraph accoding to this: artofproblemsolving.com/Wiki/index.php/2003_AMC_12A_Problems/… –  Victor Feb 2 '12 at 4:02 What exactly is the mistake? –  Lopsy Feb 2 '12 at 12:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631103038787842, "perplexity": 254.11353398152687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.math.ucdavis.edu/~strohmer/research/gabor/gaborintro/node3.html
## Local time-frequency analysis and short time Fourier transform Time-frequency analysis plays a central role in signal analysis. Already long ago it has been recognized that a global Fourier transform of a long time signal is of little practical value to analyze the frequency spectrum of a signal. High frequency bursts for instance cannot be read off easily from . Transient signals, which are evolving in time in an unpredictable way (like a speech signal or an EEG signal) necessitate the notion of frequency analysis that is local in time. In 1932, Wigner derived a distribution over the phase space in quantum mechanics [Wig32]. It is a well-known fact that the Wigner distribution of an -function is the Weyl symbol of the orthogonal projection operator onto [Fol89]. Some 15 years later, Ville, searching for an instantaneous spectrum'' - influenced by the work of Gabor - introduced the same transform in signal analysis [Vil48]. Unfortunately the non-linearity of the Wigner distribution causes many interference phenomena, which makes it less attractive for many practical purposes [Coh95]. A different approach to obtain a local time-frequency analysis (suggested by various scientists, among them Ville), is to cut the signal first into slices, followed by doing a Fourier analysis on these slices. But the functions obtained by this crude segmentation are not periodic, which will be reflected in large Fourier coefficients at high frequencies, since the Fourier transform will interpret this jump at the boundaries as a discontinuity or an abrupt variation of the signal. To avoid these artifacts, the concept of windowing has been introduced. Instead of localizing by means of a rectangle function, one uses a smooth window-function for the segmentation, which is close to near the origin and decays towards zero at the edges. Popular windows which have been proposed for this purpose are associated with the names Hamming, Hanning, Bartlett, or Kaiser. If the window is in (i.e. infinitely differentiable) one finds that for any -function the localized Fourier coefficients show at least polynomial decay in the frequency direction. The resulting local time-frequency analysis procedure is referred to as (continuous) short time Fourier transform or windowed Fourier transform. It is schematically represented in Figure 3. In mathematical notation, the short time Fourier transform (STFT) of an arbitrary function with respect to a given (often compactly supported) window is defined as The function can be recovered from its STFT via the inversion formula It is possible to derive the inversion formula (the integral is understood in the mean square sense) from the following formula, which itself can be seen as an immediate consequence of Moyal's formula. In particular it implies that for a normalized window satisfying the mapping is an isometric embedding from into The STFT and the spectrogram have become standard tools in signal analysis. However the STFT has also its disadvantages, such as the limit in its time-frequency resolution capability, which is due to the uncertainty principle. Low frequencies can be hardly depicted with short windows, whereas short pulses can only poorly be localized in time with long windows, see also Figure 4 for an illustration of this fact. These limitations in the resolution were one of the reasons for the invention of wavelet theory. Another disadvantage for many practical purposes is the high redundancy of the STFT. This fact suggests to ask, if we can reduce this redundancy by sampling . The natural discretization for is where are fixed, and range over , i.e., to sample over a time-frequence lattice of the form . Large values of give a coarse discretization, whereas small values of lead to a dense sampled STFT. Using the operator notation and for translation and modulation, respectively, we can express the STFT of with respect to a given window as Hence the sampled STFT of a function can also be interpreted as the set of inner products of with the members of the family with discrete labels in the lattice . It is obvious that the members of this family are constructed in the same way as the representation functions in Gabor's series expansion. Thus the sampled STFT is also referred to as Gabor transform. 0 Thus the linear mapping where (4) is also referred to as Gabor transform or Gabor analysis mapping, in analogy to the Gabor synthesis mapping defined in (0.6). Two questions arise immediately with the discretization of the STFT • Do the discrete STFT coefficients completely characterize (i.e., does for all imply that )? • A stronger formulation is: Can we reconstruct in a numerically stable way from the ? Recall, that in connection with the Gabor expansion of a function we have asked • Can any function in be written as superposition of the elementary building blocks? • How can we compute the expansion coefficients in the series ? It turns out that the question of recovering from the samples (at lattice points) of its STFT with respect to the window is actually dual to the problem of finding coefficients for the Gabor expansion of with atom , using the same lattice to generate the time-frequency shifts of . Both problems can be successfully and mathematically rigorously attacked using the concept of frames and surprisingly for both questions the same dual'' Gabor atom has to be used.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9222780466079712, "perplexity": 463.65333611739715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281978.84/warc/CC-MAIN-20160524002121-00067-ip-10-185-217-139.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/258224/how-do-i-find-the-variance-of-t/258230
# How do I find the variance of $T$? I randomly chose either Alice or Bob to go catch a penguin for me with equal probabilities of chosing either person. Let $I=0$ if I chose Alice and $I=1$ if I chose Bob. Alice can catch a penguin in time $T_1 \sim Exponential(\lambda_1)$. Bob can catch a penguin in time $T_2 \sim Exponential(\lambda_2)$. Let $T$ be the time it takes for a penguin to be caught. What is the variance of $T$? I started this problem with: $$Var(T) = E(Var(T|I)) + Var(E(T|I))$$ However, I'm not sure of how to calculate either $E(Var(T|I))$ or $Var(E(T|I))$. - With regard to the question itself, $\text{var}(T\mid I)$ is a random variable that takes on two values depending on whether $I$ is $0$ or $1$. What is its average value? Similarly, $E[T\mid I]$ is a random variable that takes on two values. What is its average? What is its variance? You will get a lot farther if you write down the two values mentioned above (in the two cases) explicitly and then proceed, instead of trying to deal with mystical magical formulas in generality. – Dilip Sarwate Dec 13 '12 at 21:00 HINT: Start by computing $E(T|I)$ and $Var(T|I)$, these are just the expectation and variance of an exponentialy distributed variable thus $$E(T|I)=\frac{1}{\lambda_{I+1}} \text{ and } Var(T|I)=\frac{1}{(\lambda_{I+1})^2} \; .$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9629589319229126, "perplexity": 214.04733173029263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117914.56/warc/CC-MAIN-20160428161517-00194-ip-10-239-7-51.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/453368/what-does-it-mean-for-a-unit-vector-to-have-a-magnitude-of-1
# What does it mean for a unit vector to have a magnitude of 1? Imagine a Cartesian coordinate system whose origin is associated with two unit vectors, ê and â, in a 2D-space. Now, let 0.5 cm be the unit of length in this coordinate system. The magnitude of a unit vector is, by definition, 1; Does this mean that the magnitude of one of our unit vectors is 1 unit of length? Or, in other words, does this mean that $$\left | â \right |$$ = $$\left | ê \right |$$ = 0.5cm? The basis vectors are dimensionless quantities with magnitude 1. You create dimensional vectors to represent positions, velocities, accelerations, forces, etc. by multiplying each basis vector by a dimensional scalar and then adding together. For example, $$\mathbf{r}=(2\,\text{cm})\hat{\mathbf{e}}+(3\,\text{cm})\hat{\mathbf{a}}$$ In other words, the components of a vector carry its dimensions. That way, the same basis vectors can be used to represent all kinds of different vectorial physical quantities. You have defined a unit of length to be equal to $$0.5\,\rm cm$$ which we can call $$1\,\rm ash$$. Suppose you move $$5\,\rm ash$$ in the $$\hat e$$ direction. The displacement is $$5\,\rm ash\, \hat e$$ which is $$2.5\,\rm cm\, \hat e$$ This means that $$|\hat e| =1$$ irrespective of what you are describing and whatever the units that you are using.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492260813713074, "perplexity": 197.52095882008996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700435.69/warc/CC-MAIN-20191019214624-20191020002124-00443.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-p-prerequisites-fundamental-concepts-of-algebra-exercise-set-p-4-page-61/88
## College Algebra (6th Edition) $81x^4-256$ $$(3x+4)(3x-4)(9x^2+16)$$ To multiply the sum of two terms by the difference of two terms, use the formula: $(a+b)(a-b)=a^2-b^2$: $$=[(3x)^2-(4)^2](9x^2+16)$$ $$=(9x^2-16)(9x^2+16)$$ Another product of the sum and difference of two terms! $$=(9x^2)^2-(16)^2$$ $$=81x^4-256$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8603917360305786, "perplexity": 399.21581091037876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612502.45/warc/CC-MAIN-20170529165246-20170529185246-00137.warc.gz"}
http://hal.inria.fr/inserm-00724974
inserm-00724974, version 1 ## Robust Cerebral Blood Flow Map Estimation in Arterial Spin Labeling Camille Maumet () 1, Pierre Maurel () 1, Jean-Christophe Ferré 12, Christian Barillot () 1 International Workshop on Multimodal Brain Image Analysis (MBIA), held in conjunction with MICCAI 2012 (2012) 215-224 Résumé : Non-invasive measurement of Cerebral Blood Flow (CBF) is now feasible thanks to the introduction of Arterial Spin Labeling (ASL) Magnetic Resonance Imaging (MRI) techniques. To date, the low signal-to-noise ratio of ASL gives us no option but to repeat the acquisition in order to accumulate enough data to get a reliable signal. Perfusion signal is usually extracted by averaging across the repetitions. However, due to its zero breakdown point, the sample mean is very sensitive to outliers. A single outlier can thus have strong detrimental effects on the sample mean estimate. In this paper, we propose to estimate robust ASL CBF maps by means of M-estimators to overcome the deleterious effects of outliers. The behavior of this method is compared to z-score thresholding as recommended in [8]. validation on simulated and real data is provided. Quantitative validation is undertaken by measuring the correlation with the most widespread technique to measure perfusion with MRI: Dynamic Susceptibility weighted Contrast (DSC). • Domaine : Sciences du Vivant/Neurosciences • inserm-00724974, version 1 • oai:www.hal.inserm.fr:inserm-00724974 • Contributeur : • Soumis le : Mardi 16 Octobre 2012, 14:22:05 • Dernière modification le : Mercredi 27 Novembre 2013, 09:44:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8226063847541809, "perplexity": 2283.829266127051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273513.48/warc/CC-MAIN-20140728011753-00276-ip-10-146-231-18.ec2.internal.warc.gz"}
https://en.wikiversity.org/wiki/Topic:Numerical_Analysis/stability_of_RK_methods
# Numerical Analysis/stability of RK methods ## Definitions Let ${\displaystyle \tau _{i}}$ be the local truncation error, and k is the number of time steps, then: 1. The numerical method is consistent with a differential equation if ${\displaystyle \lim _{h\to 0}\operatorname {max} |\tau _{i}|=0}$ over ${\displaystyle 1\leqslant i\leqslant k}$. according to this definition w:Euler's method is consistent. 2. A numerical method is said be convergent with respect to differential equation if ${\displaystyle \lim _{h\to 0}|x(t_{i})-y_{i}|=0}$ over ${\displaystyle 1\leqslant i\leqslant k}$; where ${\displaystyle y_{i}}$ is the approximation for ${\displaystyle x(t_{i})}$. 3. A numerical method is stable if small change in the initial conditions or data, produce a correspondingly small change in the subsequent approximations. Theorem: For an initial value problem ${\displaystyle x'=f(t,x)}$ with ${\displaystyle t\in [t_{0},t_{0}+\alpha ]}$ and certain initial conditions, ${\displaystyle (t_{0},x_{0})}$, let us consider a numerical method of the form ${\displaystyle (y_{0}=x_{0})}$ and ${\displaystyle y_{i+1}=y_{i}+h\phi (t_{i},y_{i},h)}$. If there exists a value ${\displaystyle h>0}$ such that it is continuous on the iterative domain, ${\displaystyle \Omega }$ and if there exists an ${\displaystyle L>0}$ such that ${\displaystyle |\phi (t,y,h)-\phi (t,y^{*},h)|\leqslant L|y-y^{*}|}$ for all ${\displaystyle (t,y,h),\,(t,y^{*},h)\in \Omega }$, the method fulfills the w:Lipschitz condition, and it is stable and convergent if and only if it is consistent. That is, ${\displaystyle \phi (t,x,0)=f(t,x)}$ for all ${\displaystyle t\in \Omega }$. For a similar argument, one can deduce the following for multi- step methods: 1. The method is stable if and only if all roots, ${\displaystyle \lambda }$, of the characteristic polynomial satisfy ${\displaystyle |\lambda |\leqslant 1}$, and any root of ${\displaystyle |\lambda |=1}$ is simple root. 2. one more result is that if the method is consistent with the differential equation, the method is stable if and only if it it is convergent. see [1] ### stability polynomials of Runge-Kutta methods The w:Runge–Kutta methods are very useful in solving systems of differential equations, it has wide applications for the scientists and the engineers, as well as for the economical models, the recognized with their practical accuracy where we can use and get very good results and approximations when solving an ODE problem, RK has the general form :${\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i}}$, where ${\displaystyle k_{1}=hf(t_{n},y_{n}),\,}$ ${\displaystyle k_{2}=hf(t_{n}+c_{2}h,y_{n}+a_{21}k_{1}),\,}$ ${\displaystyle k_{3}=hf(t_{n}+c_{3}h,y_{n}+a_{31}k_{1}+a_{32}k_{2}),\,}$ ${\displaystyle \vdots }$ ${\displaystyle k_{s}=hf(t_{n}+c_{s}h,y_{n}+a_{s1}k_{1}+a_{s2}k_{2}+\cdots +a_{s,s-1}k_{s-1}).}$ such that s is the number of stages, aij (for 1 ≤ j < is), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). #### Example:finding the stability polynomial for RK4's methods for RK4" case${\displaystyle 2_{a}}$", which characterizes by ${\displaystyle c_{2}={\frac {-1}{4}}}$ which has the form: the stability region is found by applying the method to the linear test equation ${\displaystyle y'=\lambda y}$ ${\displaystyle y_{n+1}=y_{n}+{\frac {1}{6}}k_{1}+0k_{2}+{\frac {2k_{3}}{3}}+{\frac {k_{4}}{6}}}$ the stability region is found by applying the method to the linear test equation ${\displaystyle y'=\lambda y}$ ${\displaystyle \displaystyle \ k_{1}=hf(t_{n},y_{n})}$ ${\displaystyle k_{2}=hf(t_{n}+{\frac {h}{2}},y_{n}-{\frac {k_{1}}{2}})}$ ${\displaystyle k_{3}=hf(t_{n}+{\frac {h}{2}},y_{n}+{\frac {3k_{1}}{4}}-{\frac {k_{2}}{4}})}$ ${\displaystyle k_{4}=hf(t_{n}+h,y_{n}-2k_{1}+k_{2}+2k_{3})}$ using the linearized equation ${\displaystyle f(t,y)=\lambda y}$, and considering ${\displaystyle {\hat {h}}=h\lambda }$ , we get; ${\displaystyle k_{1}={\hat {h}}y_{n}}$ ${\displaystyle k_{2}={\hat {h}}(1-{\frac {\hat {h}}{2}})y_{n}}$ ${\displaystyle k_{3}={\hat {h}}(1+{\frac {3{\hat {h}}}{4}}-{\frac {\hat {h}}{4}}(1-{\frac {\hat {h}}{2}}))y_{n}}$ ${\displaystyle k_{4}={\hat {h}}(1-2{\hat {h}}+{\hat {h}}(1-{\frac {\hat {h}}{2}})+2{\hat {h}}(1+{\frac {3{\hat {h}}}{4}}-{\frac {\hat {h}}{4}}(1-{\frac {\hat {h}}{2}})))))y_{n}}$ substitute these back in ${\displaystyle y_{n+1}}$, yields ${\displaystyle y_{n+1}=[1+({\hat {h}})+{\frac {({\hat {h}})^{2}}{2}}+{\frac {({\hat {h}})^{3}}{6}}+{\frac {({\hat {h}})^{4}}{24}}]y_{n}=R({\hat {h}})y_{n}}$ and so the characteristic polynomial ${\displaystyle P(z)=z-R(z);z={\hat {h}}}$ for the absolute stability region for this method, set |R(z)|<1, and so we get the region in figure 1. case${\displaystyle 2_{a}}$ the table below shows the final forms for the stability function for different forms of RK4, these RK4's are different in the values of ${\displaystyle b_{j}}$, and they are fullfilling the consistencty requirement for the method i.e :${\displaystyle \sum _{j=1}^{i-1}a_{ij}=c_{i}\ \mathrm {for} \ i=2,\ldots ,s.}$ case# ${\displaystyle (b_{1},b_{2},b_{3},b_{4})}$ stability function caseI ${\displaystyle ({\frac {1}{8}},{\frac {3}{8}},{\frac {3}{8}},{\frac {1}{8}})}$ ${\displaystyle 1+z+{\frac {z^{2}}{2}}+{\frac {z^{3}}{6}}+{\frac {z^{4}}{24}}}$ caseII_a ${\displaystyle ({\frac {1}{6}},0,{\frac {2}{3}},{\frac {1}{6}})}$ ${\displaystyle 1+z+{\frac {z^{2}}{2}}+{\frac {z^{3}}{6}}+{\frac {z^{4}}{24}}}$ caseII_b ${\displaystyle ({\frac {1}{6}},0,{\frac {2}{3}},{\frac {1}{6}})}$ ${\displaystyle 1+z+{\frac {5z^{2}}{6}}-{\frac {17z^{3}}{12}}-{\frac {2z^{4}}{24}}+{\frac {z^{5}}{12}}}$ caseIII ${\displaystyle ({\frac {1}{12}},{\frac {2}{3}},{\frac {1}{12}},{\frac {1}{6}})}$ ${\displaystyle 1+{\frac {11z}{12}}+{\frac {5z^{2}}{12}}+{\frac {z^{3}}{6}}+{\frac {z^{4}}{24}}}$ caseIV ${\displaystyle ({\frac {1}{6}},0,{\frac {2}{3}},{\frac {1}{6}})}$ ${\displaystyle 1+z+{\frac {z^{2}}{2}}+{\frac {z^{3}}{6}}+{\frac {z^{4}}{24}}}$ classical Rk ${\displaystyle ({\frac {1}{6}},{\frac {1}{3}},{\frac {1}{3}},{\frac {1}{6}})}$ ${\displaystyle 1+z+{\frac {z^{2}}{2}}+{\frac {z^{3}}{6}}+{\frac {z^{4}}{24}}}$ see [2] to find the table ## Plotting the stability region In order to plot the stability region, we can set the stability function to be bounded by 1 and solve for the values of z, then draw z in the complex plane. Since R(z) is the unit circle in the complex plane, each point on the boundary can be represented as ${\displaystyle e^{i\theta }}$ and so by changing ${\displaystyle \theta }$ over the interval${\displaystyle [0,2\pi ]}$, we can draw the boundaries of that region. The following w:OCTAVE/Matlab code does this by plotting contour curves until reaching the boudaries: [x,y] = meshgrid(-6:0.01:6,-6:0.01:6); z = x+i*y; R = 1+z+0.5*z.^2+(1/6)*z.^3+(1/24)*z.^4; zlevel = abs(R); contour(x,y,zlevel,[1 1]); The figure at right shows the absolute stability regions for RK4 cases which is tabulated above[3] ### References • Eberly, David (2008), stability analysis for systems of defferential equation.[4] • Ababneh, Osama; Ahmad, Rokiah; Ismail, Eddie (2009), "on cases of fourth-order Runge-Kutta methods", European journal of scientific Research.[5] • Mathews, John; Fink, Kurtis (1992), numerical methods using matlab. • Stability of Runge-Kutta Methods [6]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 65, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062554240226746, "perplexity": 695.5127015967313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00492.warc.gz"}
https://www.physicsforums.com/threads/electric-potential-of-spheres.128871/
Electric Potential of Spheres 1. Aug 14, 2006 Saketh I have two problems that confuse me for similar reasons. The first one: Find the potential $\varphi$ of an uncharged conducting sphere outside of which a point charge $q$ is located at a distance $l$ from the sphere's center. The second one: A system consists of two concentric conducting spheres, with the inside sphere of radius $a$ carrying a positive charge $q_1$. What charge $q_2$ has to be deposited on the outside sphere of radius $b$ in order to reduce the potential of the inside sphere to zero?​ There's more to the second problem, but this first part confused me enough. For the first one, I originally went about it by defining the inner sphere's radius as $R$, and then using the law of cosines to find the distance between the point charge and the surface of the sphere as a function of $$\theta[/itex]. This, however, ended in failure, with undefined results. Then I thought, "maybe the potential of the sphere is located at the center of the sphere?" So I wrote down [tex]\varphi_0 = \varphi_q + \varphi_s$$. Since $$\varphi_q$$ is $$\frac{q}{4\pi \epsilon_0 l}$$ if the potential is at the center of the sphere, and $$\varphi_s$$ is $$\frac{0}{4\pi \epsilon_0 R}$$, the potential of the sphere must be $$\frac{q}{4\pi \epsilon_0 l}$$. This is the right answer, but I am still confused - I thought the potential of a sphere should be treated as if it were on the sphere's surface, not as if it were at the center? For the second one, I still wasn't sure if potential should be treated as surface or center, so I calculated blindly. If the potentials are at the center, as gave me the correct answer for the first problem, then in order for the potential of the center sphere to become zero the potential of the outside sphere must cancel it out. $$\varphi_a = -\varphi_b$$ After integrating those expressions from their differential parts, I concluded that $$\frac{-q_1 a}{b} = q_2$$, which is the wrong answer. I then tried it with Gauss's Law, but I still got the wrong answer. I'm probably messing up because I don't understand electric potential as it applies to spheres. In summary, I have two main questions: 1. Did I do the first problem correctly, and, if so, why is it correct? 2. How am I supposed to set up the second problem? 2. Aug 16, 2006 Saketh I've figured out the first question, so I don't need help with that one, but the second one (with the two spheres) still confuses me. Similar Discussions: Electric Potential of Spheres
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601571559906006, "perplexity": 163.00008118840194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811243.29/warc/CC-MAIN-20180218003946-20180218023946-00165.warc.gz"}
https://www.groundai.com/project/robust-a-posteriori-error-estimation-for-finite-element-approximation-to-hcurl-problem/
A Appendix # Robust A Posteriori Error Estimation for Finite Element Approximation to H(curl) Problem 1 ## Abstract In this paper, we introduce a novel a posteriori error estimator for the conforming finite element approximation to the problem with inhomogeneous media and with the right-hand side only in . The estimator is of the recovery type. Independent with the current approximation to the primary variable (the electric field), an auxiliary variable (the magnetizing field) is recovered in parallel by solving a similar problem. An alternate way of recovery is presented as well by localizing the error flux. The estimator is then defined as the sum of the modified element residual and the residual of the constitutive equation defining the auxiliary variable. It is proved that the estimator is approximately equal to the true error in the energy norm without the quasi-monotonicity assumption. Finally, we present numerical results for several interface problems. ## 1 Introduction Let be a bounded and simply-connected polyhedral domain in with boundary and , and let be the outward unit vector normal to the boundary. Denote by the electric field, we consider the following model problem originated from a second order hyperbolic equation by eliminating the magnetic field in Maxwell’s equations: ⎧⎪ ⎪⎨⎪ ⎪⎩∇×(μ−1∇×u)+βu=f, in Ω,u×n=gD, on ΓD,(μ−1∇×u)×n=gN, % on ΓN, (1.1) where is the curl operator; the , , and are given vector fields which are assumed to be well-defined on , , and , respectively; the is the magnetic permeability; and the depends on the electrical conductivity, the dielectric constant, and the time step size. Assume that the coefficients and are bounded below 0<μ−10≤μ−1(x)and0<β0≤β(x) for almost all . The a posteriori error estimation for the conforming finite element approximation to the problem in (1.1) has been studied recently by several researchers. Several types of a posteriori error estimators have been introduced and analyzed. These include residual-based estimators and the corresponding convergence analysis (explicit [3, 10, 11, 12, 27, 29, 34], and implicit [18]), equilibrated estimators [4], and recovery-based estimators [6, 28]. There are four types of errors in the explicit residual-based estimator (see [3]). Two of them are standard, i.e., the element residual, and the interelement face jump induced by the discrepancy induced by integration by parts associated with the original equation in (1.1). The other two are also the element residual and the interelement face jump, but associated with the divergence of the original equation: , where is the divergence operator. These two quantities measure how good the approximation is in the kernel space of the curl operator. Recently, the idea of the robust recovery estimator explored in [7, 8] for the diffusion interface problem has been extended to the interface problem in [6]. Instead of recovering two quantities in the continuous polynomial spaces like the extension of the popular Zienkiewicz-Zhu (ZZ) error estimator in [28], two quantities related to and are recovered in the respective - and -conforming finite element spaces. The resulting estimator consists of four terms similar to the residual estimator in the pioneering work [3] on this topic by Beck, Hiptmair, Hoppe, and Wohlmuth: two of them measure the face jumps of the tangential components and the normal component of the numerical approximations to and , respectively, and the other two are element residuals of the recovery type. All existing a posteriori error estimators for the problem assume that the right-hand side is in or divergence free. This assumption does not hold in many applications (e.g. the implicit marching scheme mentioned in [19]). Moreover, two terms of the estimators are associated with the divergence of the original equation. In the proof, these two terms come to existence up after performing the integration by parts for the irrotational gradient part of the error, which lies in the kernel of the curl operator. One of the key technical tools, a Helmholtz decomposition, used in this proving mechanism, relies on being in , and fails if . In [12], the assumption that is weakened to being in the piecewise space with respect to the triangulation, at the same time, the divergence residual and norm jump are modified to incorporate this relaxation. Another drawback of using Helmholtz decomposition on the error is that it introduces the assumption of the coefficients’ quasi-monotonicity into the proof pipeline. An interpolant with a coefficient independent stability bound is impossible to construct in a “checkerboard” scenario (see [32] for diffusion case, and [6] for case). To gain certain robustness for the error estimator in the proof, one has to assume the coefficients distribution is quasi-monotone. However, in an earlier work of Chen, Xu, and Zou ([11]), it is shown that numerically this quasi-monotonicy assumption is more of an artifact introduced by the proof pipeline, at least for the irrotational vector fields. As a result, we conjecture that the divergence related terms should not be part of an estimator if it is appropriately constructed. In Section 5, some numerical justifications are presented to show the unnecessity of including the divergence related terms. The pioneering work in using the dual problems for a posteriori error estimation dates back to [30]. In [30], Oden, Demkowicz, Rachowicz, and Westermann studied the a posteriori error estimation through duality for the diffusion-reaction problem. The finite element approximation to a dual problem is used to estimate the error for the original primal problem (diffusion-reaction). The result shares the same form to the Prague-Synge identity ([33]) for diffusion-reaction problem. The method presented in this paper may be viewed as an extension of the duality method in [30] to the interface problem. The auxiliary magnetizing field introduced in Section 3 is the dual variable resembling the flux variable in [30]. The connection is illustrated in details in Section 4.1. Later, Repin ([31]) proposes a functional type a posteriori error estimator of problem, which can be viewed as an extension of the general approach in [30]. Repin et al ([26]) improve the estimate by assuming that the data is divergence free and the finite element approximation is in . In [31], the upper bound is established through integration by parts by introducing an auxiliary variable in an integral identity for . An auxiliary variable is recovered by globally solving an finite element approximation problem and is used in the error estimator. For the global lower bound, the error equation is solved globally in an conforming finite element space. Then the solution is inserted into the functional as the error estimator of which the maximizer corresponds to the solution to the error equation. The purpose of this paper is to develop a novel a posteriori error estimator for the conforming finite element approximation to the problem in (2.1) that overcomes the above drawbacks of the existing estimators, e.g. the Helmholtz decomposition proof mechanism, restricted by the assumption that or divergence free, which brings in the divergence related terms. Specifically, the estimator studied in this paper is of the recovery type, requires the right-hand side merely having a regularity of , and has only two terms that measure the element residual and the tangential face jump of the original equation. Based on the current approximation to the primary variable (the electric field), an auxiliary variable (the magnetizing field) is recovered by approximating a similar auxiliary problem. To this end, a multigrid smoother is used to approximate this auxiliary problem, which is independent of the primary equation and is performed in parallel with the primary problem. The cost is the same order of complexity with computing the residual-based estimator, which is much less than solving the original problem. An alternate route is illustrated as well in Section 3.2 by approximating a localized auxiliary problem. While embracing the locality, the parallel nature using the multigrid smoother is gone. The recovery through approximating localized problem requires the user to provide element residual and tangential face jump of the numerical magnetizing field based on the finite element solution of the primary equation. The estimator is then defined as the sum of the modified element residual and the residual of the auxiliary constitutive equation. It is proved that the estimator is equal to the true error in the energy norm globally. Moreover, in contrast to the mechanism of the proof using Helmholtz decomposition mentioned previously, the decomposition is avoided by using the joint energy norm. As a result, the new estimator’s reliability does not rely on the coefficients distribution (Theorem 4.2). Meanwhile, in this paper, the method and analysis extend the functional-type error estimator in [31] to a more pragmatic context by including the mixed boundary conditions, and furthermore, the auxiliary variable is approximated by a fast multigrid smoother, or by solving a localized problem on vertex patches, to avoid solving a global finite element approximation problem. Lastly, in order to compare the new estimator introduced in this paper with existing estimators, we present numerical results for intersecting interface problems. When , the mesh generated by our indicator is much more efficient than those by existing indicators (Section 5). ## 2 Primal Problem and The Finite Element Approximation Denote by the space of the square integrable vector fields in equipped with the standard norm: , where denotes the standard inner product over an open subset , when , the subscript is dropped for and . Let H(curl;Ω):={v∈L2(Ω):∇×v∈L2(Ω)}, which is a Hilbert space equipped with the norm ∥v∥H(curl)=(∥v∥2+∥∇×v∥2)1/2. Denote its subspaces by HB(curl;Ω):={v∈H(curl;Ω):v×n=gB on ΓB}and\vboxto−0.43pt∘\vssHB(curl;Ω):={v∈HB(curl;Ω):gB=0} for or . For any , multiplying the first equation in (1.1) by a suitable test function with vanishing tangential part on , integrating over the domain , and using integration by parts formula for -regular vector fields (e.g. see [2]), we have (f,v) = (∇×(μ−1∇×u),v)+(βu,v) = (μ−1∇×u,∇×v)+(βu,v)−∫ΓNgN⋅vdS. Then the weak form associated to problem (1.1) is to find such that Aμ,β(u,v)=fN(v),∀v∈\vboxto−0.43pt∘\vssHD(curl;Ω), (2.1) where the bilinear and linear forms are given by Aμ,β(u,v)=(μ−1∇×u,∇×v)+(βu,v)andfN(v)=(f,v)+⟨gN,v⟩ΓN, respectively. Here, denotes the duality pair over . Denote by Missing dimension or its units for \mskip the “energy” norm induced by the bilinear form . ###### Theorem 2.1. Assume that , , and . Then the weak formulation of (1.1) has a unique solution satisfying the following a priori estimate Missing dimension or its units for \mskip (2.2) ###### Proof. For the notations and proof, see the Appendix A. ∎ ### 2.1 Finite Element Approximation For simplicity of the presentation, only the tetrahedral elements are considered. Let be a finite element partition of the domain . Denote by the diameter of the element . Assume that the triangulation is regular and quasi-uniform. Let where is the space of polynomials of degree less than or equal to . Let and be the spaces of homogeneous polynomials of scalar functions and vector fields. Denote by the first or second kind Nédélec elements (e.g. see [24, 25]) Missing dimension or its units for \hskip for , respectively, where the local Nédélec elements are given by NDk,1(K)={p+s:p∈Pk(K),s∈˜Pk+1(K) such that s⋅x=0} and NDk,2(K)={p+∇s:p∈NDk,1(K),s∈˜Pk+2(K)}. For simplicity of the presentation, we assume that both boundary data and are piecewise polynomials, and the polynomial extension (see [14]) of the Dirichlet boundary data as the tangential trace is in . Now, the conforming finite element approximation to (1.1) is to find such that Aμ,β(uT,v)=fN(v),∀v∈NDk∩\vboxto−0.43pt∘\vssHD(curl;Ω). (2.3) Assume that and are the solutions of the problems in (1.1) and (2.3), respectively, and that , (When the regularity assumption is not met, one can construct a curl-preserving mollification, see [16]), by the interpolation result from [24] Chapter 5 and Céa’s lemma, one has the following a priori error estimation: |||u−uT|||μ,β≤Chk+1(∥u∥Hk+1(Ω)+∥∇×u∥Hk+1(Ω)), (2.4) where is a positive constant independent of the mesh size . ## 3 Auxiliary Problem of Magnetizing Field ### 3.1 Recovery of the magnetizing field Introducing the magnetizing field σ=μ−1∇×u, (3.1) then the first equation in (1.1) becomes ∇×σ+βu=f, in Ω. (3.2) The boundary condition on may be rewritten as follows σ×n=gN, on ΓN. For any , multiplying equation (3.2) by , integrating over the domain , and using integration by parts and (3.1), we have (β−1f,∇×τ) = (β−1∇×σ,∇×τ)+(u,∇×τ) = (β−1∇×σ,∇×τ)+(∇×u,τ) +∫ΓD(u×n)⋅τds−∫ΓNu⋅(τ×n)ds = (β−1∇×σ,∇×τ)+(μσ,τ)+∫ΓDgD⋅τds. Hence, the variational formulation for the magnetizing field is to find such that Aβ,μ(σ,τ)=fD(τ),∀τ∈\vboxto−0.43pt∘\vssHN(curl;Ω), (3.3) where the bilinear and linear forms are given by Aβ,μ(σ,τ)=(β−1∇×σ,∇×τ)+(μσ,τ)andfD(τ)=(β−1f,∇×τ)−⟨gD,τ⟩ΓD, respectively. The natural boundary condition for the primary problem becomes the essential boundary condition for the auxiliary problem, while the essential boundary condition for the primary problem is now incorporated into the right-hand side and becomes the natural boundary condition. Denote the “energy” norm induced by by |||τ|||β,μ=√Aβ,μ(τ,τ). ###### Theorem 3.1. Assume that , , and . Then problem (3.3) has a unique solution satisfying the following a priori estimate |||σ|||β,μ≤∥β−1/2f∥+∥∥gD∥∥1/2,μ,β,ΓD+∥∥gN∥∥−1/2,β,μ,ΓN. (3.4) ###### Proof. The theorem may be proved in a similar fashion as Theorem 2.1. ∎ Similarly to that for the essential boundary condition, it is assumed that the polynomial extension of the Neumman boundary data as the tangential trace is in as well. Now, the conforming finite element approximation to (3.3) is to find such that Aβ,μ(σT,τ)=fD(τ),∀τ∈NDk∩\vboxto−0.43pt∘\vssHN(curl;Ω). (3.5) Assume that and are the solutions of the problems in (3.1) and (3.5), respectively, and that , , one has the following a priori error estimation similar to (2.4) Missing or unrecognized delimiter for \Big (3.6) The a priori estimate shows that heuristically, for the auxiliary magnetizing field , using the same order -conforming finite element approximation spaces with the primary variable may be served as the building blocks for the a posteriori error estimation. ### 3.2 Localization of the recovering procedure The localization of the recovery of for this new recovery shares similar methodology with the one used in the equilibrated flux recovery (see [4, 5]). However, due to the presence of the -term, exact equilibration is impossible due to several discrepancies: if and are in Nédélec spaces of the same order; If is used for and for , the inter-element continuity conditions come into the context in that , which has different inter-element continuity requirement than . Due to these two concerns, the local problem is approximated using a constraint -minimization. Let be the correction from to the true magnetizing field: . Now can be decomposed using a partition of unity: let be the linear Lagrange nodal basis function associated with a vertex , which is the collection of all the vertices, σΔ=∑z∈NhσΔz, with σΔz:=λzσΔ. (3.7) Denote . Let the vertex patch , where is the collection of vertices of element . Then the following local problem is what the localized magnetizing field correction satisfies: {μσΔz−∇×ez=−∇λz×e, in K⊂ωz,∇×σΔz+βez=λzrK+∇λz×(μ−1∇×e), in K⊂ωz, (3.8) with the following jump condition on each interior face , and boundary face : {[[σΔz×nF]]\raisebox−2.0pt$F$=−λzjt,F, on F∈Fz,σΔz×nF=0, on F⊂∂ωz. (3.9) The element residual is , and the tangential jump is . To find the correction, following piecewise polynomial spaces are defined: NDk−1(ωz)={τ∈L2(ωz):τ∣∣\raisebox−0.5pt$K$∈NDk(K),∀K⊂ωz}, (3.10) Wk(Fz)={τ∈L2(Fz):τ∣∣\raisebox−0.5pt$F$∈RTk(F),∀F∈Fz; τ∣∣\raisebox−0.5pt$Fi$⋅(tij×ni)=τ∣∣\raisebox−0.5pt$Fj$⋅(tij×nj),∀Fi,Fj∈Fz,∂Fi∩∂Fj=eij}, Hz={τ∈NDk−1(ωz):[[τ×nF]]\raisebox−2.0pt$F$=−¯jF,z∀F∈Fz}, and H0,z={τ∈Hz:τ×nF∣∣\raisebox−0.5pt$F$=0,∀F⊂ωz}. Here is the planar Raviart-Thomas space on a given face , of which the degrees of freedom can be defined using conormal of an edge with respect to the face normal . For example, is the unit tangential vector of edge joining face and , then the conormal vector of with respect to face is . can be viewed as the trace space of the broken Nédélec space . For detail please refer to Section 4 and 5 in [13]. To approximate the local correction for magnetizing field, and are projected onto proper piecewise polynomial spaces. To this end, let ¯¯¯rK,z:=∏K(λzrK),%and¯jF,z:=∏F(λzjt,F), (3.11) where is the projection onto the space , and is the projection onto the space . Dropping the uncomputable terms in (3.8), and using (3.9) as a constraint, the following local -minimization problem is to be approximated: Missing \left or extra \right (3.12) The hybridized problem associated with above minimization is obtained by taking variation with respect to of the functional by the tangential face jump as a Lagrange multiplier: J∗z(σΔz,\raisebox−0.5pt$T$,ξ) :=12∥∥μ1/2σΔz,\raisebox−0.5pt$T$−μ−1/2∇×ez∥∥2ωz (3.13) +12∥∥β−1/2(∇×σΔz,\raisebox−0.5pt$T$+βez−¯¯¯rK,z)∥∥2ωz +∑F∈Fz([[σΔz,\raisebox−0.5pt$T$×nF]]\raisebox−2.0pt$F$+¯jF,z,ξ)F. For any , using the fact that , and on 0= (μ1/2σΔz,\raisebox−0.5pt$T$−μ−1/2∇×ez,μ1/2τ)ωz (3.14) +(β−1/2∇×σΔz,\raisebox−0.5pt$T$+β1/2ez−β−1/2¯¯¯rK,z,β−1/2∇×τ)ωz +∑F∈Fz([[τ×nF]]\raisebox−2.0pt$F$,ξ)F = Missing or unrecognized delimiter for \bigr +∑F∈Fz([[τ×nF]]\raisebox−2.0pt$F$,ξ−ez)F−(β−1¯¯¯rK,z,∇×τ)ωz. As a result, the local approximation problem is: Missing or unrecognized delimiter for \big (3.15) wherein the local bilinear forms are defined as follows: Aβ,μ;z(σ,τ) :=(β−1∇×σ,∇×τ)ωz+(μσ,τ)ωz, (3.16) and Bz(τ,γ) :=∑F∈Fz([[τ×nF]]\raisebox−2.0pt$F$,γ)F. ###### Proposition 3.2. Problem (3.15) has a unique solution. ###### Proof. For a finite dimensional problem, uniqueness implies existence. It suffices to show that letting both the right hand sides be zeros results trivial solution. First by for any (direct implication of Proposition 4.3 and Theorem 4.4 in [13]), setting in the second equation of (3.15) immediately implies that . As a result, . Now let in the first equation of (3.15), since induces a norm in , . For , it suffices to show that on each if Extra open brace or missing close brace Using Theorem 4.4 in [13], if is non-trivial and satisfies above equation, there always exists a such that . As a result, , which is a contradiction. Thus, the local problem (3.15) is uniquely solvable. ∎ With the local correction to the magnetizing field, for all , computed above, let ˜σΔK,T=∑z∈N(K)σΔz,\raisebox−0.5pt$T$,and˜σΔT=∑z∈NσΔz,\raisebox−0.5pt$T$, (3.17) then the recovered magnetizing field is ˜σT=˜σΔT+μ−1∇×uT. (3.18) ## 4 A Posteriori Error Estimator In this section, we study the following a posteriori error estimator: η=(∑K∈Tη2K)1/2 where the local indicator is defined by ηK=(∥∥μ−1/2(μσT−∇×uT)∥∥2K+∥∥β−1/2(∇×σT+βuT−f)∥∥2K)1/2. (4.1) It is easy to see that η=(∥∥μ−1/2(μσT−∇×uT)∥∥2+∥∥β−1/2(∇×σT+βuT−f)∥∥2)1/2. (4.2) The and are the finite element approximations in problems (2.3) and (3.5) respectively. With the locally recovered , the local error indicator and the global error estimator are defined in the same way as (4.1) and (4.2): ˜ηK=(∥∥μ−1/2(μ˜σT−∇×uT)∥∥2K+∥∥β−1/2(∇טσT+βuT−f)∥∥2K)1/2, (4.3) and ˜η=(∥∥μ−1/2(μ˜σT−∇×uT)∥∥2+∥∥β−1/2(∇טσT+βuT−f)∥∥2)1/2. (4.4) ###### Remark 4.1. In practice, does not have to be the finite element solution of a global problem. In the numerical computation, the Hiptmair-Xu multigrid preconditioner in [20] is used for discrete problem (3.5) with two multigrid V-cycles for each component of the vector Laplacian, and two multigrid V-cycles for the kernel part of the curl operator. The used to evaluate the estimator is the PCG iterate. The computational cost is the same order with computing the explicit residual based estimator in [3]. Generally speaking, to approximate the auxiliary problem, the same black-box solver for the original problem can be applied requiring minimum modifications. For example, if the BoomerAMG in hypre ([17, 22]) is used for the discretizations of the primary problem, then the user has to provide exactly the same discrete gradient matrix and vertex coordinates of the mesh, and in constructing the the HX preconditioner, the assembling routines for the vector Laplacian and scalar Laplacian matrices can be called twice with only the coefficients input switched. ###### Theorem 4.2. Locally, the indicator and both have the following efficiency bound Missing dimension or its units for \mskip (4.5) for all . The estimator and satisfy the following global upper bound Missing dimension or its units for \mskip (4.6) ###### Proof. Denote the true errors in the electric and magnetizing fields by e=u−uT,andE=σ−σT, respectively. It follows from (3.1), (3.2), and the triangle inequality that η2K = ∥∥μ1/2E−μ−1/2∇×e∥∥2K+∥∥β−1/2∇×E+β1/2e∥∥2K (4.7) ≤ (∥∥μ1/2E∥∥2K+∥∥μ−1/2∇×e∥∥2K+∥∥β−1/2∇×E∥∥2K+∥∥β1/2e∥∥2K) = (|||e|||2μ,β,K+|||E|||2β,μ,K), which implies the validity of (4.5) for . For , the exact same argument follows except by switching by locally recovered . To prove the global identity in (4.6), summing (4.7) over all gives η2 = ∥∥μ1/2E−μ−1/2∇×e∥∥2+∥∥β−1/2∇×E+β1/2e∥∥2 = Missing dimension or its units for \mskip Now, (4.6) follows from the fact that −(E,∇×e)+(∇×E,e)=0. Lastly, the global upper bound for the locally recovered follows from the fact that
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682206511497498, "perplexity": 514.9320812925124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859817.15/warc/CC-MAIN-20180617213237-20180617233237-00128.warc.gz"}
http://docplayer.net/13821460-Exam-2-is-at-7-pm-tomorrow-conflict-is-at-5-15-pm-in-151-loomis.html
# Exam 2 is at 7 pm tomorrow Conflict is at 5:15 pm in 151 Loomis Size: px Start display at page: Download "Exam 2 is at 7 pm tomorrow Conflict is at 5:15 pm in 151 Loomis" ## Transcription 1 * By request, but I m not vouching for these since I didn t write them Exam 2 is at 7 pm tomorrow Conflict is at 5:15 pm in 151 Loomis There are extra office hours today & tomorrow Lots of practice exams online Lots of optional smartphysics problems* This morning s HW deadline moved to Friday (80% deadline not moved still next Tuesday) Pay attention (lecture, webpage, , ) 2 Hour Exam 2 Center of Mass X cm x m i m i i Work Energy W NC KE PE Impulse-Momentum F t P 3 Two ice-skaters of mass 70 kg and 50 kg, each having an initial velocity of 10 m/s in the directions shown, collide and fall and slide across the ice together. The ice surface is horizontal & frictionless. 1. What is the speed of the skaters after the collision? A) V = 0 m/s B) V = 1.7 m/s C) V = 2.7 m/s D) V = 10 m/s E) V = 20 m/s 2. What is the angle q relative to the x axis that the two skaters travel after the collision? A) q = 41 degrees B) q = 28 degrees C) q = 25 degrees 4 A cart of mass M = 9 kg rolls without friction on a horizontal surface. It is attached through a freely pivoting initially-horizontal massless rod of length L to a ball of mass m = 3 kg. The system is initially at rest when the ball is released. The pendulum swings down and to the left, and at the bottom of its swing the ball is observed to have a velocity of 3.5 m/sec to the left. 3. Which of the following remain constant as the pendulum swings down? A) Horizontal component of the momentum of the ball B) Horizontal component of the momentum of the cart C) Horizontal component of the momentum of the ball + cart 5 A cart of mass M = 9 kg rolls without friction on a horizontal surface. It is attached through a freely pivoting initially-horizontal massless rod of length L to a ball of mass m = 3 kg. The system is initially at rest when the ball is released. The pendulum swings down and to the left, and at the bottom of its swing the ball is observed to have a velocity of 3.5 m/sec to the left. 4. What is the speed of the cart when the ball is at the bottom? A) V cart = 1.17 m/s B) V cart = 1.75 m/s C) V cart = 2.44 m/s D) V cart = 2.91 m/s E) V cart = 3.37 m/s 6 A cart of mass M = 9 kg rolls without friction on a horizontal surface. It is attached through a freely pivoting initially-horizontal massless rod of length L to a ball of mass m = 3 kg. The system is initially at rest when the ball is released. The pendulum swings down and to the left, and at the bottom of its swing the ball is observed to have a velocity of 3.5 m/sec to the left. 5. What is the length L of the pendulum? A) 0.59 m B) 0.83 m C) 1.18 m 7 A cart of mass M = 9 kg rolls without friction on a horizontal surface. It is attached through a freely pivoting initially-horizontal massless rod of length L to a ball of mass m = 3 kg. The system is initially at rest when the ball is released. The pendulum swings down and to the left, and at the bottom of its swing the ball is observed to have a velocity of 3.5 m/sec to the left. 6. How far to the right has the cart moved, when the ball is at the bottom? A) L/2 B) L/3 C) L/4 8 7. A comet of mass 10 9 kg is observed at a distance from the sun of 8 x m (mass of sun = 2 x kg) at a speed of m/s. Assuming no forces on it other than the sun's gravity, how fast will it be going when it is a distance of 2.25 x m from the sun? (The universal gravitational constant is G = 6.67x10-11 Nm 2 /kg 2 ) A) 12,900 m/s B) 18,300 m/s C) 23,700 m/s D) 29,200 m/s E) 33,800 m/s 9 8. Two fishermen, of masses 70 and 90 kg stand at opposite ends of their 20 meter boat. The boat (without fishermen) has a mass of 400 kg. There is no wind or current and the boat can move without friction on the water s surface. The 90 kg fisherman walks to the left end of the boat. How far has the boat moved when the fisherman reaches the left end? A) 5.8 m to the right B) 3.2 m to the right C) The boat does not move D) 3.2 m to left E) 5.8 m to left 10 9. The centers of three spheres having masses 1 kg, 2 kg, and 3 kg are placed at the corners of an equilateral triangle whose sides are each 1 meter long, as shown below. What is the horizontal position of the center of mass? A) x cm = 0.50 m B) x cm = 0.58 m C) x cm = 0.76 m y 3 kg 1 kg 1 m 2 kg x 11 An artillery shell of mass 20 kg is fired from a rail car which is initially at rest on a horizontal frictionless track. The combined mass of the car and cannon is 2000kg. As viewed by someone on the ground the shell moves with an initial speed of 300 m/s at an angle of 27 degrees above the horizontal and the rail car recoils to the right. 10. Relative to the ground, what is the speed of the rail car after the shell is fired? A) 1.36 m/s B) 2.67 m/s C) 3.00 m/s 12 An artillery shell of mass 20 kg is fired from a rail car which is initially at rest on a horizontal frictionless track. The combined mass of the car and cannon is 2000kg. As viewed by someone on the ground the shell moves with an initial speed of 300 m/s at an angle of 27 degrees above the horizontal and the rail car recoils to the right. 11. If the shell was accelerated through the cannon for a time of 0.03 seconds, what was the average force on the shell during this time? A) 200 N B) 2,300 N C) 15,000 N D) 90,000 N E) 200,000 N 13 A block of mass m = 1.8 kg starts at rest on a rough inclined plane a height H = 8m above the ground. It slides down the plane, across a frictionless horizontal floor, and then around a frictionless loop-the-loop of radius R = 2.0 m. On the floor the speed of the block is observed to be 11 m/s. 12. What is the work done by friction on the block as it slides down the inclined plane? A) -49 J B) -28 J C) -78 J D) -23 J E) -32 J 14 A block of mass m = 1.8 kg starts at rest on a rough inclined plane a height H = 8m above the ground. It slides down the plane, across a frictionless horizontal floor, and then around a frictionless loop-the-loop of radius R = 2.0 m. On the floor the speed of the block is observed to be 11 m/s. 13. What is the magnitude of the normal force exerted on the block at the top of the loop? A) 0 N B) 5.1 N C) 9.8 N D) 17.6 N E) 20.6 N 15 2.5 kg k=250 N/m h=1.5 m = 0.4 d = 0.50 m A 2.5 kg box is held released from rest 1.5 meters above the ground and slides down a frictionless ramp. It slides across a floor that is frictionless, except for a small section 0.5 meters wide that has a coefficient of kinetic friction of 0.4. At the left end, is a spring with spring constant 250 N/m. The box compresses the spring, and is accelerated back to the right. 14. What is the speed of the box at the bottom of the ramp? A) 5.4 m/s B) 3.7 m/s C) 2.8 m/s 16 2.5 kg k=250 N/m h=1.5 m = 0.4 d = 0.50 m A 2.5 kg box is held released from rest 1.5 meters above the ground and slides down a frictionless ramp. It slides across a floor that is frictionless, except for a small section 0.5 meters wide that has a coefficient of kinetic friction of 0.4. At the left end, is a spring with spring constant 250 N/m. The box compresses the spring, and is accelerated back to the right. 15. What is the maximum distance the spring is compressed by the box? A) 0.50 m B) 0.66 m C) 0.83 m D) 0.94 m E) 1.21 m 17 2.5 kg k=250 N/m h=1.5 m = 0.4 d = 0.50 m A 2.5 kg box is held released from rest 1.5 meters above the ground and slides down a frictionless ramp. It slides across a floor that is frictionless, except for a small section 0.5 meters wide that has a coefficient of kinetic friction of 0.4. At the left end, is a spring with spring constant 250 N/m. The box compresses the spring, and is accelerated back to the right. 16. What is the maximum height to which the box returns on the ramp? A) 1.1 m B) 1.3 m C) 1.5 m 18 In case 1 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball bounces off the box and after the collision the box is moving to the right with V 1 and the ball is moving to the left with speed v f. In case 2 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball sticks to the box and after the collision the box (with the ball stuck to it) is moving to the right with speed V 2. In both cases the box slides without friction. Assume all motion is horizontal. 17. In which case is the change in momentum of the ball the biggest? A) Case 1 B) Case 2 C) The change in momentum of the ball is the same in both cases. 19 In case 1 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball bounces off the box and after the collision the box is moving to the right with V 1 and the ball is moving to the left with speed v f. In case 2 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball sticks to the box and after the collision the box (with the ball stuck to it) is moving to the right with speed V 2. In both cases the box slides without friction. Assume all motion is horizontal. 18. Which of the following statements best describes V 1 and V 2? A) V 1 < V 2 B) V 1 = V 2 C) V 1 > V 2 20 In case 1 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball bounces off the box and after the collision the box is moving to the right with V 1 and the ball is moving to the left with speed v f. In case 2 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball sticks to the box and after the collision the box (with the ball stuck to it) is moving to the right with speed V 2. In both cases the box slides without friction. Assume all motion is horizontal. 19. In case 2 it is observed that V 2 = v 0 /3. What is the ratio of masses M/m? A) M/m = 1/2 B) M/m = 3/4 C) M/m = 4/3 D) M/m = 3/2 E) M/m = 2 21 In case 1 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball bounces off the box and after the collision the box is moving to the right with V 1 and the ball is moving to the left with speed v f. In case 2 a ball of mass m is thrown horizontally with speed v 0 at a stationary box of mass M. The ball sticks to the box and after the collision the box (with the ball stuck to it) is moving to the right with speed V 2. In both cases the box slides without friction. Assume all motion is horizontal. 20. In case 1 it is observed that v f = v 0 /2. What V 1 /v 0? A) V 1 /v 0 = 3M/2m B) V 1 /v 0 = 2M/3m C) V 1 /v 0 = 3m/2M D) V 1 /v 0 = 2m/3M E) V 1 /v 0 = m/2m 22 Two blocks of mass m A and m B are placed side by side on a frictionless horizontal table. At time t 0 both blocks are at rest and a constant force of the same magnitude is applied to each of the blocks. Block A has a smaller mass than block B (m A < m B ). 21. How do the momenta of the two blocks compare 5 seconds after t 0? A) p A < p B B) p A > p B C) p A = p B 22. How do the kinetic energies of the two blocks compare 5 seconds after t 0? A) K A < K B B) K A > K B C) K A = K B 23 Two blocks of mass m A and m B are placed side by side on a frictionless horizontal table. At time t 0 both blocks are at rest and a constant force of the same magnitude is applied to each of the blocks. Block A has a smaller mass than block B (m A < m B ). 23. After each block has traveled the distance of 1 m, which is correct? A) p A = p B B) K A = K B C) Both of the above ### Physics 125 Practice Exam #3 Chapters 6-7 Professor Siegel Physics 125 Practice Exam #3 Chapters 6-7 Professor Siegel Name: Lab Day: 1. A concrete block is pulled 7.0 m across a frictionless surface by means of a rope. The tension in the rope is 40 N; and the ### C B A T 3 T 2 T 1. 1. What is the magnitude of the force T 1? A) 37.5 N B) 75.0 N C) 113 N D) 157 N E) 192 N Three boxes are connected by massless strings and are resting on a frictionless table. Each box has a mass of 15 kg, and the tension T 1 in the right string is accelerating the boxes to the right at a ### 9. The kinetic energy of the moving object is (1) 5 J (3) 15 J (2) 10 J (4) 50 J 1. If the kinetic energy of an object is 16 joules when its speed is 4.0 meters per second, then the mass of the objects is (1) 0.5 kg (3) 8.0 kg (2) 2.0 kg (4) 19.6 kg Base your answers to questions 9 ### PHY231 Section 2, Form A March 22, 2012. 1. Which one of the following statements concerning kinetic energy is true? 1. Which one of the following statements concerning kinetic energy is true? A) Kinetic energy can be measured in watts. B) Kinetic energy is always equal to the potential energy. C) Kinetic energy is always ### 10.1 Quantitative. Answer: A Var: 50+ Chapter 10 Energy and Work 10.1 Quantitative 1) A child does 350 J of work while pulling a box from the ground up to his tree house with a rope. The tree house is 4.8 m above the ground. What is the mass ### PHY231 Section 1, Form B March 22, 2012 1. A car enters a horizontal, curved roadbed of radius 50 m. The coefficient of static friction between the tires and the roadbed is 0.20. What is the maximum speed with which the car can safely negotiate ### Lecture 07: Work and Kinetic Energy. Physics 2210 Fall Semester 2014 Lecture 07: Work and Kinetic Energy Physics 2210 Fall Semester 2014 Announcements Schedule next few weeks: 9/08 Unit 3 9/10 Unit 4 9/15 Unit 5 (guest lecturer) 9/17 Unit 6 (guest lecturer) 9/22 Unit 7, ### Physics 2A, Sec B00: Mechanics -- Winter 2011 Instructor: B. Grinstein Final Exam Physics 2A, Sec B00: Mechanics -- Winter 2011 Instructor: B. Grinstein Final Exam INSTRUCTIONS: Use a pencil #2 to fill your scantron. Write your code number and bubble it in under "EXAM NUMBER;" an entry ### SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. Exam Name SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. 1) A person on a sled coasts down a hill and then goes over a slight rise with speed 2.7 m/s. ### AP Physics C Fall Final Web Review Name: Class: _ Date: _ AP Physics C Fall Final Web Review Multiple Choice Identify the choice that best completes the statement or answers the question. 1. On a position versus time graph, the slope of ### F N A) 330 N 0.31 B) 310 N 0.33 C) 250 N 0.27 D) 290 N 0.30 E) 370 N 0.26 Physics 23 Exam 2 Spring 2010 Dr. Alward Page 1 1. A 250-N force is directed horizontally as shown to push a 29-kg box up an inclined plane at a constant speed. Determine the magnitude of the normal force, ### Ph\sics 2210 Fall 2012 - Novcmbcr 21 David Ailion Ph\sics 2210 Fall 2012 - Novcmbcr 21 David Ailion Unid: Discussion T A: Bryant Justin Will Yuan 1 Place answers in box provided for each question. Specify units for each answer. Circle correct answer(s) ### P211 Midterm 2 Spring 2004 Form D 1. An archer pulls his bow string back 0.4 m by exerting a force that increases uniformly from zero to 230 N. The equivalent spring constant of the bow is: A. 115 N/m B. 575 N/m C. 1150 N/m D. 287.5 N/m ### Work, Energy & Momentum Homework Packet Worksheet 1: This is a lot of work! Work, Energy & Momentum Homework Packet Worksheet 1: This is a lot of work! 1. A student holds her 1.5-kg psychology textbook out of a second floor classroom window until her arm is tired; then she releases ### AP Physics - Chapter 8 Practice Test AP Physics - Chapter 8 Practice Test Multiple Choice Identify the choice that best completes the statement or answers the question. 1. A single conservative force F x = (6.0x 12) N (x is in m) acts on ### Problem Set #8 Solutions MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department 8.01L: Physics I November 7, 2015 Prof. Alan Guth Problem Set #8 Solutions Due by 11:00 am on Friday, November 6 in the bins at the intersection ### B) 286 m C) 325 m D) 367 m Answer: B Practice Midterm 1 1) When a parachutist jumps from an airplane, he eventually reaches a constant speed, called the terminal velocity. This means that A) the acceleration is equal to g. B) the force of ### HW Set VI page 1 of 9 PHYSICS 1401 (1) homework solutions HW Set VI page 1 of 9 10-30 A 10 g bullet moving directly upward at 1000 m/s strikes and passes through the center of mass of a 5.0 kg block initially at rest (Fig. 10-33 ). The bullet emerges from the ### Chapter 9. particle is increased. Chapter 9 9. Figure 9-36 shows a three particle system. What are (a) the x coordinate and (b) the y coordinate of the center of mass of the three particle system. (c) What happens to the center of mass ### KE =? v o. Page 1 of 12 Page 1 of 12 CTEnergy-1. A mass m is at the end of light (massless) rod of length R, the other end of which has a frictionless pivot so the rod can swing in a vertical plane. The rod is initially horizontal ### Work, Energy and Power Practice Test 1 Name: ate: 1. How much work is required to lift a 2-kilogram mass to a height of 10 meters?. 5 joules. 20 joules. 100 joules. 200 joules 5. ar and car of equal mass travel up a hill. ar moves up the hill ### 8. Potential Energy and Conservation of Energy Potential Energy: When an object has potential to have work done on it, it is said to have potential 8. Potential Energy and Conservation of Energy Potential Energy: When an object has potential to have work done on it, it is said to have potential energy, e.g. a ball in your hand has more potential energy ### AP Physics 1 Midterm Exam Review AP Physics 1 Midterm Exam Review 1. The graph above shows the velocity v as a function of time t for an object moving in a straight line. Which of the following graphs shows the corresponding displacement ### Midterm Solutions. mvr = ω f (I wheel + I bullet ) = ω f 2 MR2 + mr 2 ) ω f = v R. 1 + M 2m Midterm Solutions I) A bullet of mass m moving at horizontal velocity v strikes and sticks to the rim of a wheel a solid disc) of mass M, radius R, anchored at its center but free to rotate i) Which of ### Practice Exam Three Solutions MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Physics Physics 8.01T Fall Term 2004 Practice Exam Three Solutions Problem 1a) (5 points) Collisions and Center of Mass Reference Frame In the lab frame, ### Exam Three Momentum Concept Questions Exam Three Momentum Concept Questions Isolated Systems 4. A car accelerates from rest. In doing so the absolute value of the car's momentum changes by a certain amount and that of the Earth changes by: ### Chapter 7: Momentum and Impulse Chapter 7: Momentum and Impulse 1. When a baseball bat hits the ball, the impulse delivered to the ball is increased by A. follow through on the swing. B. rapidly stopping the bat after impact. C. letting ### Chapter 6 Work and Energy Chapter 6 WORK AND ENERGY PREVIEW Work is the scalar product of the force acting on an object and the displacement through which it acts. When work is done on or by a system, the energy of that system ### Lab 8: Ballistic Pendulum Lab 8: Ballistic Pendulum Equipment: Ballistic pendulum apparatus, 2 meter ruler, 30 cm ruler, blank paper, carbon paper, masking tape, scale. Caution In this experiment a steel ball is projected horizontally ### Chapter 4. Forces and Newton s Laws of Motion. continued Chapter 4 Forces and Newton s Laws of Motion continued Clicker Question 4.3 A mass at rest on a ramp. How does the friction between the mass and the table know how much force will EXACTLY balance the gravity ### Physics: Principles and Applications, 6e Giancoli Chapter 4 Dynamics: Newton's Laws of Motion Physics: Principles and Applications, 6e Giancoli Chapter 4 Dynamics: Newton's Laws of Motion Conceptual Questions 1) Which of Newton's laws best explains why motorists should buckle-up? A) the first law ### PHYS 211 FINAL FALL 2004 Form A 1. Two boys with masses of 40 kg and 60 kg are holding onto either end of a 10 m long massless pole which is initially at rest and floating in still water. They pull themselves along the pole toward each ### Problem Set 1. Ans: a = 1.74 m/s 2, t = 4.80 s Problem Set 1 1.1 A bicyclist starts from rest and after traveling along a straight path a distance of 20 m reaches a speed of 30 km/h. Determine her constant acceleration. How long does it take her to ### At the skate park on the ramp At the skate park on the ramp 1 On the ramp When a cart rolls down a ramp, it begins at rest, but starts moving downward upon release covers more distance each second When a cart rolls up a ramp, it rises ### PHY121 #8 Midterm I 3.06.2013 PHY11 #8 Midterm I 3.06.013 AP Physics- Newton s Laws AP Exam Multiple Choice Questions #1 #4 1. When the frictionless system shown above is accelerated by an applied force of magnitude F, the tension ### A uranium nucleus (at rest) undergoes fission and splits into two fragments, one heavy and the other light. Which fragment has the greater speed? A uranium nucleus (at rest) undergoes fission and splits into two fragments, one heavy and the other light. Which fragment has the greater speed? 1 2 PHYS 1021: Chap. 9, Pg 2 Page 1 1 A uranium nucleus ### Name Period WORKSHEET: KINETIC AND POTENTIAL ENERGY PROBLEMS. 1. Stored energy or energy due to position is known as energy. Name Period Date WORKSHEET: KINETIC AND POTENTIAL ENERGY PROBLEMS 1. Stored energy or energy due to position is known as energy. 2. The formula for calculating potential energy is. 3. The three factors ### Curso2012-2013 Física Básica Experimental I Cuestiones Tema IV. Trabajo y energía. 1. A body of mass m slides a distance d along a horizontal surface. How much work is done by gravity? A) mgd B) zero C) mgd D) One cannot tell from the given information. E) None of these is correct. 2. ### Chapter 5 Using Newton s Laws: Friction, Circular Motion, Drag Forces. Copyright 2009 Pearson Education, Inc. Chapter 5 Using Newton s Laws: Friction, Circular Motion, Drag Forces Units of Chapter 5 Applications of Newton s Laws Involving Friction Uniform Circular Motion Kinematics Dynamics of Uniform Circular ### PHYSICS 111 HOMEWORK SOLUTION #10. April 8, 2013 PHYSICS HOMEWORK SOLUTION #0 April 8, 203 0. Find the net torque on the wheel in the figure below about the axle through O, taking a = 6.0 cm and b = 30.0 cm. A torque that s produced by a force can be ### CHAPTER 6 WORK AND ENERGY CHAPTER 6 WORK AND ENERGY CONCEPTUAL QUESTIONS. REASONING AND SOLUTION The work done by F in moving the box through a displacement s is W = ( F cos 0 ) s= Fs. The work done by F is W = ( F cos θ). s From ### WORKSHEET: KINETIC AND POTENTIAL ENERGY PROBLEMS WORKSHEET: KINETIC AND POTENTIAL ENERGY PROBLEMS 1. Stored energy or energy due to position is known as Potential energy. 2. The formula for calculating potential energy is mgh. 3. The three factors that ### Physics Notes Class 11 CHAPTER 5 LAWS OF MOTION 1 P a g e Inertia Physics Notes Class 11 CHAPTER 5 LAWS OF MOTION The property of an object by virtue of which it cannot change its state of rest or of uniform motion along a straight line its own, is ### Tennessee State University Tennessee State University Dept. of Physics & Mathematics PHYS 2010 CF SU 2009 Name 30% Time is 2 hours. Cheating will give you an F-grade. Other instructions will be given in the Hall. MULTIPLE CHOICE. ### Physics 1A Lecture 10C Physics 1A Lecture 10C "If you neglect to recharge a battery, it dies. And if you run full speed ahead without stopping for water, you lose momentum to finish the race. --Oprah Winfrey Static Equilibrium ### Name: Partners: Period: Coaster Option: 1. In the space below, make a sketch of your roller coaster. 1. In the space below, make a sketch of your roller coaster. 2. On your sketch, label different areas of acceleration. Put a next to an area of negative acceleration, a + next to an area of positive acceleration, ### 2.1 Force and Motion Kinematics looks at velocity and acceleration without reference to the cause of the acceleration. 2.1 Force and Motion Kinematics looks at velocity and acceleration without reference to the cause of the acceleration. Dynamics looks at the cause of acceleration: an unbalanced force. Isaac Newton was ### Work-Energy Bar Charts Name: Work-Energy Bar Charts Read from Lesson 2 of the Work, Energy and Power chapter at The Physics Classroom: http://www.physicsclassroom.com/class/energy/u5l2c.html MOP Connection: Work and Energy: ### Physics 201 Homework 8 Physics 201 Homework 8 Feb 27, 2013 1. A ceiling fan is turned on and a net torque of 1.8 N-m is applied to the blades. 8.2 rad/s 2 The blades have a total moment of inertia of 0.22 kg-m 2. What is the ### B Answer: neither of these. Mass A is accelerating, so the net force on A must be non-zero Likewise for mass B. CTA-1. An Atwood's machine is a pulley with two masses connected by a string as shown. The mass of object A, m A, is twice the mass of object B, m B. The tension T in the string on the left, above mass ### Chapter 4 Dynamics: Newton s Laws of Motion Chapter 4 Dynamics: Newton s Laws of Motion Units of Chapter 4 Force Newton s First Law of Motion Mass Newton s Second Law of Motion Newton s Third Law of Motion Weight the Force of Gravity; and the Normal ### circular motion & gravitation physics 111N circular motion & gravitation physics 111N uniform circular motion an object moving around a circle at a constant rate must have an acceleration always perpendicular to the velocity (else the speed would ### Chapter 07 Test A. Name: Class: Date: Multiple Choice Identify the choice that best completes the statement or answers the question. Class: Date: Chapter 07 Test A Multiple Choice Identify the choice that best completes the statement or answers the question. 1. An example of a vector quantity is: a. temperature. b. length. c. velocity. ### Name per due date mail box Name per due date mail box Rolling Momentum Lab (1 pt for complete header) Today in lab, we will be experimenting with momentum and measuring the actual force of impact due to momentum of several rolling ### AP Physics Circular Motion Practice Test B,B,B,A,D,D,C,B,D,B,E,E,E, 14. 6.6m/s, 0.4 N, 1.5 m, 6.3m/s, 15. 12.9 m/s, 22.9 m/s AP Physics Circular Motion Practice Test B,B,B,A,D,D,C,B,D,B,E,E,E, 14. 6.6m/s, 0.4 N, 1.5 m, 6.3m/s, 15. 12.9 m/s, 22.9 m/s Answer the multiple choice questions (2 Points Each) on this sheet with capital ### AP1 Oscillations. 1. Which of the following statements about a spring-block oscillator in simple harmonic motion about its equilibrium point is false? 1. Which of the following statements about a spring-block oscillator in simple harmonic motion about its equilibrium point is false? (A) The displacement is directly related to the acceleration. (B) The ### Steps to Solving Newtons Laws Problems. Mathematical Analysis With Newtons Laws similar to projectiles (x y) isolation Steps to Solving Newtons Laws Problems. 1) FBD 2) Axis 3) Components 4) Fnet (x) (y) 5) Subs 1 Visual Samples F 4 1) F 3 F ### Exercises on Work, Energy, and Momentum. A B = 20(10)cos98 A B 28 Exercises on Work, Energy, and Momentum Exercise 1.1 Consider the following two vectors: A : magnitude 20, direction 37 North of East B : magnitude 10, direction 45 North of West Find the scalar product ### Lesson 3 - Understanding Energy (with a Pendulum) Lesson 3 - Understanding Energy (with a Pendulum) Introduction This lesson is meant to introduce energy and conservation of energy and is a continuation of the fundamentals of roller coaster engineering. ### WORK DONE BY A CONSTANT FORCE WORK DONE BY A CONSTANT FORCE The definition of work, W, when a constant force (F) is in the direction of displacement (d) is W = Fd SI unit is the Newton-meter (Nm) = Joule, J If you exert a force of ### Acceleration due to Gravity Acceleration due to Gravity 1 Object To determine the acceleration due to gravity by different methods. 2 Apparatus Balance, ball bearing, clamps, electric timers, meter stick, paper strips, precision ### 8. As a cart travels around a horizontal circular track, the cart must undergo a change in (1) velocity (3) speed (2) inertia (4) weight 1. What is the average speed of an object that travels 6.00 meters north in 2.00 seconds and then travels 3.00 meters east in 1.00 second? 9.00 m/s 3.00 m/s 0.333 m/s 4.24 m/s 2. What is the distance traveled ### MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. 1) Vector A has length 4 units and directed to the north. Vector B has length 9 units and is directed ### Chapter #7 Giancoli 6th edition Problem Solutions Chapter #7 Giancoli 6th edition Problem Solutions ü Problem #8 QUESTION: A 9300 kg boxcar traveling at 5.0 m/s strikes a second boxcar at rest. The two stick together and move off with a speed of 6.0 m/s. ### Friction and Newton s 3rd law Lecture 4 Friction and Newton s 3rd law Pre-reading: KJF 4.8 Frictional Forces Friction is a force exerted by a surface. The frictional force is always parallel to the surface Due to roughness of both ### 1. Newton s Laws of Motion and their Applications Tutorial 1 1. Newton s Laws of Motion and their Applications Tutorial 1 1.1 On a planet far, far away, an astronaut picks up a rock. The rock has a mass of 5.00 kg, and on this particular planet its weight is 40.0 ### Weight The weight of an object is defined as the gravitational force acting on the object. Unit: Newton (N) Gravitational Field A gravitational field as a region in which an object experiences a force due to gravitational attraction Gravitational Field Strength The gravitational field strength at a point in ### L-9 Conservation of Energy, Friction and Circular Motion. Kinetic energy. conservation of energy. Potential energy. Up and down the track L-9 Conseration of Energy, Friction and Circular Motion Kinetic energy, potential energy and conseration of energy What is friction and what determines how big it is? Friction is what keeps our cars moing ### b. Velocity tells you both speed and direction of an object s movement. Velocity is the change in position divided by the change in time. I. What is Motion? a. Motion - is when an object changes place or position. To properly describe motion, you need to use the following: 1. Start and end position? 2. Movement relative to what? 3. How far ### Homework #8 203-1-1721 Physics 2 for Students of Mechanical Engineering. Part A Homework #8 203-1-1721 Physics 2 for Students of Mechanical Engineering Part A 1. Four particles follow the paths shown in Fig. 32-33 below as they pass through the magnetic field there. What can one conclude ### Kinetic Energy (A) stays the same stays the same (B) increases increases (C) stays the same increases (D) increases stays the same. 1. A cart full of water travels horizontally on a frictionless track with initial velocity v. As shown in the diagram, in the back wall of the cart there is a small opening near the bottom of the wall ### Physics 590 Homework, Week 6 Week 6, Homework 1 Physics 590 Homework, Week 6 Week 6, Homework 1 Prob. 6.1.1 A descent vehicle landing on the moon has a vertical velocity toward the surface of the moon of 35 m/s. At the same time it has a horizontal ### Physics 126 Practice Exam #3 Professor Siegel Physics 126 Practice Exam #3 Professor Siegel Name: Lab Day: 1. Which one of the following statements concerning the magnetic force on a charged particle in a magnetic field is true? A) The magnetic force ### Chapter 3.8 & 6 Solutions Chapter 3.8 & 6 Solutions P3.37. Prepare: We are asked to find period, speed and acceleration. Period and frequency are inverses according to Equation 3.26. To find speed we need to know the distance traveled ### Chapter 4. Forces and Newton s Laws of Motion. continued Chapter 4 Forces and Newton s Laws of Motion continued 4.9 Static and Kinetic Frictional Forces When an object is in contact with a surface forces can act on the objects. The component of this force acting ### Physics 1401 - Exam 2 Chapter 5N-New Physics 1401 - Exam 2 Chapter 5N-New 2. The second hand on a watch has a length of 4.50 mm and makes one revolution in 60.00 s. What is the speed of the end of the second hand as it moves in uniform circular ### Chapter 9. is gradually increased, does the center of mass shift toward or away from that particle or does it remain stationary. Chapter 9 9.2 Figure 9-37 shows a three particle system with masses m 1 3.0 kg, m 2 4.0 kg, and m 3 8.0 kg. The scales are set by x s 2.0 m and y s 2.0 m. What are (a) the x coordinate and (b) the y coordinate ### Chapter 11. h = 5m. = mgh + 1 2 mv 2 + 1 2 Iω 2. E f. = E i. v = 4 3 g(h h) = 4 3 9.8m / s2 (8m 5m) = 6.26m / s. ω = v r = 6. Chapter 11 11.7 A solid cylinder of radius 10cm and mass 1kg starts from rest and rolls without slipping a distance of 6m down a house roof that is inclined at 30 degrees (a) What is the angular speed ### Fundamental Mechanics: Supplementary Exercises Phys 131 Fall 2015 Fundamental Mechanics: Supplementary Exercises 1 Motion diagrams: horizontal motion A car moves to the right. For an initial period it slows down and after that it speeds up. Which of ### Physics 41 HW Set 1 Chapter 15 Physics 4 HW Set Chapter 5 Serway 8 th OC:, 4, 7 CQ: 4, 8 P: 4, 5, 8, 8, 0, 9,, 4, 9, 4, 5, 5 Discussion Problems:, 57, 59, 67, 74 OC CQ P: 4, 5, 8, 8, 0, 9,, 4, 9, 4, 5, 5 Discussion Problems:, 57, 59, ### VELOCITY, ACCELERATION, FORCE VELOCITY, ACCELERATION, FORCE velocity Velocity v is a vector, with units of meters per second ( m s ). Velocity indicates the rate of change of the object s position ( r ); i.e., velocity tells you how ### MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. Exam Name MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. Solve the problem. (Use g = 9.8 m/s2.) 1) A 21 kg box must be slid across the floor. If ### AP Physics C. Oscillations/SHM Review Packet AP Physics C Oscillations/SHM Review Packet 1. A 0.5 kg mass on a spring has a displacement as a function of time given by the equation x(t) = 0.8Cos(πt). Find the following: a. The time for one complete ### PHYSICS 111 HOMEWORK SOLUTION #8. March 24, 2013 PHYSICS 111 HOMEWORK SOLUTION #8 March 24, 2013 0.1 A particle of mass m moves with momentum of magnitude p. a) Show that the kinetic energy of the particle is: K = p2 2m (Do this on paper. Your instructor ### Sample Questions for the AP Physics 1 Exam Sample Questions for the AP Physics 1 Exam Sample Questions for the AP Physics 1 Exam Multiple-choice Questions Note: To simplify calculations, you may use g 5 10 m/s 2 in all problems. Directions: Each ### Review Assessment: Lec 02 Quiz COURSES > PHYSICS GUEST SITE > CONTROL PANEL > 1ST SEM. QUIZZES > REVIEW ASSESSMENT: LEC 02 QUIZ Review Assessment: Lec 02 Quiz Name: Status : Score: Instructions: Lec 02 Quiz Completed 20 out of 100 points ### LAB 6 - GRAVITATIONAL AND PASSIVE FORCES L06-1 Name Date Partners LAB 6 - GRAVITATIONAL AND PASSIVE FORCES OBJECTIVES And thus Nature will be very conformable to herself and very simple, performing all the great Motions of the heavenly Bodies ### Chapter 7 Momentum and Impulse Chapter 7 Momentum and Impulse Collisions! How can we describe the change in velocities of colliding football players, or balls colliding with bats?! How does a strong force applied for a very short time ### Potential / Kinetic Energy Remedial Exercise Potential / Kinetic Energy Remedial Exercise This Conceptual Physics exercise will help you in understanding the Law of Conservation of Energy, and its application to mechanical collisions. Exercise Roles: ### BHS Freshman Physics Review. Chapter 2 Linear Motion Physics is the oldest science (astronomy) and the foundation for every other science. BHS Freshman Physics Review Chapter 2 Linear Motion Physics is the oldest science (astronomy) and the foundation for every other science. Galileo (1564-1642): 1 st true scientist and 1 st person to use ### STUDY PACKAGE. Available Online : www.mathsbysuhag.com fo/u fopkjr Hkh# tu] ugha vkjehks dke] foifr ns[k NksM+s rqjar e/;e eu dj ';kea iq#"k flag ladyi dj] lgrs foifr vusd] ^cuk^ u NksM+s /;s; dks] j?kqcj jk[ks VsdAA jfpr% ekuo /kez iz.ksrk ln~xq# Jh j.knksm+nklth ### On Quiz: Change #2 to 9/23 Wednesday, October 1 (!) Objective: SWBAT read and understand speed graphs 1. Pick up something 2. Write HW: Get Test Signed! Everybody!! 3. Warm up quiz today! Folders between you! On Quiz: Change #2 ### Unit 3 Work and Energy Suggested Time: 25 Hours Unit 3 Work and Energy Suggested Time: 25 Hours PHYSICS 2204 CURRICULUM GUIDE 55 DYNAMICS Work and Energy Introduction When two or more objects are considered at once, a system is involved. To make sense ### 226 Chapter 15: OSCILLATIONS Chapter 15: OSCILLATIONS 1. In simple harmonic motion, the restoring force must be proportional to the: A. amplitude B. frequency C. velocity D. displacement E. displacement squared 2. An oscillatory motion ### KE = ½mv 2 PE = mgh W = Fdcosθ THINK ENERGY! (KE F + PE F ) = (KE 0 + PE 0 ) + W NC. Tues Oct 6 Assign 7 Fri Pre-class Thursday Tues Oct 6 Assign 7 Fri Pre-class Thursday Conservation of Energy Work, KE, PE, Mech Energy Power To conserve total energy means that the total energy is constant or stays the same. With Work, we now have ### If you put the same book on a tilted surface the normal force will be less. The magnitude of the normal force will equal: N = W cos θ Experiment 4 ormal and Frictional Forces Preparation Prepare for this week's quiz by reviewing last week's experiment Read this week's experiment and the section in your textbook dealing with normal forces ### 8.012 Physics I: Classical Mechanics Fall 2008 MIT OpenCourseWare http://ocw.mit.edu 8.012 Physics I: Classical Mechanics Fall 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. MASSACHUSETTS INSTITUTE ### QUESTIONS : CHAPTER-5: LAWS OF MOTION QUESTIONS : CHAPTER-5: LAWS OF MOTION 1. What is Aristotle s fallacy? 2. State Aristotlean law of motion 3. Why uniformly moving body comes to rest? 4. What is uniform motion? 5. Who discovered Aristotlean ### Practice Test SHM with Answers Practice Test SHM with Answers MPC 1) If we double the frequency of a system undergoing simple harmonic motion, which of the following statements about that system are true? (There could be more than one
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269941806793213, "perplexity": 1091.4478822734454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320593.91/warc/CC-MAIN-20170625221343-20170626001343-00421.warc.gz"}
https://www.physicsforums.com/threads/forces-on-a-steel-ball-of-mass-question.235774/
# Forces on a steel ball of mass Question 1. May 18, 2008 ### looi76 1. The problem statement, all variables and given/known data A steel ball of mass $$73g$$ is held above a horizontal steel plate, as illustrated in the figure below. http://img59.imageshack.us/img59/3917/physicsp24bqd3.png [Broken] The ball is dropped from rest and it bounces on the plate, reaching a height h. (a) Calculate the speed of ball as it reaches the plate. (b) As the ball loses contact with the plate after bouncing, the kinetic energy of the ball is $$90\%$$ of that just before bouncing. Calculate (i) the height h to which the ball bounces. (ii) the speed of the ball as it leaves the plate after bouncing. (c) Using your answers to (a) and (b), determine the change in momentum of the ball during the bounce. 2. Relevant equations $$K.E = \frac{1}{2}mv^2$$ $$P.E = mgh$$ 3. The attempt at a solution (a) $$h = 1.6m \ , \ g = 9.81 ms^{-2}$$ $$\frac{1}{2}\not{m}v^2 = \not{m}gh$$ $$v = \sqrt{2gh}$$ $$v = \sqrt{2 \times 9.81 \times 1.6}$$ $$v = 5.6 ms^{-1}$$ (b)(i) Don't know how to solve this this question... need help! Last edited by a moderator: May 3, 2017 2. May 18, 2008 ### danago For b, consider the laws of conservation of energy. You can calculate the amount of mechanical energy in the ball before it bounces by considering both its velocity and height at some stage during the motion. Once you have found the amount of energy in the ball, and then after taking into consideration the 10% loss due to the bounce, you can use the same idea of conservation of energy to find the height to which it bounces, remembering that at the top of the bounce, the ball has a speed of zero. Similar Discussions: Forces on a steel ball of mass Question
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8815377950668335, "perplexity": 401.86964846686556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424148.78/warc/CC-MAIN-20170722202635-20170722222635-00113.warc.gz"}
https://nbviewer.jupyter.org/gist/anonymous/cb53d06b837be97ebe32
# The Statistics of Buterin's Random Circuit Protocol¶ Matthew Wampler-Doty Abstract. In this paper, the statistical behavior of of a mining protocol due to Vitalik Buterin (2014), dubbed Random Circuit, is investigated. It is revealed to follow a form of the geometric distribution. Several means of parameter estimation are demonstrated, and a statistical model for pooled mining is presented. Finally, a control mechanism for both average pooled mining time and variance is suggested. This opens the doorway to subsequent study of control protocols based on this mechanism. ## Overview¶ It is well known that BitCoin was designed so with a simple control mechanism for keeping issuance at a constant expected rate, based on a difficulty $D$. In the BitCoin design, a single miner can expect mining time to follow a geometric distribution. In fact, any pool of miners can also be modeled as following a geometric distribution in their mining time. This paper presents a generalization of the BitCoin mining protocol intended to provide relief to the issue of mining time variance. This is accomplished by looking at mining protocols governed by two difficulty variables, rather than just one. A generalization of the geometric distribution describing BitCoin is presented, and shown to faithfully capture the behavior of the new proposed mining protocols. Finally, we show that the new mining protocols provide a simple mechanism for controlling mining time variance to a desired value. ## Mining Protocol¶ In this section we present Vitalik Buterin's (2014) Random Circuit (RC) mining protocol. The system is admittedly rather complicated; this is because its original intent was to thwart an easy implementation in an ASIC. We have modified it slightly to reuse our bitcoin_hash helper function where appropriate, and to have the same two parameter difficulty API we provided for IterCoin. The idea behind RC is to run a random O(S) computation requiring O(S) space. In [1]: from random import randint import hashlib return int('0x' + hashlib.sha256(hashlib.sha256(str(block_header + nonce)).digest()).hexdigest(), 16) max_val = 2**256 class SeedObj(): "A class for managing a random number state, using a linear congruential generator" def __init__(self, seed): self.seed = seed self.a = 3**160 self.c = 7**80 self.n = 2**256 - 4294968273 # secp256k1n, why not def rand(self, r): self.seed = (self.seed * self.a + self.c) % self.n return self.seed % r def encode_int(x): "A helper function to provide a standard 64-bit big-endian encoding for numbers" o = '' for _ in range(8): o = chr(x % 256) + o x //= 256 return o ops = { "plus": lambda x,y: (x + y) % 2**64, "times": lambda x,y: (x * y) % 2**64, "xor": lambda x,y: x^y, "and": lambda x,y: x&y, "or": lambda x,y: x|y, "not": lambda x,y: 2**64-1-x, "nxor": lambda x,y: (2**64-1-x) ^ y, "rshift": lambda x,y: x >> (y % 64) } def gentape(W, H, SEED): "Generate a random tape of binary instructions" s = SeedObj(SEED) tape = [] for i in range(H): op = ops.keys()[s.rand(len(ops))] r = s.rand(100) if r < 20 and i > 20: x1 = tape[-r]["x1"] else: x1 = s.rand(W) x2 = s.rand(W) tape.append({"op": op, "x1": x1, "x2": x2}) return tape def runtape(TAPE, SEED, S): import hashlib s = SeedObj(SEED) mem = [s.rand(2**64) for _ in range(S)] dir = 1 for i in range(S // 100): for j in (range(100) if dir == 1 else range(99, -1, -1)): t = TAPE[i * 100 + j] mem[t["x1"]] = ops[t["op"]](mem[t["x1"]], mem[t["x2"]]) if 2 < mem[t["x1"]] % 37 < 9: dir *= -1 return int('0x' + hashlib.sha256(''.join(encode_int(x) for x in mem)).hexdigest(), 16) def PoWVerify(block_header, nonce, S, D): tape = gentape(S, S, bitcoin_hash(block_header, nonce)) h = runtape(tape, bitcoin_hash(block_header, nonce), S) return h < max_val / D def rc_mine(block_header, S, D): nonce = 1 while not PoWVerify(block_header, nonce, S, D): nonce += 1 return nonce # Example Run rc_mine(randint(0,max_val), 10, 10) Out[1]: 5 ## Statistical Behavior¶ ### Geometric Random Distributions¶ The statistical behavior of our two proposed two parameter difficulty systems can be understood using a geometric random distribution. Let $T$ be a random variable describing the mining time for one of our two algorithms. In either case, we may model it as following a CDF: $$\mathbb{P}[T < t] \approx 1 - (1 - 1/D)^{t / (\hat{C} S)}$$ Here $D$ reflects the conventional difficulty D parameter due to Nakamoto, $S$ reflects the number of steps S in itercoin and RC, and $\hat{C}$ represents the amount of time to compute a single step; we will call it the compute power. We can think of $T$ as having support on a discrete set of values $\hat{C} S \mathbb{Z}^+ = \{\hat{C} S, 2 \hat{C} S, 3 \hat{C} S, \ldots\}$. This reflects the observation that the mining process involves a discrete number of trials: either the miner successfully mines on their first try, or their second try, etc. This allows us to recover a conventional geometric distribution PDF: $$\mathbb{P}\left[ T / (\hat{C} S) = k\right] = \left(1 - \frac{1}{D}\right)^k \frac{1}{D}$$ ### Median¶ The first statistic we investigate is the median. This is selected because it is well known that the geometric distribution converges to an exponential distribution as $D$ tends to infinity, and exponential distributions often have extreme outliars. Since the median tends to be robust to outliars, it should be expected to be more stable even for large values of $D$. Given our model, we can calculate the median for $T$ to be: $$\begin{eqnarray*} \operatorname{Median}[T] & \approx & -\frac{\hat{C}S}{\log_2(1-1/D)} \end{eqnarray*}$$ This suggests that the median should increase linearly with $S$. This is straightforward to test with Monte Carlo; we first give timing programs so we can test our mining protocols in vivo: In [2]: from timeit import timeit def time_rc_mine(S,D): return timeit('rc_mine(randint(0,max_val), %d, %d)' % (S,D), 'from random import randint ; from __main__ import rc_mine, max_val', number=1) We begin with looking at the behavior of Random Circuit's median mining time, keeping $D=2$ constant while varying $S$. We expect median mining time to increase linearly with $S$. Moreover, since $D$ is known, we can easily recover an estimate of $\hat{C}$: In [3]: rc_S_difficulties = range(10,370,50) rc_S_samples = [[time_rc_mine(S,2) for _ in range(1000)] for S in rc_S_difficulties] In [4]: %matplotlib inline %config InlineBackend.figure_format = 'svg' In [5]: import numpy as np from pylab import polyfit import matplotlib.pyplot as plt rc_S_medians = map(np.median, rc_S_samples) rc_S_medians_A, rc_S_medians_k = \ polyfit(rc_S_difficulties, rc_S_medians, 1) plt.plot(rc_S_difficulties, rc_S_medians) plt.plot(rc_S_difficulties, [rc_S_medians_A*S + rc_S_medians_k for S in rc_S_difficulties], '-r') plt.title('$S$ vs. Median Mining Time (Linear)') plt.ylabel('Seconds') plt.xlabel('$S$') plt.show() rc_S_medians_C = - rc_S_medians_A * np.log2(1-1./2) print "Estimated Ĉ:", rc_S_medians_C Estimated Ĉ: 2.22365345274e-05 The behavior with respect to $D$ is more complicated. Depending on how large $D$ is, different patterns of behavior emerge. If $D$ is large, then $\log_2(1-1/D) \approx 0$, making it hard to recover $\hat{C}$ from median estimates. A reasonable limit approximation is available, however. Let $p=\frac{1}{D}$, this means that as $D$ tends to infinity then $p$ tends to 0. The Laurent series expansion of $-\frac{1}{\log_2(1-p)}$ around $p = 0$ is given by: $$\begin{eqnarray*} -\frac{1}{\log_2(1-p)} & = & -\frac{\log(2)}{2} + \frac{\log(2)}{p} + O(p) \\ & \approx & -\frac{\log(2)}{2} + \log(2) D \end{eqnarray*}$$ Hence for large $D$, we should expect $\operatorname{Median}[T] \propto \log(2) \hat{C} S D$. This implies empirically for a collection of samples with varying values of $D$ that $\hat{C}$ can be recovered via a simple linear regression. In [6]: rc_D_difficulties = range(1,502,50) rc_D_samples = [[time_rc_mine(1,D) for _ in range(1000)] for D in rc_D_difficulties] In [7]: rc_D_medians = map(np.median, rc_D_samples) plt.plot(rc_D_difficulties, rc_D_medians) rc_D_medians_A, rc_D_medians_k = \ polyfit(rc_D_difficulties, rc_D_medians, 1) plt.plot(rc_D_difficulties, [rc_D_medians_A*S + rc_D_medians_k for S in rc_D_difficulties], '-r') plt.title('$D$ vs. Median Mining Time (Linear)') plt.ylabel('Seconds') plt.xlabel('$D$') plt.show() rc_D_medians_C = rc_D_medians_A / np.log(2) print "Estimated Ĉ:" print " using S medians:", rc_S_medians_C print " using D medians:", rc_D_medians_C Estimated Ĉ: using S medians: 2.22365345274e-05 using D medians: 3.09285469223e-05 As we can see, the results of both estimations of $\hat{C}$ are similar. ### Mean and Variance¶ The next two statistics we investigate are the mean and variance. Given our model, these are given as follows: $$\begin{eqnarray*} \mathbb{E}[T] & = & \hat{C} S D \end{eqnarray*}$$ $$\begin{eqnarray*} \operatorname{Var}[T] & = & \hat{C}^2 S^2 (D^2 - D) \end{eqnarray*}$$ The mean $\mathbb{E}[T]$ is a linear function of both $S$ and $D$, respectively, and the variance $\operatorname{Var}[T]$ is a quadratic function of both $S$ and $D$. In every case another estimate of $\hat{C}$ can be recovered using regresion. In [8]: fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,5)) rc_S_means = map(np.mean, rc_S_samples) rc_D_means = map(np.mean, rc_D_samples) ax1.plot(rc_S_difficulties, rc_S_means) ax2.plot(rc_D_difficulties, rc_D_means) rc_S_vars = map(np.var, rc_S_samples) rc_D_vars = map(np.var, rc_D_samples) ax3.plot(rc_S_difficulties, rc_S_vars) ax4.plot(rc_D_difficulties, rc_D_vars) rc_S_means_A, rc_S_means_k = \ polyfit(rc_S_difficulties, rc_S_means, 1) ax1.plot(rc_S_difficulties, [rc_S_means_A*S + rc_S_means_k for S in rc_S_difficulties], '-r') rc_S_means_C = rc_S_means_A / 2 # Recall that we set D=2 for these samples rc_D_means_C, rc_D_means_k = \ polyfit(rc_D_difficulties, rc_D_means, 1) ax2.plot(rc_D_difficulties, [rc_D_means_C*D + rc_D_means_k for D in rc_D_difficulties], '-r') rc_S_vars_A, rc_S_vars_B, rc_S_vars_k = \ polyfit(rc_S_difficulties, rc_S_vars, 2) ax3.plot(rc_S_difficulties, [rc_S_vars_A*S*S + rc_S_vars_B*S + rc_S_vars_k for S in rc_S_difficulties], '-r') rc_S_vars_C = np.sqrt(rc_S_vars_A / 2) # Recall that we set D=2 for these samples rc_D_vars_A, rc_D_vars_B, rc_D_vars_k = \ polyfit(rc_D_difficulties, rc_D_vars, 2) ax4.plot(rc_D_difficulties, [rc_D_vars_A*D*D + rc_D_vars_B*D + rc_D_vars_k for D in rc_D_difficulties], '-r') rc_D_vars_C = np.sqrt(rc_D_vars_A) ax1.set_title('$S$ vs. Mean Mining Time (Linear)') ax1.set_ylabel('Seconds') ax1.set_xlabel('$S$') ax2.set_title('$D$ vs. Mean Mining Time (Linear)') ax2.set_ylabel('Seconds') ax2.set_xlabel('$D$') ax3.set_title('$S$ vs. Mining Time Variance (Quadratic)') ax3.set_ylabel('Seconds') ax3.set_xlabel('$S$') ax4.set_title('$D$ vs. Mining Time Variance (Quadratic)') ax4.set_ylabel('Seconds') ax4.set_xlabel('$D$') plt.show() print "Estimated Ĉ:" print " using S medians:", rc_S_medians_C print " using S means:", rc_S_means_C print " using S variance:", rc_S_vars_C print " using D medians:", rc_D_medians_C print " using D means:", rc_D_means_C print " using D variance:", rc_D_vars_C Estimated Ĉ: using S medians: 2.22365345274e-05 using S means: 1.14086116496e-05 using S variance: 1.30566190394e-05 using D medians: 3.09285469223e-05 using D means: 3.07988837849e-05 using D variance: 3.17806087131e-05 ### Statistical Hypothesis Testing¶ As we can see from above there are a variety of ways to estimate $\hat{C}$. From our estimate we can perform statistical hypothesis testing using the closed form cummulative distribution function and the Kolmogorov-Smirnov (KS) test, as in Wampler-Doty (2014). In [9]: import scipy.stats import pandas C = rc_S_medians_C S_data = [] for S, samples in zip(rc_S_difficulties, rc_S_samples): cdf = lambda t: 1 - (1 - (1. / 2))**(t / (S * C)) ks_statistic, p_value = scipy.stats.kstest(samples, cdf) S_data.append({"Difficulty D": 2, "Difficulty S": S, "KS Statistic": ks_statistic, "P-Value": p_value}) pandas.DataFrame.from_records(S_data) Out[9]: Difficulty D Difficulty S KS Statistic P-Value 0 2 10 0.377975 0 1 2 60 0.313981 0 2 2 110 0.314904 0 3 2 160 0.304511 0 4 2 210 0.302906 0 5 2 260 0.296714 0 6 2 310 0.298224 0 7 2 360 0.295487 0 In [10]: D_data = [] for D, samples in zip(rc_D_difficulties, rc_D_samples): cdf = lambda t: 1 - (1 - (1. / D))**(t / C) ks_statistic, p_value = scipy.stats.kstest(samples, cdf) D_data.append({"Difficulty D": D, "Difficulty S": 1, "KS Statistic": ks_statistic, "P-Value": p_value}) pandas.DataFrame.from_records(D_data) Out[10]: Difficulty D Difficulty S KS Statistic P-Value 0 1 1 1.000000 0.000000e+00 1 51 1 0.159877 0.000000e+00 2 101 1 0.165789 0.000000e+00 3 151 1 0.137124 0.000000e+00 4 201 1 0.130921 1.998401e-15 5 251 1 0.147080 0.000000e+00 6 301 1 0.108973 8.504752e-11 7 351 1 0.148628 0.000000e+00 8 401 1 0.110684 3.990630e-11 9 451 1 0.119436 6.905587e-13 10 501 1 0.147076 0.000000e+00 Lower P-values imply that the geometric distribution model is more likely to be correct. ### Mining Pool Statistics¶ We next turn to the statistics of mining pools. The following result effectively characterizes a mining pool as behaving like a single miner: Lemma: Fix $S$ and $D$. Let $\{T_i : i \in \mathcal{I}\}$ be a pool of miners, each with geometrically distributed mining times, and each with a respective compute power $\hat{C}_i$. Then $T_\min = \min_{i \in \mathcal{I}} T_i$ follows a geometric random distribution with CDF: $$\mathbb{P}[T_\min < t] \approx 1 - (1 - 1/D)^{x / (S \hat{C}_{pooled})}$$ With a pooled compute power $\hat{C}_{pooled} = \left({\sum_{i\in\mathcal{I}}\hat{C}_i^{-1}}\right)^{-1}$ Proof. We prove the result in the special case of just two miners, since the arbitrary case is a straightforward generalization. Let their compute powers be $\hat{C}_1$ and $\hat{C}_2$, respectively. We have that $$\mathbb{P}[T_\min < t] = 1 - \mathbb{P}[T_1 \geq t\ \&\ T_2 \geq t]$$ Assuming the two times are independent we have $$\begin{eqnarray*} \mathbb{P}[T_1 \geq t\ \&\ T_2 \geq t] & = & \mathbb{P}[T_1 \geq t]\mathbb{P}[ T_2 \geq t] \\ & = & (1 - 1/D)^{x / (S \hat{C}_{1})}(1 - 1/D)^{x / (S \hat{C}_{2})} \\ & = & (1 - 1/D)^{(x / S) (1/\hat{C}_{1} + 1/\hat{C}_{2})} \\ \end{eqnarray*}$$ We want $1/\hat{C}_{pooled} = 1/\hat{C}_{1} + 1/\hat{C}_{2}$, hence $$\hat{C}_{pooled} = (\hat{C}_{1}^{-1} + \hat{C}_{2}^{-1})^{-1}$$ $\Box$ It is helpful to look at special cases in understanding the above formula. Imagine that the pool consisted of $n$ miners each taking exactly 1 $\mu$s to mine a coin at $D = S = 1$. Then the pool would effectively be able to mine a coin at $1/n$ $\mu$s. Of course, the result holds even in extremely heterogeneous pools, it is just harder to understand the effective compute power with a simple closed form. One problem the approximation presented in the above lemma is that it neglects the fact that $\mathbb{P}[T < \min_{i\in\mathcal{I}}S \hat{C}_i] = 0$. Fundamentally, the compute pool cannot mine any faster than its fastest miner. In the classic BitCoin model this didn't matter, since effectively $S=1$ and $\min_{i\in\mathcal{I}}\hat{C}_i$ can be taken to be small. However, since Random Circuit has the option of enforcing $S$ to be arbitrarily large, this factor must be taken into account. ### Towards Variance Control¶ Together, mean and variance give rise to two equations with two free variables. This suggests an alternative approach to BitCoin's difficulty model, which relies entirely on controlling $\mathbb{E}[T]$. Random Circuit has the possibility of controlling both $\mathbb{E}[T_{pooled}]$ and $\operatorname{Var}[T_{pooled}]$. Let $\mu$ be the target average mining time, and let $\sigma^2$ be the target mining variance. Solving the mean and average equations above gives: $$\begin{eqnarray*} D & = & \frac{\mu^2}{\mu^2 - \sigma^2} \\ S & = & \frac{\mu^2 - \sigma^2}{\hat{C}_{pooled} \mu} \end{eqnarray*}$$ This is somewhat intuitive; demanding $\sigma = 0$ requires that $D = 1$, which means that the miner always succeeds on their first try and mining time is completely deterministic. By the same token if mining is completely deterministic then we can still achieve our expected mining time target by just calculating how many steps it would take to run out the clock, which is given by $\mu / \hat{C}$. The chief problem with this proposal is that for small values of $D$, this invalidates the exponential model that Nakamoto appeals to in his classic argument for the safety of the block chain protocol. Moreover, this likely invalidates BitCoin's technique of estimating $\hat{C}_{pooled}$ using an average. ## Conclusions¶ Random Circuit gives a well behaved statistical model, and has the novelty of offering not one but two parameters for difficulty control. More research is required for determining how Random Circuit's second difficulty parameter $S$ can be leveraged effectively in a rate control mechanism that does not compromise blockchain security. Preliminary results appear promising.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654038906097412, "perplexity": 3277.7986753718455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00249.warc.gz"}
http://mathhelpforum.com/calculus/146168-simple-limit-problem-print.html
# A simple limit problem. • May 23rd 2010, 07:21 PM guidol92 A simple limit problem. Hello. I have to find the limit as x approaches 0 of the function [ [(1/(3+x)] -(1/3) ] / x i tried direct substitution and to rationalize the denominator and numerator, but neither works... what shoul i do? • May 23rd 2010, 07:30 PM Prove It Quote: Originally Posted by guidol92 Hello. I have to find the limit as x approaches 0 of the function [ [(1/(3+x)] -(1/3) ] / x i tried direct substitution and to rationalize the denominator and numerator, but neither works... what shoul i do? $\frac{\frac{1}{3 + x} - \frac{1}{3}}{x} = \frac{\frac{3 - (3 + x)}{3(3 + x)}}{x}$ $= \frac{-\frac{x}{3(3 + x)}}{x}$ $= -\frac{1}{3(3 + x)}$. So $\lim_{x \to 0}\left(\frac{\frac{1}{3 + x} - \frac{1}{3}}{x}\right) = \lim_{x \to 0}\left[-\frac{1}{3(3 + x)}\right]$ $= -\frac{1}{3(3)}$ $= -\frac{1}{9}$. • May 24th 2010, 12:51 AM mr fantastic Quote: Originally Posted by guidol92 Hello. I have to find the limit as x approaches 0 of the function [ [(1/(3+x)] -(1/3) ] / x i tried direct substitution and to rationalize the denominator and numerator, but neither works... what shoul i do? It can also be recognised as having the same form as the derivative from first principles of f(t) = 1/t evaluated at t = 3.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917951822280884, "perplexity": 1381.2936748073876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990123.20/warc/CC-MAIN-20150728002310-00028-ip-10-236-191-2.ec2.internal.warc.gz"}
http://community.teradata.com/t5/Database/Testing-environment-changes/td-p/53330
Database Enthusiast ## Testing environment changes Hi! I have a requirement where we need to replace the database names in the BTEQ script with the actual databse name depending on the enviroment. Example - 1. db1.tb1 should be replaced with 'USR.tb1' 2. db2.tb1 should be replaced with 'USER.tb1' . This is for one environment. In other environment we need to replace with other names. We are thinking of having all the database names and variables in one file and pass this file depending on the environment. Can this be done in teradata? Thanks! 2 REPLIES Fan ## Re: Testing environment changes I'm also looking for similar requirement. We are facing many challenges when moving from Oracle to TD, specially on these little things. Appreciate any ideas ??? Enthusiast ## Re: Testing environment changes What is the platform you are using to trgger teradata? Is it Unix or someting else? If unix you have a parameter 'hostname' which gives you the environment server on which it is trigerred. By that you can write your code.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9367859363555908, "perplexity": 3327.411703475906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864546.30/warc/CC-MAIN-20180622143142-20180622163142-00507.warc.gz"}
https://support.google.com/docs/answer/2698842?hl=en-GB&topic=19435&parent=19431&rd=2
# Format text on a slide Google Slides makes it easy to change the font or color of text. When you first create a presentation in Google Slides, the default font is Arial. To change it, simply select a different font from the font drop-down menu in the toolbar. If you'd like to choose a different font for a particular section of the presentation, select the text you'd like to change, click the font menu, and select a font. The changes are applied to the selected text. At the top of your font list, you will see a section of your most recently used fonts. You can also add fonts to your font list if your language setting is one of the following languages: • Catalan • Danish • Dutch • English • Finnish • French • German • Italian • Norwegian • Portuguese 1. At the bottom of your font list, select More fonts. 2. A font picker will let you "shop" for web fonts for your font list. 3. Click a font to add it to your "My fonts" list. 4. Sort fonts in the list by using the Sort and Show drop-down menus, or use the search box to search for a specific font. 5. Click OK when you’re finished.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8987129926681519, "perplexity": 3146.5663126301492}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278417.79/warc/CC-MAIN-20160524002118-00096-ip-10-185-217-139.ec2.internal.warc.gz"}
https://projecteuclid.org/euclid.aoms/1177704466
## The Annals of Mathematical Statistics ### Location and Scale Parameters in Exponential Families of Distributions Thomas S. Ferguson #### Abstract Location and scale parameters, on the one hand, and distributions admitting sufficient statistics for the parameters, on the other, have played a large role in the development of modern statistics. This paper deals with the problem of finding those distributions involved in the intersection of these two domains. In Sections 2 through 4 the preliminary definitions and lemmas are given. The main results found in Theorems 1 through 4 may be considered as a strengthening of the results of Dynkin [3] and Lindley [8]. Theorem 1 discovers the only possible forms assumed by the density of an exponential family of distributions having a location parameter. These forms were discovered by Dynkin under the superfluous assumptions that a density with respect to Lebesgue measure exist and have piecewise continuous derivatives of order one. Theorem 2 consists of the specialization of Theorem 1 to one-parameter exponential families of distributions. The resulting distributions, as found by Lindley, are either (1), the distributions of $(1/\gamma) \log X$, where $X$ has a gamma distribution and $\gamma \neq 0$, or (2), corresponding to the case $\gamma = 0$, normal distributions. In Theorem 3, the result analogous to Theorem 2 for scale parameters is stated. In Theorem 4, those $k$-parameter exponential families of distributions which contain both location and scale parameters are found. If the parameters of a two-parameter exponential family of distributions may be taken to be location and scale parameters, then the distributions must be normal. The final section contains a discussion of the family of distributions obtained from the distributions of Theorem 2 and their limits as $\gamma \rightarrow \pm \infty$. These limits are "non-regular" location parameter distributions admitting a complete sufficient statistic. This family of distributions is a main class of distributions to which Basu's theorem (on statistics independent of a complete sufficient statistic) applies. Furthermore, this family is seen to provide a natural setting in which to prove certain characterization theorems which have been proved separately for the normal and gamma distributions. Concluding the section is a theorem which, essentially, characterizes the gamma distribution by the maximum likelihood estimate of its scale parameter. #### Article information Source Ann. Math. Statist., Volume 33, Number 3 (1962), 986-1001. Dates First available in Project Euclid: 27 April 2007 https://projecteuclid.org/euclid.aoms/1177704466 Digital Object Identifier doi:10.1214/aoms/1177704466 Mathematical Reviews number (MathSciNet) MR141184 Zentralblatt MATH identifier 0109.37605 JSTOR
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.921252429485321, "perplexity": 336.5634419303362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670731.88/warc/CC-MAIN-20191121050543-20191121074543-00118.warc.gz"}
https://era.library.ualberta.ca/items/2df5cef6-7a12-4636-b162-4bf57f2dd3d6
Usage • 8 views # AUTOMORPHISMS AND TWISTED FORMS OF DIFFERENTIAL LIE CONFORMAL SUPERALGEBRAS • Author / Creator Chang, Zhihua • Given a conformal superalgebra A over an algebraically closed field k of characteristic zero, a twisted loop conformal superalgebra L based on A has a differential conformal superalgebra structure over the differential Laurent polynomial ring D. In this context, L is a D_m/D–form of A \otimes D with respect to an étale extension of differential rings D_m/D, and hence is a \hat{D}/D–form of A. Such a perspective reduces the problem of classifying the twisted loop conformal superalgebras based on A to the computation of the non-abelian cohomology set of its automorphism group functor. The primary goal of this dissertation is to classify the twisted loop conformal superalgebras based on A when A is one of the N=1,2,3 and (small or large) N=4 conformal superalgebras. To achieve this, we first explicitly determined the automorphism group of the \hat{D}–conformal superalgebra A\otimes\hat{D} in each case. We then computed the corresponding non-abelian continuous cohomology set, and obtained the classification of our objects up to isomorphism over D. Finally, by applying the so-called “centroid trick”, we deduced from isomorphisms over D to isomorphisms over k, thus accomplishing the classification over k. Additionally, in order to understand the representability of the automorphism group functors of the N=1,2,3 and small N = 4 conformal superalgebras, we discuss the (R,d)–points of these automorphism group functors for an arbitrary differential ring (R,d). In particular, if R is an integral domain with certain additional assumptions in the small N = 4 case), these automorphism groups have been completely determined. • Subjects / Keywords 2013-11 • Type of Item Thesis • Degree Doctor of Philosophy • DOI https://doi.org/10.7939/R3FT8DW2S This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law. • Language English • Institution University of Alberta • Degree level Doctoral • Department • Department of Mathematical and Statistical Sciences • Specialization • Mathematics • Supervisor / co-supervisor and their department(s) • Pianzola, Arturo (Department of Mathematical and Statistical Sciences) • Examining committee members and their departments • Yui, Noriko (Queen's University, Department of Mathematics and Statistics) • Bouchard, Vincent (Department of Mathematical and Statistical Sciences) • Kuttler, Jochen (Department of Mathematical and Statistical Sciences) • Chernousov, Vladimir (Department of Mathematical and Statistical Sciences)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8589299321174622, "perplexity": 1990.5803077572837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859904.56/warc/CC-MAIN-20180617232711-20180618012711-00492.warc.gz"}
https://proofwiki.org/wiki/Combination_Theorem_for_Continuous_Functions/Product_Rule
# Combination Theorem for Continuous Functions/Product Rule $f g$ is continuous on $S$ $f g$ is continuous on $S$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977966547012329, "perplexity": 475.1890278702193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153860.57/warc/CC-MAIN-20210729140649-20210729170649-00221.warc.gz"}
https://planetmath.org/burnsidestheorem
# Burnside’s Theorem ###### Theorem 1 (Burnside’s Theorem). Let $G$ be a simple group, $\sigma\in G$. Then the number of conjugates of $\sigma$ is not a prime power (unless $\sigma$ is its own conjugacy class). Proofs of this theorem are quite difficult and rely on representation theory. From this we immediately get ###### Corollary 1. A group $G$ of order $p^{a}q^{b}$, where $p,q$ are prime, cannot be a nonabelian simple group. ###### Proof. Suppose it is. Then the center of $G$ is trivial, $\{e\}$, since the center is a normal subgroup and $G$ is simple nonabelian. So if $C_{i}$ are the nontrivial conjugacy classes, we have from the class equation that $\lvert G\rvert=1+\sum\lvert C_{i}\rvert$ Now, each $\lvert C_{i}\rvert$ divides $\lvert G\rvert$, but cannot be $1$ since the center is trivial. It cannot be a power of either $p$ or $q$ by Burnside’s theorem. Thus $pq\mid\lvert C_{i}\rvert$ for each $i$ and thus $\lvert G\rvert\equiv 1\pmod{pq}$, which is a contradiction. ∎ Finally, a corollary of the above is known as the Burnside $p$-$q$ Theorem (http://planetmath.org/BurnsidePQTheorem). ###### Corollary 2. A group of order $p^{a}q^{b}$ is solvable. Title Burnside’s Theorem BurnsidesTheorem 2013-03-22 16:38:14 2013-03-22 16:38:14 rm50 (10146) rm50 (10146) 4 rm50 (10146) Theorem msc 20D05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 23, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901694655418396, "perplexity": 474.10199791168094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00104.warc.gz"}
https://infoscience.epfl.ch/record/153985
Swift Algorithms for Repeated Consensus We introduce the notion of a swift algorithm. Informally, an algorithm that solves the repeated consensus is swift if, in a partial synchronous run of this algorithm, eventually no timeout expires, i.e., the algorithm execution proceeds with the actual speed of the system. This definition differs from other efficiency criteria for partial synchronous systems. Furthermore, we show that the notion of swiftness explains the reason why failure detector based algorithms are typically more efficient than round-based algorithms, since the former are naturally swift while the later are naturally non-swift. We show that this is not an inherent difference between the models, and provide a round structure implementation that is swift, therefore performing similarly to failure detector algorithms while maintaining the advantages of the round model. Published in: Proceedings of the 29th IEEE International Symposium on Reliable Distributed Systems Presented at: 29th IEEE International Symposium on Reliable Distributed Systems, New Delhi, India, October 31 – November 3 Year: 2010 Publisher: Ieee Computer Soc Press, Customer Service Center, Po Box 3014, 10662 Los Vaqueros Circle, Los Alamitos, Ca 90720-1264 Usa Keywords: Laboratories:
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758492469787598, "perplexity": 1787.98216750643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815318.53/warc/CC-MAIN-20180224033332-20180224053332-00016.warc.gz"}
https://ca.assignmentfirst.com/dai-xie-jia-ge-zu-zhi-gu-zai-sheng/
本篇代写价格-组织骨再生讲了PCL被用作组织骨再生的重要成分(Meredith et al. 2003)。组织骨再生是一个亟待解决的重要问题。通过改变PCL的官能团可以提高再生的效率。它有助于提高病人的生活质量。研究发现,所开发的支架具有独特的、优越的物理、机械和生物特性,使其在骨组织再生中发挥作用(Imam et al. 2010)。因此,PCL官能团修饰使组织骨再生具有更大的实用性和功能性。本篇代写价格文章由加拿大第一论文 Assignment First辅导网整理,供大家参考阅读。 For the tissue bone regeneration, the PCL was used as an important ingredient (Meredith et al. 2003). Tissue bone regeneration is an important issue that needs to be contended. The changing of the functional groups of the PCL was found to increase the utility in the regeneration. It aids in the quality of life to the patients. It was found that the developed scaffold had unique and superior physical, mechanical, biological properties to make them useful in the bone tissue regeneration (Imam et al. 2010). Owing to this the the PCL functional groups modification leads to greater utility and functionality of the tissue bone regeneration. The surface of the nanoparticle can be chemically modified to improve the compatibilities with the matric. The N-Octadecyl isocyanate was used as a grafting agent. The PCL was used in the nanocomposite films that was reinforced with the sial whiskers that was produced by the film casting (Anderson et al. 2008). There were significant differences that was reporting according to the nature of the nanoparticle. It was also found that the chemical treatment was found to improve the properties of the Nano composite by altering the thermal behavior of the compounds (Siqueira et al. 2008). Hence N-Octadecyl isocyanate has been used in order to use it as a grafting agent for the formation of the nano composite films. There was copolymerization of the lactide with the lactone kind of monomers. The functional group of the compound was found to be malic acid. The copolymerization of the lactide with the macromolecular monomer of dextran was analyzed. The cell culture of the technology proved to be efficient in the bulk and surface modification of the PLA in the tissue engineering (Wang et al. 2005) . There is the influence of the hydrophilicity and the roughness of the NFM for the biological performance of the compound (Butler, Goldstein, and Guilak 2000). Despite the morphological similarity the natural extracellular matrix, they are found to contribute to the cellular performance (Martins 2009). This is an ongoing research that is yet to be established and used commercialy. 对于加拿大论文代写如何选择先给大家介绍到这里,如果您有论文代写价格需要我们帮助的请联系我们网站客服,我们会24小时在线为您服务。
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059139251708984, "perplexity": 1635.465649223702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00013.warc.gz"}
https://forums.canadiancontent.net/threads/a-faq-is-needed.96094/page-2#post-1737805
A FAQ is needed. Goober Hall of Fame Member Jan 23, 2009 24,691 116 63 Moving the rolled eyes were for the fact that you denied doing what you obviously did. You didn't check to see if I was right, you just jumped in and flat out denied and by extension called me a liar. Oh the comma -No, I did not. or I did not know. And when have I called you a liar? gerryh Time Out Nov 21, 2004 25,756 294 83 Oh the comma -No, I did not. or I did not know. And when have I called you a liar? sorry, thank you. Makes all the difference. Didn't he answer correctly that he did not realize...........8O No, he did not. Grammar and punctuation are amazing things. Used improperly they can change the meaning of a sentence completely. Goober Hall of Fame Member Jan 23, 2009 24,691 116 63 Moving sorry, thank you. Makes all the difference. No, he did not. Grammar and punctuation are amazing things. Used improperly they can change the meaning of a sentence completely. You nit pick to amazing levels of stupidity - Play it what ever way you want- Next my question - When have I ever called you a liar? gerryh Time Out Nov 21, 2004 25,756 294 83 You nit pick to amazing levels of stupidity - Play it what ever way you want- Next my question - When have I ever called you a liar? DaSleeper Trolling Hypocrites May 27, 2007 33,520 1,597 113 Northern Ontario, Just when I thought a spell checker was enough :comma: I will now have to get my legal assistant to proof read everything I post.:roll:8O Goober Hall of Fame Member Jan 23, 2009 24,691 116 63 Moving My question is not. Nov 21, 2004 25,756 294 83 Jan 23, 2009 24,691 116 63 Moving And. Angstrom Hall of Fame Member May 8, 2011 10,659 0 36 Is it Gerry's menstrual cycle period? gerryh Time Out Nov 21, 2004 25,756 294 83 and what? I already said that the post is moot....... does not apply........ you didn't call me a liar....... do I need to use smaller words? Larger font? Goober Hall of Fame Member Jan 23, 2009 24,691 116 63 Moving and what? I already said that the post is moot....... does not apply........ you didn't call me a liar....... do I need to use smaller words? Larger font? You the one that always looks for a reply- well in this case it would be appreciated by me - When have I ever called you a liar? gerryh Time Out Nov 21, 2004 25,756 294 83 You the one that always looks for a reply- well in this case it would be appreciated by me - When have I ever called you a liar? How many times do I need to reply? Goober Hall of Fame Member Jan 23, 2009 24,691 116 63 Moving How many times do I need to reply? To the question I asked. If you prefer not to answer a direct question, then say so. I cannot recall ever calling you a liar- For you to jump to that conclusion was surprising. Next- I think you jumped the gun? gerryh Time Out Nov 21, 2004 25,756 294 83 the rolled eyes were for the fact that you denied doing what you obviously did. You didn't check to see if I was right, you just jumped in and flat out denied and by extension called me a liar. Oh the comma -No, I did not. or I did not know. And when have I called you a liar? sorry, thank you. Makes all the difference. No, he did not. Grammar and punctuation are amazing things. Used improperly they can change the meaning of a sentence completely. and what? I already said that the post is moot....... does not apply........ you didn't call me a liar....... do I need to use smaller words? Larger font? To the question I asked. If you prefer not to answer a direct question, then say so. I cannot recall ever calling you a liar- For you to jump to that conclusion was surprising. Next- I think you jumped the gun? what more do you need goober? DaSleeper Trolling Hypocrites May 27, 2007 33,520 1,597 113 Northern Ontario, Damn those boring Saturdays nights.........:lol: JLM Hall of Fame Member Nov 27, 2008 75,283 527 113 Vernon, B.C. Don't the moderators appear in green at the bottom of the home page? Sal Hall of Fame Member Sep 29, 2007 17,135 33 48 Don't the moderators appear in green at the bottom of the home page? I have looked for a colour map to understand what the various colours mean but couldn't find one. I think green is mod, and Andem is gold and banned is pink or red... or maybe that is just timed out. JLM Hall of Fame Member Nov 27, 2008 75,283 527 113 Vernon, B.C. I have looked for a colour map to understand what the various colours mean but couldn't find one. I think green is mod, and Andem is gold and banned is pink or red... or maybe that is just timed out. And what is "blue"? Sal Hall of Fame Member Sep 29, 2007 17,135 33 48 And what is "blue"? yeah, not sure about that because Ton appears as blue and he is a mod too. SLM The Velvet Hammer Mar 5, 2011 29,151 2 36 London, Ontario Don't the moderators appear in green at the bottom of the home page? Only if they've been online in the past 24 hours. The names at the bottom of the page, if I'm not mistaken, are for members currently online. And what is "blue"? Sub-forum specific mods I think. Like a "Demi-Mod", lol.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8102394342422485, "perplexity": 4181.423987400443}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00287.warc.gz"}
https://en.wikibooks.org/wiki/Circuit_Theory/Convolution_Integral/Examples/example49/current
# Circuit Theory/Convolution Integral/Examples/example49/current series LRC circuit ... find voltage across the resistor Given that the source voltage is (2t-3t2), find voltage across the resistor. Here focused on finding current first: ## Contents ### Transfer Function ${\displaystyle H(s)={\frac {i}{V_{S}}}={\frac {1}{4+s+{\frac {1}{0.25s}}}}}$ simplify(1/(4 + s + 1/(0.25*s))) ${\displaystyle H(s)={\frac {s}{s^{2}+4s+4}}}$ ### Homogeneous Solution solve(s^2 + 4.0*s + 4.0,s) There are two equal roots at s = -2, so the solution has the form: ${\displaystyle i_{h}(t)=Ae^{-2t}+Bte^{-2t}+C_{1}}$ ### Particular Solution After a long time attached to a unit step function source, the inductor has shorted and the capacitor has opened. All the drop is across the capacitor. The current is zero. ${\displaystyle i_{p}=0}$ ### Initial Conditions So far the full equation is: ${\displaystyle i(t)=Ae^{-2t}+Bte^{-2t}+C_{1}}$ Initial current through the series leg is zero because of the assumed initial conditions of the inductor. This means: ${\displaystyle i(0)=0=A+C_{1}}$ Assuming the initial voltage across the capacitor is zero, then initial voltage drop has to be across the inductor. ${\displaystyle V_{L}(t)=L{di(t) \over dt}=(-2A+B)e^{-2t}-2Bte^{-2t}}$ ${\displaystyle V_{L}(0)=1=-2A+B}$ After a long period of time, the current still has to be zero so: ${\displaystyle C_{1}=0}$ This means that: ${\displaystyle A=0}$ ${\displaystyle B=1}$ ${\displaystyle i(t)=te^{-2t}}$ ${\displaystyle V_{r}(t)=4te^{-2t}}$ The 4 is lost in the numerator of the transfer function if a transfer function is written for Vr initially. The 4 does not make it into the homogeneous solution. In second order analysis, never write a transfer function for a resistor. ### Impulse Solution Taking the derivative of the above get: ${\displaystyle V_{R}\delta (t)=4e^{-2t}-8te^{-2t}}$ ### Convolution Integral ${\displaystyle V_{R}(t)=\int _{0}^{t}(4e^{-2(t-x)}-8(t-x)e^{-2(t-x)})(2x-3x^{2})dx}$ f := (4*exp(-2*(t-x)) - 8*(t-x)exp(-2*(t-x)))*(2*x-3*x^2); S :=int(f,x=0..t) ${\displaystyle V_{R}(t)=8-8e^{-2t}-10te^{-2t}-6t}$ There will not be any constant since again, V_R(t) = 0 after a long time ... and the capacitor opens.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886002779006958, "perplexity": 1483.0607685338528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00422.warc.gz"}
https://www.physicsforums.com/threads/simple-shell.64452/
# Simple shell 1. Feb 21, 2005 ### pattiecake "simple" shell I know this is relatively simple, but I'm a little rusty. Could someone help me out? We want to find the volume of the solid obtained by rotating the region bounded by the curves y=x^4 and y=1 about the line y=7 using the cylindrical shell method. According to my book the general formula for cylindrical shell method is: V=(circumference)(height)(thickness) or (2pi*r)(r*h)(delta r). So I set up the integral as (2pi) integral [7x -x^5] dx. The boundaries are found by setting x^4=1, which yields -1 and 1. After differentiating we have 2pi[7/2x^2-1/6x^6] from -1 to 1. Because of my boundaries, I initially got the volume=0, but I don't think that's possible. I assumed the minus sign should be a plus, but after adding and multiplying by 2pi, I still got the wrong answer. Any clues would be much appreciated! Thanks! Last edited: Feb 21, 2005 2. Feb 21, 2005 ### Galileo You're rotating the region about the line y=7 right? Then if you're using cylindrical shells, you should integrate wtr y (from 0 to 1). The radius of the shell is $7-y^{1/4}$. Try setting up the integral again. Similar Discussions: Simple shell
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907291889190674, "perplexity": 703.827128071449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00627-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/45059-suare-roots-squaring-print.html
# Suare Roots and Squaring • August 1st 2008, 03:31 AM magentarita Suare Roots and Squaring In what way is squaring the inverse of square roots or is it vice-versa? Our teacher said that they are inverses of each other but did not explain why. The first word in the title of this post should be Squaring NOT Suare...spelling error. • August 1st 2008, 04:21 AM nikhil Let f(x)=y=x^2 then x=y^(1/2) or f^-1(x)=x^(1/2) we only consider positive root otherwise result will be a relation not function. Hope this helps • August 1st 2008, 06:06 AM TKHunny Think long and hard on these two statements: $\left(\sqrt{x}\right)^{2} = x$ $\sqrt{x^{2}} = |x|$ • August 1st 2008, 06:20 AM magentarita TKHunny I thank both of you. TKHunny: I understand why [sqrt{x}]^2 = x but I do not understand what the absolute value of x or |x| has to do with sqrt{x^2}. What's the connection in the second case? Thanks • August 1st 2008, 11:43 AM masters Quote: Originally Posted by magentarita I thank both of you. TKHunny: I understand why [sqrt{x}]^2 = x but I do not understand what the absolute value of x or |x| has to do with sqrt{x^2}. What's the connection in the second case? Thanks http://www.mathhelpforum.com/math-he...f46463d2-1.gif In this case Rita, x could very well be a negative number. So, to make sure that you obtain the positive principal root, you must take the absolute value. Just remember this. If you take an even root of any variable raised to any power and the resulting variable has an odd power, you must take the absolute value of the variable. Example: $\sqrt[4]{16x^5y^6z^8}=2|xy|z^2\sqrt[4]{xy^2}$ • August 2nd 2008, 07:24 AM magentarita I get it... I get it now. Thanks
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872218132019043, "perplexity": 1324.391210675753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642168/warc/CC-MAIN-20140305060722-00080-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quantum-field-theory-in-curved-spacetime.266525/
# Quantum Field Theory in Curved Spacetime? 1. Oct 23, 2008 ### asimov42 Hi all, I have a question about the formulation of quantum field theories in curved spacetime. I'm still learning, and so I might not articulate this very well, but I'm wondering: If a region of spacetime can warp and curve, dynamically changing its shape in response to changes in energy density, then I assume a quantum field defined over the same region would also have to expand / contract? Can the field be thought of as something 'physical', and if so, in the case of expansion, where does the extra 'quantity' of field (for lack of a better description) come from? Or is it better to think of the field as being 'stretched' and 'compressed'? Thanks all. 2. Oct 23, 2008 ### Fredrik Staff Emeritus Think of it as a function that has spacetime as its domain of definition. 3. Oct 24, 2008 ### richard14 question? i am a new student in physics, but is it true that the big bang breaks one of the more imortants laws of physics the conservation of momentum.Since a singularity a creative point of energy exploted in space in order to be done there were suppose to be two particles, a particle and its nemesis to create a big explosion this is called annihilation but were do the pair antipair came from, and if the singularity was pure energy were did it come from E=MC2 4. Oct 24, 2008 ### Fredrik Staff Emeritus The right place to ask that question is in the relativity forum, in a new thread, but there isn't much discussion going on here anyway, so I don't think I'll ruin this thread by answering your question. The "big bang theory" (the simplest version of it anyway) is just a part of the FLRW solutions of Einstein's equation. They define a coordinate system that assigns a time coordinate t>0 to every event in spacetime. (No event has t=0 or t<0). The "big bang" is the limit $t\rightarrow 0$. When you take that limit, some variables go to infinity (e.g. density and pressue) and others go to zero (e.g. the distance between two arbitrary points in "space"). So the big bang isn't an explosion. It's not a point in spacetime. It's not "a creative point of energy" or "pure energy". It's just a name for the funny stuff that happens when you take a certain limit. 5. Oct 25, 2008 ### asimov42 But should the field be thought of as something 'physical'? Or simply as a mathematical representation for some underlying phenomenon? 6. Oct 26, 2008 ### Peter Morgan You should think of the quantum field in curved spacetime in the same way as you think of quantum field theory in Minkowski space -- whatever that is. It's an algebra of operators on a curved spacetime background instead of on a flat spacetime background. The background is non-dynamical, so we can effectively take the background as a given spacetime, otherwise you would be talking about quantum gravity and all hopes of understanding are in the future. Insofar as quantum field theory in Minkowski space is carried through in terms of solutions of the Klein-Gordon and other classical free fields on a flat Minkowski space (i.e. in terms of orthogonal modes of the free field, effectively Fourier analysis), quantum field theory in a curved spacetime is carried out in terms of solutions of the Klein-Gordon and other classical free fields on whatever curved spacetime we have chosen (ie. in terms of orthogonal modes of the free field in the curved spacetime). Wald's book, "Quantum field theory in curved spacetime and black hole thermodynamics", is a pretty good presentation. 7. Oct 26, 2008 ### asimov42 Hi Peter, Thanks - that helps a lot. Just for my own clarification (again as a complete novice), if we say that the background is non-dynamical, we mean that spacetime has a fixed structure (i.e. a fixed, invariant curvature at each point)? 8. Oct 26, 2008 ### Peter Morgan If you understood my comment, you may not be too much of a novice to try reading Wald, right? It doesn't much matter how you fix the geometric structure, but it has to be enough that "the causal behavior of the curved spacetime [is] sufficiently well-behaved that the space of solutions to the classical field equations have the same basic structure as in Minkowski space. As we shall see in section 4.1, the condition of global hyperbolicity ensures that this is the case."[Wald, from the first para of Ch.4] If you need the details, you really have to read Wald or something similar. The Wald is quite readable, and you would very likely learn some interesting stuff about QFT in flat spacetime. 9. Oct 29, 2008 ### Naty1 To express, I hope, the same thoughts another way, this can also be illustrated by noting QFT is a background dependent theory....this means we pick a fixed geometrical background and work with it....like string theory(s). In contrast, general relativity (GR) has a flexible changing "background independent" formulation....via the curvature tensor, no preselected, fixed geometry is independently chosen, instead the background changes dynamically, in the case of GR according to mass, energy and pressure changes....a quantum gravity formulation that IS background independent is loop quantum gravity, but that has not yet been reconciled with GR, and is I think still incomplete.... So the immediate answer is no; that formulation is still being sought and if formulated might combine quantum mechanics and GR......stay tuned!!!! Still plenty of opportunities for Nobel Prizes!!!! One clue that things are still a bit confused are all these formulations... if we REALLY knew what we were talking about, there would be one agreed upon formulation.... "we know much, we understand little" Last edited: Oct 29, 2008 10. Oct 29, 2008 ### Haelfix The curved spacetime is assumed to first order to be nondynamical. EG you are not computing graviton interactions and/or backreactions (although you could in principle). The approximation is pretty good unless you get energies close to the Planck scale. Beyond that, you have to do quantum gravity. The main problem is calculational. You lose many global symmetries working in curved spacetimes, and it makes things really technically challenging (which is why there are only about 3-4 famous results coming from this formalism). In general, the exact form of the metric is usually left blank until late in the calculation (eg its pretty general, with only a few limitations: eg non hyperbolic, or metrics with bad asymptotics cannot be dealt with easily). The good news is that the causal structure of the classical theory is very constraining on the quantum picture. The bad news is spin structure is really hard to deal with and a lot of ones intuition about acceptable quantum observables ceases to be mathematically well defined.. 11. Nov 8, 2008 ### asimov42 Are classical field theories also considered to be background dependent? I.e. formulated over a selected fixed geometrical background? This might be a naive question and may not make sense to ask - I admit, I'm somewhat foggy on this :-) Thanks very much everyone - I appreciate the help. 12. Nov 8, 2008 ### atyy In special relativity, the flat Minkowski metric is fixed and Maxwell's equations are solved. In general relativity, the metric is not fixed and Maxwell's equations are solved simultaneously with the Einstein field equations, so electromagnetic fields cause spacetime curvature. However, if the curvature produced by the electromagnetic fields is small, it is an excellent approximation to ignore them when solving for the metric. For example, in the solar deflection of light, the usual presentation is to fix the metric as that caused by the sun alone, then determine how light propagates on that metric. Last edited: Nov 8, 2008 13. Nov 9, 2008 ### Naty1 Good point. Loop quantum gravity uses a dynamical background but I don't know how far formulations have been developed. Lee Smolin works that area and in THE TROUBLE WITH PHYSICS disccusess problems and opportunities with different avenues of approach. He works at the PERIMETER INSTITUTE ...They have some interesting stuff available on line.. 14. Nov 15, 2008 ### Phrak aty. Re: maxwell on curved spacetime. Are you expressing fields with exterior derivatives or covariant derivatives? just curious
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568637371063232, "perplexity": 681.1105075085709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00335-ip-10-171-10-108.ec2.internal.warc.gz"}
http://treodesktop.com/standard-error/how-to-find-standard-error-for-sample-proportion.php
Home > Standard Error > How To Find Standard Error For Sample Proportion # How To Find Standard Error For Sample Proportion ## Contents Formula Used: SEp = sqrt [ p ( 1 - p) / n] where, p is Proportion of successes in the sample,n is Number of observations in the sample. The confidence interval is computed based on the mean and standard deviation of the sampling distribution of a proportion. Note that some textbooks use a minimum of 15 instead of 10.The mean of the distribution of sample proportions is equal to the population proportion ($$p$$). We then make a slight adjustment to correct for the fact that the distribution is discrete rather than continuous. Normal Distribution Calculator sp is calculated as shown below: To correct Source Copyright © 2016 The Pennsylvania State University Privacy and Legal Statements Contact the Department of Statistics Online Programs English Español Français Deutschland 中国 Português Pусский 日本語 Türk Sign in Calculators Tutorials Autoplay Wenn Autoplay aktiviert ist, wird die Wiedergabe automatisch mit einem der aktuellen Videovorschläge fortgesetzt. In practice, however, the word estimated'' is dropped and the estimated SE is simply called the SE . The standard error of this estimate is ________. ## Sample Proportion Formula Multiplying the sample size by a factor of 9 (from 40 to 360) makes the SE decrease by a factor of 3. Wenn du bei YouTube angemeldet bist, kannst du dieses Video zu einer Playlist hinzufügen. Difference between means. The standard error of this estimate is ________. It's been fixed. Popular Articles 1. It has already been argued that a proportion is the mean of a variable that is 1 when the individual has a characteristic and 0 otherwise. Population Proportion Your cache administrator is webmaster. Find a Critical Value 7. The sampling distribution of p is the distribution that would result if you repeatedly sampled 10 voters and determined the proportion (p) that favored Candidate A. It follows that the expected size of the miss is . Calculate SE Sample Proportion of Standard Deviation Proportion of successes (p)= (0.0 to 1.0) Number of observations (n)= Binomial SE of Sample proportion= Code to add this calci to your website The following tables show how to find the standard deviation. Standard Error Of Proportion Definition The value of Z.95 is computed with the normal calculator and is equal to 1.96. Wähle deine Sprache aus. Standard Error of the Sample Proportion$SE(\widehat{p})= \sqrt{\frac {p(1-p)}{n}}$If $$p$$ is unknown, estimate $$p$$ using $$\widehat{p}$$The box below summarizes the rule of sample proportions: Characteristics of the Distribution of Sample ProportionsGiven ## Standard Error Of Proportion Formula Melde dich an, um dieses Video zur Playlist "Später ansehen" hinzuzufügen. this contact form The system returned: (22) Invalid argument The remote host or network may be down. If 54 out of 360 students plan to go to graduate school, the proportion of all students who plan to go to graduate school is estimated as ________. Sample 1. P Hat Calculator T Score vs. On the average, a random variable misses the mean by one SD. The margin of error for the difference is 9%, twice the margin of error for the individual percent. http://treodesktop.com/standard-error/how-to-find-standard-error-of-proportion.php p = Proportion of successes. Transkript Das interaktive Transkript konnte nicht geladen werden. Sample Proportion Symbol Schließen Ja, ich möchte sie behalten Rückgängig machen Schließen Dieses Video ist nicht verfügbar. When you are asked to find the sample error, you're probably finding the standard error. ## In other words, the larger your sample size, the closer your sample mean is to the actual population mean. Related Calculators: Vector Cross Product Mean Median Mode Calculator Standard Deviation Calculator Geometric Mean Calculator Grouped Data Arithmetic Mean Calculators and Converters ↳ Calculators ↳ Statistics ↳ Data Analysis Top Calculators You are right…sigma squared is the variance. In the formula for the SE of , the sample size appears (i) in the denominator, and (ii) inside a squareroot. Standard Error Formula The service is unavailable. n2 = Number of observations. Figure 1. The SE uses statistics while standard deviations use parameters. Check This Out The standard deviation of the distribution of sample proportions is symbolized by $$SE(\widehat{p})$$ and equals $$\sqrt{\frac {p(1-p)}{n}}$$; this is known as thestandard error of $$\widehat{p}$$. The binomial distribution is the distribution of the total number of successes (favoring Candidate A, for example) whereas the distribution of p is the distribution of the mean number of successes. Wiedergabeliste Warteschlange __count__/__total__ Large Sample Standard Error 2 sample proportion James Gray AbonnierenAbonniertAbo beenden7474 Wird geladen... Melde dich bei YouTube an, damit dein Feedback gezählt wird. Misleading Graphs 10. A rule of thumb is that the approximation is good if both Nπ and N(1 - π) are greater than 10. Sample of voters. How to Calculate a Z Score 4. Comments are always welcome. Lane Prerequisites Introduction to the Normal Distribution, Normal Approximation to the Binomial, Sampling Distribution of the Mean, Sampling Distribution of a Proportion, Confidence Intervals, Confidence Interval on the Mean Learning Objectives Welcome to STAT 200! The sampling distribution of p is a special case of the sampling distribution of the mean. Resources by Course Topic Review Sessions Central!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9093062877655029, "perplexity": 1804.450702434176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886979.9/warc/CC-MAIN-20180117212700-20180117232700-00491.warc.gz"}
http://math.stackexchange.com/questions/56776/how-fast-can-matrix-multiplication-with-on-integers-be-performed
How fast can matrix multiplication with/on integers be performed? I would like to know how fast a matrix of integers can be multiplied by another matrix of integers. My motivation is because I've got a few ideas that seem to make the multiplication possible in around $O(n^2)$, but maybe this has already been accomplished somewhere else. Most likely I've made a zillion errors or this has been done already... I guess I'd like to know the best methods available, such as Strassen's algorithm. - The best known algorithm is Coppersmith-Winograd, which runs in time $O(n^{2.376})$. It is believed, however, that for every $\epsilon > 0$ there is an algorithm that runs in time $O(n^{2+\epsilon})$. There is an $\Omega(n^2\log n)$ lower bound by Ran Raz for (arithmetic) circuit complexity with some restrictions on the circuit. It will be very surprising if there were an $O(n^2)$ algorithm. Today's asymptotically efficient algorithms are not only complicated, but also impractical. Even Strassen's is only marginally practical. It will thus be even more surprising if there is a simple, practical $O(n^2)$ algorithm. Finally, I don't believe that the restriction to integers makes the problem fundamentally easier. - "Even Strassen's is only marginally practical." - Emphasis on "marginally"... I don't think I've ever seen anything where using Strassen's made for better results... –  J. M. Aug 11 '11 at 2:42 I think the best known Is Coppersmith-Winograd ($O(n^{2.376})$). I think there is a very good presentation on lots of the material, too. I already know it's essentially wrong. The basic idea is to use polynomials or generating functions (I constantly waste time on them) to store information, and then the multiplication can be accomplished by multiplying together copies of the matrixes alongside two "root of unity" filters. This cancels out all extra data generated by the multiplication. However, to extract the information, too much time is required. The multiplication can be finished in $O(n^2)$ using modular arithmatic, though. –  Matt Groff Aug 10 '11 at 21:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826169371604919, "perplexity": 503.63221884538655}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/solving-a-system-of-linear-equations.286164/
# Solving a system of linear equations 1. Jan 20, 2009 ### fluidistic 1. The problem statement, all variables and given/known data Find out if the following system of equation has solution. In case of having solutions, describe parametrically them all and give 2 of them explicitly. 2x-z=4 x-2y+2z=7 3x+2y=1 2. The attempt at a solution I don't see nowhere in my notes what they want by "describe parametrically". Anyway I've solved the system and it has only one solution : x=2, y=-5/2 and z=0. As I just typed them, it is the explicit solution... I don't see how I could give 2 of them if there is only 1 solution. And much less parametrically. Do you know what they mean? 2. Jan 20, 2009 ### Staff: Mentor Your answer is the same as I got. I believe that the problem was asking was that if there were multiple solutions (which would then depend on one or two parameters), you should give two specific solutions. In this case, there is only one solution, and you have given it, so you're done. 3. Jan 20, 2009 ### fluidistic Ah ok!! This is quite possible because there was more than one system of equations to solve. Thank you.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313454985618591, "perplexity": 531.9648703464483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00166.warc.gz"}
https://infoscience.epfl.ch/record/232218?ln=en
Formats Format BibTeX MARC MARCXML DublinCore EndNote NLM RefWorks RIS Abstract We report a study of the decay D-0 -> (KSKS0)-K-0 using 921 fb(-1) of data collected at or near the (sic)(4S) and (sic)(5S) resonances with the Belle detector at the KEKB asymmetric energy e(+)e(-) collider. The measured time-integrated CP asymmetry is A(CP)(D-0 -> (KSKS0)-K-0) = (-0.02 +/- 1.53 +/- 0.02 +/- 0.17)%, and the branching fraction is B(D-0 -> (KSKS0)-K-0) = (1.321 +/- 0.023 +/- 0.036 +/- 0.044) x 10(-4), where the first uncertainty is statistical, the second is systematic, and the third is due to the normalization mode (D-0 -> K-S(0)pi(0)). These results are significantly more precise than previous measurements available for this mode. The A(CP) measurement is consistent with the standard model expectation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708356261253357, "perplexity": 4866.682763643697}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038476606.60/warc/CC-MAIN-20210418103545-20210418133545-00534.warc.gz"}
https://math.stackexchange.com/questions/2587245/proof-of-theorem-7-chapter-5-in-hoffman-and-kunzes-linear-algebra-is-unclea
# Proof of Theorem 7 (Chapter 5) in Hoffman and Kunze's *Linear Algebra* is unclear Let $V$ be a free module of rank $n$ over a commutative ring $K$ with identity. We denote the space of all $r$-linear forms on $V$ by $M^r(V)$ and the space of all alternating $r$-linear forms by $\Lambda^r(V)$. For $L \in M^r(V)$ and any permutation $\sigma$ of $\{1,\dots,r\}$, we obtain another $r$-linear function $L_\sigma$ by defining $$L_\sigma(\alpha_1,\dots,\alpha_r) = L(\alpha_{\sigma 1},\dots,\alpha_{\sigma r})$$ for all $(\alpha_1,\dots,\alpha_r) \in V^r$. For each $L \in M^r(V)$, we define the alternating $r$-linear function $\pi_r L$ by $$\pi_r L = \sum_\sigma (\operatorname{sgn} \sigma) L_\sigma$$ where the sum is over all permutations $\sigma$ of $\{1,\dots,r\}$. Now, Theorem 7 of Chapter 5 in Hoffman and Kunze's Linear Algebra states the following: Theorem $7$. Let $K$ be a commutative ring with identity and let $V$ be a free $K$-module of rank $n$. If $r > n$, then $\Lambda^r(V) = \{0\}$. If $1 \leq r \leq n$, then $\Lambda^r(V)$ is a free $K$-module of rank $\binom{n}{r}$. Proof. Suppose $\{ \beta_1,\dots,\beta_n \}$ is an ordered basis for $V$ with dual basis $\{ f_1,\dots,f_n\}$. If $L \in M^r(V)$, then $$L = \sum_H L(\beta_{h_1},\dots,\beta_{h_r})\ f_{h_1}\! \otimes \dots \otimes f_{h_r} \tag{5-37}$$ where the sum extends over all $r$-tuples $H = (h_1,\dots,h_r)$ of integers between $1$ and $n$. If $L \in \Lambda^r(V)$, this sum need be extended only over the $r$-tuples $H$ for which $h_1,\dots,h_r$ are distinct because if $L$ is alternating then $$L(\beta_{h_1},\dots,\beta_{h_r}) = 0$$ whenever two subscripts $h_i$ are the same. If $r > n$ then in each $r$-tuple some integer must be repeated. Thus $\Lambda^r(V) = \{0\}$ if $r > n$. Now, suppose $1 \leq r \leq n$. We define an $r$-shuffle of $\{ 1,\dots, n\}$ to be an $r$-tuple $J = (j_1,\dots,j_r)$ such that $1 \leq j_1 < \dots < j_r \leq n$. There are $$\binom{n}{r} = \frac{n!}{r!(n-r)!}$$ such shuffles. Suppose we fix an $r$-shuffle $J$. Let $L_J$ be the sum of all the terms in $(5\text{-}37)$ for which the indexing $r$-tuple $H$ is a permutation of the $r$-shuffle $J$. If $\sigma$ is a permutation of $\{1,\dots,r\}$, then $$L(\beta_{j_{\sigma 1}},\dots,\beta_{j_{\sigma r}}) = (\operatorname{sgn} \sigma) L(\beta_{j_1},\dots,\beta_{j_r}).$$ Thus, $$L_J = L(\beta_{j_1},\dots,\beta_{j_r}) D_J \tag{5-38}$$ where \begin{align} D_J &= \sum_\sigma (\operatorname{sgn} \sigma)\ f_{j_{\sigma 1}}\! \otimes \dots \otimes f_{j_{\sigma r}} \tag{5-39}\\ &= \pi_r(f_{j_1}\! \otimes \dots \otimes f_{j_r}). \end{align} We see from $(5\text{-}39)$ that each $D_J$ is alternating and that $$L = \sum_{\text{shuffles J}} L(\beta_{j_1},\dots,\beta_{j_r}) D_J \tag{5-40}$$ for every $L$ in $\Lambda^r(V)$. The assertion is that the $\binom{n}{r}$ forms $D_J$ constitute a basis for $\Lambda^r(V)$. We have seen that they span $\Lambda^r(V)$. It is easy to see that they are independent. Hence, proved. My doubt is how in Equation $(5\text{-}39)$ we can go from the first line to the second line. It does not seem to follow directly from the definition of $\pi_r L$, because $$\pi_r(f_{j_1}\! \otimes \dots \otimes f_{j_r}) := \sum_\sigma (\operatorname{sgn} \sigma) (f_{j_1}\! \otimes \dots \otimes f_{j_r} )_\sigma \stackrel{?}{=} \sum_{\sigma} (\operatorname{sgn} \sigma)\ f_{j_{\sigma 1}}\! \otimes \dots \otimes f_{j_{\sigma r}}.$$ If someone can give me a step-by-step proof of the equality it would be really helpful. • Good job polishing Hoffman & Kunze for the 21st century! Let me add that the notion of an "$r$-shuffle" they use is completely nonstandard. – darij grinberg Jan 1 '18 at 21:51 Lemma: Let $1\leq r \leq n$. Consider the $r$-tuple $(j_1,\dots,j_r)$, $1 \leq j_1,\dots,j_r \leq n$. Let $\sigma$ be a permutation of $\{1,\dots,r\}$. Then, we have $$f_{j_{\sigma 1}}\! \otimes \dots \otimes f_{j_{\sigma r}} = (f_{j_1}\! \otimes \dots \otimes f_{j_r})_{\sigma^{-1}}.$$ Proof: For $(\alpha_1,\dots,\alpha_r) \in V^r$, we have \begin{align} (f_{j_{\sigma 1}}\! \otimes \dots \otimes f_{j_{\sigma r}})(\alpha_1,\dots,\alpha_r) &= f_{j_{\sigma 1}}(\alpha_1) \cdots f_{j_{\sigma r}}(\alpha_r) \\ &= f_{j_1}(\alpha_{\sigma^{-1} 1}) \cdots f_{j_r}(\alpha_{\sigma^{-1} r}) &&\text{(by rearranging terms)} \\ &= (f_{j_1}\! \otimes \dots \otimes f_{j_r})(\alpha_{\sigma^{-1} 1},\dots,\alpha_{\sigma^{-1} r}) \\ &= (f_{j_1}\! \otimes \dots \otimes f_{j_r})_{\sigma^{-1}}(\alpha_1,\dots,\alpha_r). \end{align} Since $(\alpha_1,\dots,\alpha_r)$ was an arbitrary element of $V^r$, we get $$f_{j_{\sigma 1}}\! \otimes \dots \otimes f_{j_{\sigma r}} = (f_{j_1}\! \otimes \dots \otimes f_{j_r})_{\sigma^{-1}}.$$ Hence, proved. Using this lemma, we show that equation $(5\text{-}39)$ is correct as follows. \begin{align} \pi_r(f_{j_1}\! \otimes \dots \otimes f_{j_r}) &:= \sum_{\sigma} (\operatorname{sgn} \sigma) (f_{j_1}\! \otimes \dots \otimes f_{j_r})_\sigma\\ &= \sum_\sigma (\operatorname{sgn} \sigma)\ f_{j_{\sigma^{-1} 1}}\! \otimes \dots \otimes f_{j_{\sigma^{-1} r}} &&(\text{by above lemma})\\ &= \sum_\sigma (\operatorname{sgn} \sigma^{-1})\ f_{j_{\sigma^{-1} 1}}\!\otimes \dots \otimes f_{j_{\sigma^{-1} r}} &&(\because \operatorname{sgn} \sigma = \operatorname{sgn} \sigma^{-1}). \end{align} Now, as $\sigma$ runs (once) over all permutations of $\{ 1,\dots,r \}$, so does $\sigma^{-1}$. Therefore, $$\pi_r(f_{j_1}\! \otimes \dots \otimes f_{j_r}) = \sum_{\tau}(\operatorname{sgn} \tau)\ f_{j_{\tau 1}}\! \otimes \dots \otimes f_{j_{\tau r}}$$ where the sum is taken over all permutations $\tau$ of $\{ 1,\dots,r \}$. But, the right-hand side is precisely $D_J$. Hence, proved.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999707937240601, "perplexity": 210.84670882928577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00282.warc.gz"}
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Magnetic_Resonance_Spectroscopies/Nuclear_Magnetic_Resonance/NMR%3A_Experimental/Diffusion_Ordered_Spectroscopy_(DOSY)
# Diffusion Ordered Spectroscopy (DOSY) Diffusion Ordered SpectroscopY (DOSY) utilizes magnetic field gradients to investigate diffusion processes occurring in solid and liquid samples. ## Theory ### Spin Diffusion In the classic formation of a spin-echo (i.e. 90o-$$\tau$$-180o-2$$\tau$$) the intensity of the echo will damp according to T2 relaxation processes. However, in the presence of inhomogeneities in the magnetic field, if a spin drifts to a location in which the magnetic field is different, the Larmor frequency changes and the spin will not refocus with the remaining spins. We can then re-write the bloch equation to account for diffusion processes $\frac{dM}{dt}=\gamma MB-\frac{M_x+M_y}{T_2}-\frac{M_z-M_0}{T_1}+D\nabla ^2M$ Typically, the changes in the Larmor frequency have a negligible effect. However, application of a Gradient, G, will induce a large change in larmor frequencies experienced at places in the sample. The principle behind the DOSY experiment is the formation of a magnetic field gradient echo. This is a spin-echo in which a magnetic field gradient is applied during the spin evolution. Thus spins that diffuse during this time will not refocus in the spin echo, as the application of the gradient will impart their own magnetic field. ## Pulse Program The sequence consists of a conventional Hahn echo (spin echo) sequence in which 2 gradient pulses are applied at equal timings after the 90 and 180 degree pulses. The gradients are opposite in magnitude. The basic principle relies on the diffusion of spins in the sample. Initially the 90 degree pulse tips the magnetization into the x-y plane, where spins begin to precess with their characteristic Larmor frequencies. The application of a gradient sometime after the 90 degree pulse, encodes a spatial component to the spin. That is the gradient is not uniform over the sample and therefore, the processional frequencies will change. Next the application of the 180 flips the magnetization to refocus the spins. However, the spins, due to the application of the gradient will not refocus. The application of a gradient at the same time (but with opposite direction) will refocus the spins at a total time of 2 $$\tau$$. If a spin diffuses to a different place in the sample, the refocusing will not occur, leading the dampening of the echo intensity. Below is the basic pulse sequence for a pulsed field gradient experiment. ## Processing Processing the data is fairly straight forward. Apply a Fourier Transformation along the F2 dimension. The F1 dimension, in which either the time or the gradient was incremented remains in the time domain. The echo intensity of a given peak is then described by: $\ln \mu (g_a, t_c)-\ln \mu(0, t_c)=-C_n \gamma ^2 D \delta ^2 g_a ^2 t_D$ where $$\mu((g_a, t_c)$$ is the echo amplitude after the gradient application, $$ln m(0, t_c)$$ is the echo amplitude with no gradient applied, Cn is a constant that depends on the particular pulse sequence used, $$\gamma$$ is the gyromagnetic ratio, D is the diffusion coefficient, $$\delta$$ is the width of the applied gradient, and tis the diffusion time. ## Analysis Use this feature to ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8694626092910767, "perplexity": 931.3692875900101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578650225.76/warc/CC-MAIN-20190424154437-20190424180437-00327.warc.gz"}
https://gmatclub.com/forum/points-m-and-p-lie-on-square-lnqr-and-lm-lq-what-is-the-162164.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 17 Jun 2019, 21:59 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Points M and P lie on square LNQR, and LM = LQ. What is the Author Message TAGS: Hide Tags Intern Joined: 23 Jan 2013 Posts: 6 Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags Updated on: 25 Oct 2013, 09:11 3 30 00:00 Difficulty: 25% (medium) Question Stats: 76% (01:43) correct 24% (01:47) wrong based on 952 sessions HideShow timer Statistics Attachment: Untitled2.png [ 7.24 KiB | Viewed 29560 times ] Points M and P lie on square LNQR, and LM = PQ. What is the length of the line segment PQ? (1) $$PR=\frac{4\sqrt{10}}{3}$$ (2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. Originally posted by Pmar2012 on 25 Oct 2013, 08:46. Last edited by Bunuel on 25 Oct 2013, 09:11, edited 2 times in total. Edited the question. Math Expert Joined: 02 Sep 2009 Posts: 55631 Re: Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 25 Oct 2013, 09:09 16 9 Points M and P lie on square LNQR, and LM = PQ. What is the length of the line segment PQ? (1) $$PR=\frac{4\sqrt{10}}{3}$$. We know two sides (PR and PQ) in right triangle PQR, thus we can find the third side PQ. Sufficient. (2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. Say LM = PQ = x, then the area of the shaded region is 2*(1/2*4*x)=4x. The area of unshaded region is 4*4-4x=16-4x. Thus we have that (unshaded)/(shaded)=(16-4x)/4x=2/1. We can find x. Sufficient. _________________ General Discussion Senior Manager Joined: 13 May 2013 Posts: 415 Re: Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 11 Dec 2013, 09:53 Points M and P lie on square LNQR, and LM = PQ. What is the length of the line segment PQ? (1) PR=\frac{4\sqrt{10}}{3} This is pretty self explanatory. Sufficient. (2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. Because we know that LM = PQ, and that the figure is a square, we know that both shaded triangles are equal to one another. We know that the total area of the square (including the shaded triangles) = 16. If the ratio of unshaded to shaded is 2:1 we can set up an equation. x:16 = 2:3 Where x represents the unshaded portion relative to the entire square and 2:3 represents the ratio given to us. x = 32/3. Now we can find the remaining area of the triangles (which are both equal to one another). Finally, we plug in the area and the one given base length (4) into the area of a triangle formula to get the missing length (i.e. PQ or ML.) Sufficient. D Manager Joined: 05 Jul 2015 Posts: 97 GMAT 1: 600 Q33 V40 GPA: 3.3 Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 21 Nov 2015, 19:19 1 Easy solution: A) Obviously sufficient because a^2+b^2=c^2 B) The white to gray is 2:1 so the gray is 1/3 of the total area. The area is 4*4. So the missing line segment is (4/3). ANS: D Math Revolution GMAT Instructor Joined: 16 Aug 2015 Posts: 7462 GMAT 1: 760 Q51 V42 GPA: 3.82 Re: Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 26 Nov 2015, 08:00 Forget conventional ways of solving math questions. In DS, Variable approach is the easiest and quickest way to find the answer without actually solving the problem. Remember equal number of variables and independent equations ensures a solution. Points M and P lie on square LNQR, and LM = PQ. What is the length of the line segment PQ? (1) PR=410 − − √ 3 (2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. We can know PQ if we know PR, so there is one variable (PR), and 2 equations are given by the conditions, so there is high chance (D) will be the answer. For condition 1, from (4 sqrt 10/3)^2-4^2=PQ^2. PQ=4/3. This is sufficient For condition 2, if the ratio of the area is 2:1, the height is the same, so NP:PQ=2:1, and PQ=4/3 so this is sufficient as well, and the answer becomes (D). For cases where we need 1 more equation, such as original conditions with “1 variable”, or “2 variables and 1 equation”, or “3 variables and 2 equations”, we have 1 equation each in both 1) and 2). Therefore, there is 59 % chance that D is the answer, while A or B has 38% chance and C or E has 3% chance. Since D is most likely to be the answer using 1) and 2) separately according to DS definition. Obviously there may be cases where the answer is A, B, C or E. _________________ MathRevolution: Finish GMAT Quant Section with 10 minutes to spare The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy. "Only \$149 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" Manager Joined: 23 Dec 2013 Posts: 142 Location: United States (CA) GMAT 1: 710 Q45 V41 GMAT 2: 760 Q49 V44 GPA: 3.76 Re: Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 22 Jul 2017, 19:17 1 Pmar2012 wrote: Attachment: Untitled2.png Points M and P lie on square LNQR, and LM = PQ. What is the length of the line segment PQ? (1) $$PR=\frac{4\sqrt{10}}{3}$$ (2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. The goal is to find then length of PQ. Statement 1) PR = 4sqrt(10)/3 By the Pythagorean theorem, we can determine that x^2 + 4^2 = 16/9*10 x^2 = 160/9-16 x^2 = 17.77-16 Sufficient. Statement 2) The ratio of the area of the unshaded region to the total area of the shaded region is 2 to 1. Let PQ = x The area of the shaded region is 2(1/2*4*x) = 4x The total area of the square is 4^2 = 16 (16-4x)/(4x) = 2/1 16-4x = 8 x 16 = 12x 4 = 3x 4/3 = x Sufficient. Director Joined: 12 Feb 2015 Posts: 862 Re: Points M and P lie on square LNQR, and LM = LQ. What is the  [#permalink] Show Tags 01 Jan 2019, 11:00 St1:- Apply pythagoras theorem as the lengths of two sides are given and one is missing. St2:- Apply the formula for area of a triangle formula to find the length of the side. Option D is the correct answer _________________ "Please hit +1 Kudos if you like this post" _________________ Manish "Only I can change my life. No one can do it for me" Re: Points M and P lie on square LNQR, and LM = LQ. What is the   [#permalink] 01 Jan 2019, 11:00 Display posts from previous: Sort by
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8273332715034485, "perplexity": 1787.1536353443985}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00126.warc.gz"}
http://link.springer.com/article/10.1007%2FBF02579150
Combinatorica , Volume 4, Issue 4, pp 373–395 # A new polynomial-time algorithm for linear programming • N. Karmarkar Article DOI: 10.1007/BF02579150 Karmarkar, N. Combinatorica (1984) 4: 373. doi:10.1007/BF02579150 ## Abstract We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requiresO(n3.5L) arithmetic operations onO(L) bit numbers, wheren is the number of variables andL is the number of bits in the input. The running-time of this algorithm is better than the ellipsoid algorithm by a factor ofO(n2.5). We prove that given a polytopeP and a strictly interior point a εP, there is a projective transformation of the space that mapsP, a toP′, a′ having the following property. The ratio of the radius of the smallest sphere with center a′, containingP′ to the radius of the largest sphere with center a′ contained inP′ isO(n). The algorithm consists of repeated application of such projective transformations each followed by optimization over an inscribed sphere to create a sequence of points which converges to the optimal solution in polynomial time. 90 C 05 ## Authors and Affiliations • N. Karmarkar • 1 1. 1.AT&T Bell LaboratoriesMurray HillU.S.A.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932860493659973, "perplexity": 1249.0031863687336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00319-ip-10-171-6-4.ec2.internal.warc.gz"}
https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_1819273
English # Item ITEM ACTIONSEXPORT Released Report #### Incrementally Maintaining the Number of l-cliques ##### MPS-Authors /persons/resource/persons44519 Grandoni,  Fabrizio Discrete Optimization, MPI for Informatics, Max Planck Society; ##### External Ressource No external resources are shared ##### Fulltext (public) There are no public fulltexts stored in PuRe ##### Supplementary Material (public) There is no public supplementary material available ##### Citation Grandoni, F.(2002). Incrementally Maintaining the Number of l-cliques (MPI-I-2002-1-002). Saarbrücken: Max-Planck-Institut für Informatik. Cite as: http://hdl.handle.net/11858/00-001M-0000-0014-6C9B-B ##### Abstract The main contribution of this paper is an incremental algorithm to update the number of $l$-cliques, for $l \geq 3$, in which each node of a graph is contained, after the deletion of an arbitrary node. The initialization cost is $O(n^{\omega p+q})$, where $n$ is the number of nodes, $p=\lfloor \frac{l}{3} \rfloor$, $q=l \pmod{3}$, and $\omega=\omega(1,1,1)$ is the exponent of the multiplication of two $n x n$ matrices. The amortized updating cost is $O(n^{q}T(n,p,\epsilon))$ for any $\epsilon \in [0,1]$, where $T(n,p,\epsilon)=\min\{n^{p-1}(n^{p(1+\epsilon)}+n^{p(\omega(1,\epsilon,1)-\epsilon)}),n^{p \omega(1,\frac{p-1}{p},1)}\}$ and $\omega(1,r,1)$ is the exponent of the multiplication of an $n x n^{r}$ matrix by an $n^{r} x n$ matrix. The current best bounds on $\omega(1,r,1)$ imply an $O(n^{2.376p+q})$ initialization cost, an $O(n^{2.575p+q-1})$ updating cost for $3 \leq l \leq 8$, and an $O(n^{2.376p+q-0.532})$ updating cost for $l \geq 9$. An interesting application to constraint programming is also considered.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831793487071991, "perplexity": 1199.3331463411164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514423.60/warc/CC-MAIN-20210118061434-20210118091434-00585.warc.gz"}
http://mathoverflow.net/feeds/question/107774
Number of 2-dimensional irreducible representations of a finite group ? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T09:50:27Z http://mathoverflow.net/feeds/question/107774 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/107774/number-of-2-dimensional-irreducible-representations-of-a-finite-group Number of 2-dimensional irreducible representations of a finite group ? Alexander Chervov 2012-09-21T14:19:57Z 2012-09-22T08:58:23Z <p><strong>Question:</strong> What is the number of two-dimensional irreducible representations of a finite group ? How it can be expressed in groups-theoretic terms ? (Number of 1-dimensional irreps is |G/[G,G]| ).</p> <p>The question is somewhat naive and actually I heard it from our teachers when I was an undergrad many years ago, it was always outspoken with some kind of mysterious flavour - "nobody knows, but may be...". </p> <hr> <p>Some analogies:</p> <p>There is a <a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&amp;jrnid=faa&amp;paperid=1751&amp;option_lang=eng" rel="nofollow">paper</a> by V. Drinfeld 1981, which title is "Number of two-dimensional irreducible representations of the fundamental group of a curve over a finite field". </p> <p>The main theorem express the number via the zeta function of the curve over F_q. (Russian pdf is for free - main formula can be seen from there). Of course, it is very specific class of the groups, however may be something can be done ?</p> <p>Another analogy which comes to mind is related to topological quantum field theories, quantization of Wess-Zumino and Chern-Simons models. One consider the moduli space of d-dimensional representatation of the fundamental group. It is natarally symplectic manifold and its VOLUME is somewhat an anologue of the "number" of irreps for finite group. The volume can be calculated and is related to the famous Verlinde formula.</p> <p>So, it is of course, both considerations are related to fundamental (=Galois) groups of CURVES. </p> <p><strong>Question:</strong> WHY? What makes fundamental (=Galois) groups of curves so specific ? Can it be somehow generalized to other classes of curves ? </p> http://mathoverflow.net/questions/107774/number-of-2-dimensional-irreducible-representations-of-a-finite-group/107805#107805 Answer by Geoff Robinson for Number of 2-dimensional irreducible representations of a finite group ? Geoff Robinson 2012-09-21T21:33:44Z 2012-09-22T08:58:23Z <p>This is not a complete answer in any sense, but I will make a few comments. The irreducible subgroups $G$ of ${\rm GL}(2,\mathbb{C})$ are the primitive ones, which have $G/Z(G)$ isomorphic to $A_{4},S_{4}$ or $A_{5}$, and imprimitive groups, which have an Abelian normal subgroup of index $2.$ On the other hand, any finite group with an Abelian normal subgroup of index $2$ has all its irreducible representations of degree $1$ or $2,$ so the number of $2$-dimensional irreducible representations is easily calculated. A more careful analysis of the primitive case shows that if $G$ has a faithful $2$-dimensional primitive complex representations, then $G = Z(G)E,$ where $E \cong {\rm SL}(2,3), {\rm GL}(2,3),{\rm SL}(2,5)$ or the binary icosahedral group (also, a double cover of order $48$ of $S_{4},$ (as ${\rm GL}(2,3)$ is), but with a generalized quaternion Sylow $2$-subgroup). Now let $G$ be any finite group, and let $K$ be the intersection of the kernels of the irreducble complex representation of $G$ of degree at most $2$. The above discussion means that the only possible non-Abelian composition factor of $G/K$ is $A_{5}$, though it may be repeated if it appears. The answer to your question only depends on the structure of $G/K,$ so we may reduce to the case that all composition factors of $G$ are cyclic or $A_{5}.$ Also, by Clifford theory, we may suppose that the Fitting subgroup $F(G)$ is a direct product of an Abelian group of odd order and a $2$-group, and that all components of $G$ (if there are any) are isomorphic to ${\rm SL}(2,5).$ Further analysis can be carried out, but I believe that the analysis is not entirely straightforward. Perhaps this outline will help others to complete it, so I submit it.</p>
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9221264123916626, "perplexity": 284.2495307730576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705926946/warc/CC-MAIN-20130516120526-00030-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/265618/prove-that-int-01-ln-left-frac1-a-x1-a-right-frac1-ln-x-math
# Prove that $\int_{0}^{1} \ln\left(\frac{1-a x}{1-a}\right) \frac{1}{\ln x} \mathrm{dx} = -\sum_{k=1}^{\infty} a^{k} \frac{\ln(1+k)}{k}, \space a<1$ Prove that $$\int_{0}^{1} \ln\left(\frac{1-a x}{1-a}\right) \frac{1}{\ln x} \mathrm{dx} = -\sum_{k=1}^{\infty} a^{k} \frac{\ln(1+k)}{k}, \space a<1$$ I find this question rather troublesome since the both sides seem hard to compute. I appreciate any hint/suggestion. Thanks! - Develop series expansion of the logarithm of the quotient: $$\ln \left( \frac{1-a x}{1-a} \right) = \ln\left(1-a x\right) - \ln\left(1-a\right) = \sum_{k=1}^\infty \frac{1}{k} a^k \left( 1-x^k \right)$$ It is now down to evaluation of $$\begin{eqnarray} \int_0^1 \frac{1-x^k}{\ln(x)} \mathrm{d}x &\stackrel{x = \exp(-t)}{=}& -\int_0^\infty \mathrm{e}^{-t} \frac{1-\exp(-k t)}{t} \mathrm{d}t \\ &=& \int_0^{k} \frac{\mathrm{d}}{\mathrm{d}m } \left(-\int_0^\infty \mathrm{e}^{-t} \frac{1-\exp(-m t)}{t} \mathrm{d}t\right)\mathrm{d}m \\ &=& -\int_0^k \int_0^\infty \exp(-t(m+1)) \mathrm{d}m \\ &=& -\int_0^k \frac{\mathrm{d}m}{m+1} = -\log(k+1) \end{eqnarray}$$ - nice answer! Since $\int_0^1 \frac{1-x^k}{\ln(x)} \mathrm{d}x$ is a Frullani integral, the answer is straightforward. –  Chris's sis the artist Dec 26 '12 at 23:46 Thanks. For those uninitiated, here is a link to Frullani's integral page on MathWorld. –  Sasha Dec 26 '12 at 23:48 isn't the Taylor form you used above available only for the case when $|ax|<1$ ? –  Chris's sis the artist Dec 27 '12 at 9:15 Yes, but $|a x| < |a| < 1$ –  Sasha Dec 27 '12 at 13:51 how about $a<-1$ ? –  Chris's sis the artist Dec 27 '12 at 13:54 Here's another way to handle the integral appearing in @Sasha's answer. $$\begin{eqnarray*} \int_0^1 dx \, \frac{1-x^k}{\log x} &=& \int_\infty^{0} d\alpha\, \frac{\partial}{\partial \alpha} \int_0^1 dx \, \frac{1-x^k}{\log x} x^\alpha \\ &=& \int_\infty^{0} d\alpha\, \int_0^1 dx \, (1-x^k)x^\alpha \\ &=& \int_\infty^{0} d\alpha\, \left(\frac{1}{\alpha+1} - \frac{1}{k+\alpha+1}\right) \\ &=& \left.\log\frac{\alpha+1}{k+\alpha+1}\right|_\infty^{0} \\ &=& -\log(k+1) \end{eqnarray*}$$ - Nice solution (+1) –  Chris's sis the artist Dec 27 '12 at 8:40 @Chris'ssister: Thanks, another interesting question! (+1) –  user26872 Dec 27 '12 at 8:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105631113052368, "perplexity": 1481.1749802997672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988250.59/warc/CC-MAIN-20150728002308-00331-ip-10-236-191-2.ec2.internal.warc.gz"}
https://gcamp6f.com/2014/02/15/beyond-correlation-analysis-part-2/
## Beyond correlation analysis: mutual information Last time, I mentioned a website which gives an overview of methods to analyze neuronal (and other) networks. Let’s have a closer look. Here’s a list of the methods: • Cross-correlation (the standard method) • Mutual Information • Incremental Mutual Information • Granger Causality • Transfer Entropy • Incremental Transfer Entropy • Generalized Transfer Entropy • Bayesian Inference • Anatomical Reconstruction To be honest, I never heard of most of them. So let’s simply go into it and start with ‘Mutual Information’. It is based on entropy reduction. Entropy is a measure for the uncertainty about a variable. So, mutual information is the reduction of uncertainty about a variable X if you know everything about another variable Y. Here is the definition (Peter E. Latham and Yasser Roudi (2009) Mutual information. Scholarpedia, 4(1):1658): $I(X;Y) = \sum_{x,y} P_{XY}(x,y) \log \left( \frac{P_{XY}(x,y)}{P_X(x) P_Y(y)} \right)$ These probability distributions can be thought of as the distribution of the membrane voltage or activity values of single neurons or bigger brain areas. If two activities are completely correlated, the one activity contains all information about the other neuron, i.e., the mutual information is high; if they have nothing to do with each other – in other words: $P_{XY}(x,y) = P_X(x) P_Y(y)$, which eliminates the logarithm -, the mutual information is zero. So what is the difference to cross-correlation, except for the fact that the formalism seems to be more complicated? Imagine a szenario where the activity of neuron X is not correlated to the activity of neuron Y, but to the square or cube of the activity of neuron Y. This is something which would not be captured appropriately by a simple correlation analysis which is based on covariance $\langle X\,Y\rangle$; but it would be captured e.g. by the measure $\langle X^2\,Y\rangle$ or $\langle X\,Y^3\rangle$ – and also by mutual information. Is this likely to play a role? Maybe yes. Information processing in neurons is highly non-linear (there is a threshold and a saturation for firing activity, and only a small regime of the input/output-curve can be linearized). Our point of departure mentions that this method is not appropriate for calcium imaging data. I do not really see this point. So let’s simply try out this formalism on the data which I analyzed in the last post. To get a distribution from the data, we can either fit a smooth distribution, or we can use bins (say, 15-30 bins for every neuron) and thereby create a discrete probability distribution function (like, a histogram). If we create too many bins, we would not find any mutual information; if we create only two bins for each neuron (corresponding to on/off), we would certainly detect some mutual information. Also, you could imagine taking non-equidistant activity bins. Some very ugly nested for-loops in Matlab later (number neurons x number neurons x number bins x number bins), I get $I(X;Y)$. It is instructive to draw it like the correlation matrizes: Left: Correlation analysis matrix. Right: Mutual information analysis matrix. Note that the values on the the diagonal are higher and not all the same for the mutual information matrix. Looks quite – similar … but there are some differences: 1. Anticorrelation (Neuron #6) is not shown with a specially low value; this is to be expected, as information “doesn’t care about the sign”, contrary to correlation analysis. From correlation analysis, it looks like really strong correlation, whereas mutual information shows that the information dependence of these neurons is not as high for the anti-correlated as for the correlated neurons. Whom should we trust? Nobody of course. 2. The diagonal elements don’t have all the same value (for correlation matrizes, it is always 1). I don’t know if this is important or not. 3. Neuron #4 shows a strange information dependence on the noise neurons #7-8,9,15-16. This is an artefact. It also remains if I shuffle the activity of all neurons temporally, so that every correlation should be destroyed. Where does this artifact come from? – When I created the probability distributions, I divided the activity for each neuron in 25 bins. For most neurons, this gave roughly a gaussian profile. For neuron #4, however, it’s rather like silence most of the time, whereas the activity is limited to small time windows (cf. last blog entry, fig.1). Therefore, by this procedure, most of the time points fall into the first, ‘low activity’ bin. This leads to a high value in the denominator of the formula for this bin, which leads to a very low value for the argument of the logarithm, which leads to a very high value for the logarithm; which in turn is sufficient to pretend information exchange. From which follows, you need to write more sophisticated algorithms in order to overcome such problems. This entry was posted in Data analysis, Network analysis. Bookmark the permalink. ### 6 Responses to Beyond correlation analysis: mutual information 1. fahim says: Hello Due to the fact that TOLBOX is not mutual information in MATLAB software, please send TOLBOX and the code that you use. 2. Hello Fahim, for the analysis shown in this blog post, I just wrote my own mutual information code (for coding practice reasons), which is not really optimized in terms of computational cost. However, there are toolboxes available. A short search brings up several of them on Mathworks File Exchange. Some of those require MEX files to be compiled, which can be tedious, depending on your OS. The one without MEX files that I would recommend at first glance – because it seems to be nice and transparent – is this one: https://www.mathworks.com/matlabcentral/fileexchange/35625-information-theory-toolbox Although I have to say that I didn’t go through the code carefully. • fahim says: Thanks for the Rupprecht This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114967942237854, "perplexity": 983.0052111606831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00071.warc.gz"}
https://rationalsolver.com/16574/for-each-subsequence-convergent-then-conclude-convergent?show=16575
If $(x_n)$ is a sequence in $\mathbb R$ such that for each $m \in\mathbb N$ with $m > 1$,  the subsequence $(x_{mn})$ of $(x_n)$ is convergent, then can we conclude that $(x_n)$ is also  convergent? No. We can't conclude anything. Note that for each $m,n>1$, $mn$ is always a composite number. So we can define a sequence which takes a constant value $c_1$  on each composite number and another constant value $c_2$ on prime numbers. Then we can see that each of its subsequence of the form $(x_{mn})$ will converge to $c_1$ whereas the subsequence $(x_p)$ where $p$ runs over the prime number sets then $(x_p)$ will converge to $c_2$. This conclude that our sequence is not convergent.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968754649162292, "perplexity": 61.51386155218272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00661.warc.gz"}
https://mathoverflow.net/questions/248215/cocompletness-of-the-2-category-of-2-rings/248224
# (Co)completness of the 2-category of 2-Rings" Let $2Ring$ denote the 2-category of cocomplete categories with monoidal structures that preserve colimits in each argument. The morphisms are cocontinuous and strong monoidal functors (which are categorified ring homomorphisms). This notion is described in detail in Martin Brandenburg's thesis Tensor categorical foundations of algebraic geometry, arXiv:1410.1716 (pdf) and many authors use a slightly more/less structured notion as a categorification of the category of rings. Is $2Ring$ is (co)complete? I have in mind strict 2-(co)limits but would be interested in other flavors of 2-categorical (co)limits as well. Additionally, I would also like to consider the same question but replacing $2Ring$ with $Comm2Ring$. $Comm2Ring$ is the same as $2Ring$ except with symmetric monoidal structures. What if we impose that $2Ring$ is the 2-category of locally presentable categories with monoidal structures which preserve colimits in each argument? (This is the approach taken in Chrivasitu and Johnson-Freyd's The fundamental pro-groupoid of an affine 2-scheme, arXiv:1105.3104v4 (pdf).) • Do you mean symmetric monoidal here? Also, you haven't specified what functors you want to consider; I think a reasonable choice is cocontinuous and strong monoidal. – Qiaochu Yuan Aug 24 '16 at 23:09 • I'd like to consider plain monoidal along with symmetric monoidal. Those functors are exactly what I wanted. I'll add this into the question. – user84563 Aug 24 '16 at 23:12 • The coproduct of two commutative rings has underlying abelian group the tensor product of the underlying abelian groups. Similarly, one expects that the coproduct of two cocomplete symmetric monoidal categories has as underlying cocomplete category the tensor product of the underlying cocomplete categories. The problem is, however, that it seems to be unknown if every pair of two cocomplete categories admits a tensor product (defined by classifying functors which are cocontinuous in both variables), although this is the case if one of them is locally presentable. – HeinrichD Sep 12 '16 at 8:12 • Similarly, the coequalizer of two cocontinuous symmetric monoidal functors will be a localization of the domain, but (reflective) localizations only really work well in the locally presentable setting, as is explained in Brandenburg's thesis, Section 5.8. It seems to be quite reasonable that the 2-category of locally presentable symmetric monoidal categories (by definition, this means that the tensor product is cocontinuous in both variables) is 2-cocomplete and that locally presentability is crucial here. – HeinrichD Sep 12 '16 at 8:16 I would argue that the "correct" version of (co)limits is the "pseudo" or "bi" or "weak" or "strong" or "homotopy" version. (As far as I can tell, all of these words mean the same thing.) The bicategories of cocomplete or of locally presentable categories are closed under limits: indeed, you can show by hand that the limit just as categories of cocomplete categories along cocontinuous functors is itself cocomplete, and Bird's thesis verifies that if all constituents are locally presentable, then so is the limit. Lurie has shown the same statement in the $\infty$-categorical world. 2-rings, in either Brandenburg's sense or ours, are therefore the symmetric monoidal objects in a complete bicategory. Just as in the 1-categorical case, I believe it is straightforward (if tedious) to show that the limit of underlying categories carries a canonical symmetric monoidal structure, making it the limit of 2-rings. So this handles the "limit" version of your question. The colimit version I believe also works, although you cannot just take the colimit of underlying categories. I haven't thought as much about the bicategory of all cocomplete categories, but the bicategory of locally presentable categories is known, again by Bird's thesis, to contain all colimits. (They are computed by computing instead the limit along right adjoints to the cocontinuous functors you have in mind.) Then you can present a colimit of 2-rings (in our sense) much as you would present a colimit of commutative algebras: any time you see a coproduct, for example, write down a free commutative algebra; any time you see a quotient, do it symmetric monoidally; etc. for the other types of colimits in the bicategorical world. I haven't done the exercise fully myself, but again at least in the locally presentable case I'm sure you can do it (in terms of sketches if necessary). I don't recall off the top of my head whether colimits of arbitrary cocomplete categories exist. I could imagine that you would run into size problems, but I don't know. I should say, there is also a fair amount of work on the bicategorical version of monads, and (co)limits, in all senses (including lax and oplax), of their algebras. I don't recall the references, but my memory is that there are no surprises from the 1-categorical case. • "...are therefore the symmetric monoidal objects in a complete bicategory." So in your sense, this would be the bicategory of presentable categories with the cartesian product? – user84563 Aug 25 '16 at 1:46 • Also do you have references for presenting the colimit of 2-rings and the limits of monoid objects carrying a canonical monoidal structure? – user84563 Aug 25 '16 at 4:46 • @user84563 Ah, herm, you're right, I need to think more. The monoidal structure is the tensor product, so abstract nonsense of monads might not be good enough. Results from Lurie's work are plenty strong enough --- you say the right words about algebras for an operad in a closed monoidal higher category ---, but I don't know of a paper that carefully matches Lurie-style notions of (co)limit with the older version used in the bicategory literature. There might be one, and I think everyone trusts that these do match, I just don't know a paper. – Theo Johnson-Freyd Aug 25 '16 at 20:27 • The long and the short of it is that the answer to your question (at least in the locally presentable version --- I have less intuition in the realm of all cocomplete categories) is Platonically yes, but I don't have papers to point to. I'm confident that I could prove directly that all limits and colimits of 2-rings (in our sense) exist, if I ever needed to. I'm also confident that this is covered by (known) abstract nonsense of $\infty$-operads, modulo (possibly not yet written down?) translations between the bicategory and $\infty$-category worlds. – Theo Johnson-Freyd Aug 25 '16 at 20:30 • Symmertic or braided or plain doesn't matter, since you should just appeal (either formally, by citing Lurie et al, or informally by unpacking the arguments) to facts about limits and colimits of algebras for operads. – Theo Johnson-Freyd Aug 25 '16 at 20:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8919620513916016, "perplexity": 380.0557840558617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371606067.71/warc/CC-MAIN-20200405150416-20200405180916-00172.warc.gz"}
https://www.physicsforums.com/threads/divergence-operator-on-the-incompressible-n-s-equation.801437/
# Divergence Operator on the Incompressible N-S Equation Tags: 1. Mar 4, 2015 ### C. C. Hello All, If I apply the Divergence Operator on the incompressible Navier-Stokes equation, I get this equation: $$\nabla ^2P = -\rho \nabla \cdot \left [ V \cdot \nabla V \right ]$$ In 2D cartesian coordinates (x and y), I am supposed to get: $$\nabla ^2P = -\rho \left[ \left( \frac {\partial u} {\partial x} \right) ^2 + 2 \left (\frac{\partial u}{\partial y} \frac{\partial v}{\partial x} \right ) + \left ( \frac {\partial v}{\partial y} \right )^2 \right ]$$ Where does this term come from $$2 \left ( \frac{\partial u}{\partial y} \frac{\partial v}{\partial x} \right )$$? Any guidance would be helpful. Thanks! 2. Mar 5, 2015 ### matteo137 First, I think your starting equation is wrong: inside square brackets you have a scalar, and then you take a dot product? Try with $\nabla^2P = - \rho \nabla \cdot (V \cdot \nabla)V$ 3. Mar 5, 2015 ### Staff: Mentor The thing in brackets in the original post is not a scalar. ∇V is the velocity gradient tensor. The original expression has the same meaning as your representation. Chet 4. Mar 5, 2015 ### matteo137 Thank you! Similar Discussions: Divergence Operator on the Incompressible N-S Equation
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565588235855103, "perplexity": 2042.3980674744298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105304.35/warc/CC-MAIN-20170819051034-20170819071034-00097.warc.gz"}
https://www.maa.org/press/periodicals/convergence/mathematical-treasure-hadamard-s-calculus-of-variations
# Mathematical Treasure: Hadamard’s Calculus of Variations Author(s): Frank J. Swetz (The Pennsylvania State University) Jacques Hadamard (1865-1963) was a talented and prolific French mathematician who made many contributions to mathematics, particularly in the field of number theory. His Leçons sur le calcul des variations (1910) helped lay the foundations of functional analysis.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140636920928955, "perplexity": 2259.6021662767776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578733077.68/warc/CC-MAIN-20190425193912-20190425215043-00022.warc.gz"}
http://dictionnaire.sensagent.leparisien.fr/Homological%20algebra/en-en/
Publicité ▼ anglais ▼ rechercher ## définition - Homological algebra voir la définition de Wikipedia Wikipedia # Homological algebra Homological algebra is the branch of mathematics which studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert. The development of homological algebra was closely intertwined with the emergence of category theory. By and large, homological algebra is the study of homological functors and the intricate algebraic structures that they entail. One quite useful and ubiquitous concept in mathematics is that of chain complexes, which can be studied both through their homology and cohomology. Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other 'tangible' mathematical objects. A powerful tool for doing this is provided by spectral sequences. From its very origins, homological algebra has played an enormous role in algebraic topology. Its sphere of influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. K-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes. ## Chain complexes and homology The chain complex is the central notion of homological algebra. It is a sequence $(C_\bullet, d_\bullet)$ of abelian groups and group homomorphisms, with the property that the composition of any two consecutive maps is zero: $C_\bullet: \cdots \longrightarrow C_{n+1} \stackrel{d_{n+1}}{\longrightarrow} C_n \stackrel{d_n}{\longrightarrow} C_{n-1} \stackrel{d_{n-1}}{\longrightarrow} \cdots, \quad d_n \circ d_{n+1}=0.$ The elements of Cn are called n-chains and the homomorphisms dn are called the boundary maps or differentials. The chain groups Cn may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring R. The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of R-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles Zn = Ker dn and the boundaries Bn = Im dn+1, where Ker d and Im d denote the kernel and the image of d. Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as $B_n \subseteq Z_n \subseteq C_n.$ Subgroups of abelian groups are automatically normal; therefore we can define the nth homology group Hn(C) as the factor group of the n-cycles by the n-boundaries, $H_n(C) = Z_n/B_n = \operatorname{Ker}\, d_n/ \operatorname{Im}\, d_{n+1}.$ A chain complex is called acyclic or an exact sequence if all its homology groups are zero. Chain complexes arise in abundance in algebra and algebraic topology. For example, if X is a topological space then the singular chains Cn(X) are formal linear combinations of continuous maps from the standard n-simplex into X; if K is a simplicial complex then the simplicial chains Cn(K) are formal linear combinations of the n-simplices of X; if A = F/R is a presentation of an abelian group A by generators and relations, where F is a free abelian group spanned by the generators and R is the subgroup of relations, then letting C1(A) = R, C0(A) = F, and Cn(A) = 0 for all other n defines a sequence of abelian groups. In all these cases, there are natural differentials dn making Cn into a chain complex, whose homology reflects the structure of the topological space X, the simplicial complex K, or the abelian group A. In the case of topological spaces, we arrive at the notion of singular homology, which plays a fundamental role in investigating the properties of such spaces, for example, manifolds. On a philosophical level, homological algebra teaches us that certain chain complexes associated with algebraic or geometric objects (topological spaces, simplicial complexes, R-modules) contain a lot of valuable algebraic information about them, with the homology being only the most readily available part. On a technical level, homological algebra provides the tools for manipulating complexes and extracting this information. Here are two general illustrations. • Two objects X and Y are connected by a map f between them. Homological algebra studies the relation, induced by the map f, between chain complexes associated with X and Y and their homology. This is generalized to the case of several objects and maps connecting them. Phrased in the language of category theory, homological algebra studies the functorial properties of various constructions of chain complexes and of the homology of these complexes. • An object X admits multiple descriptions (for example, as a topological space and as a simplicial complex) or the complex $C_\bullet(X)$ is constructed using some 'presentation' of X, which involves non-canonical choices. It is important to know the effect of change in the description of X on chain complexes associated with X. Typically, the complex and its homology $H_\bullet(C)$ are functorial with respect to the presentation; and the homology (although not the complex itself) is actually independent of the presentation chosen, thus it is an invariant of X. ## Functoriality A continuous map of topological spaces gives rise to a homomorphism between their nth homology groups for all n. This basic fact of algebraic topology finds a natural explanation through certain properties of chain complexes. Since it is very common to study several topological spaces simultaneously, in homological algebra one is led to simultaneous consideration of multiple chain complexes. A morphism between two chain complexes, $F: C_\bullet\to D_\bullet$, is a family of homomorphisms of abelian groups Fn:Cn → Dn that commute with the differentials, in the sense that Fn -1 •  dnC = dnD • Fn for all n. A morphism of chain complexes induces a morphism $H_\bullet(F)$ of their homology groups, consisting of the homomorphisms Hn(F): Hn(C) → Hn(D) for all n. A morphism F is called a quasi-isomorphism if it induces an isomorphism on the nth homology for all n. Many constructions of chain complexes arising in algebra and geometry, including singular homology, have the following functoriality property: if two objects X and Y are connected by a map f, then the associated chain complexes are connected by a morphism F = C(f) from $C_\bullet(X)$ to $C_\bullet(Y),$ and moreover, the composition g • f of maps fX → Y and gY → Z induces the morphism C(g • f) from $C_\bullet(X)$ to $C_\bullet(Z)$ that coincides with the composition C(g) • C(f). It follows that the homology groups $H_\bullet(C)$ are functorial as well, so that morphisms between algebraic or topological objects give rise to compatible maps between their homology. The following definition arises from a typical situation in algebra and topology. A triple consisting of three chain complexes $L_\bullet, M_\bullet, N_\bullet$ and two morphisms between them, $f:L_\bullet\to M_\bullet, g: M_\bullet\to N_\bullet,$ is called an exact triple, or a short exact sequence of complexes, and written as $0 \longrightarrow L_\bullet \stackrel{f}{\longrightarrow} M_\bullet \stackrel{g}{\longrightarrow} N_\bullet \longrightarrow 0,$ if for any n, the sequence $0 \longrightarrow L_n \stackrel{f_n}{\longrightarrow} M_n \stackrel{g_n}{\longrightarrow} N_n \longrightarrow 0$ is a short exact sequence of abelian groups. By definition, this means that fn is an injection, gn is a surjection, and Im fn =  Ker gn. One of the most basic theorems of homological algebra, sometimes known as the zig-zag lemma, states that, in this case, there is a long exact sequence in homology $\ldots \longrightarrow H_n(L) \stackrel{H_n(f)}{\longrightarrow} H_n(M) \stackrel{H_n(g)}{\longrightarrow} H_n(N) \stackrel{\delta_n}{\longrightarrow} H_{n-1}(L) \stackrel{H_{n-1}(f)}{\longrightarrow} H_{n-1}(M) \longrightarrow \ldots,$ where the homology groups of L, M, and N cyclically follow each other, and δn are certain homomorphisms determined by f and g, called the connecting homomorphisms. Topological manifestations of this theorem include the Mayer–Vietoris sequence and the long exact sequence for relative homology. ## Foundational aspects Cohomology theories have been defined for many different objects such as topological spaces, sheaves, groups, rings, Lie algebras, and C*-algebras. The study of modern algebraic geometry would be almost unthinkable without sheaf cohomology. Central to homological algebra is the notion of exact sequence; these can be used to perform actual calculations. A classical tool of homological algebra is that of derived functor; the most basic examples are functors Ext and Tor. With a diverse set of applications in mind, it was natural to try to put the whole subject on a uniform basis. There were several attempts before the subject settled down. An approximate history can be stated as follows: These move from computability to generality. The computational sledgehammer par excellence is the spectral sequence; these are essential in the Cartan-Eilenberg and Tohoku approaches where they are needed, for instance, to compute the derived functors of a composition of two functors. Spectral sequences are less essential in the derived category approach, but still play a role whenever concrete computations are necessary. There have been attempts at 'non-commutative' theories which extend first cohomology as torsors (important in Galois cohomology). ## References • Henri Cartan, Samuel Eilenberg, Homological algebra. With an appendix by David A. Buchsbaum. Reprint of the 1956 original. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1999. xvi+390 pp. ISBN 0-691-04991-2 • Alexander Grothendieck, Sur quelques points d'algèbre homologique. Tôhoku Math. J. (2) 9, 1957, 119--221 • Saunders Mac Lane, Homology. Reprint of the 1975 edition. Classics in Mathematics. Springer-Verlag, Berlin, 1995. x+422 pp. ISBN 3-540-58662-8 • Peter Hilton; Stammbach, U. A course in homological algebra. Second edition. Graduate Texts in Mathematics, 4. Springer-Verlag, New York, 1997. xii+364 pp. ISBN 0-387-94823-6 • Gelfand, Sergei I.; Yuri Manin, Methods of homological algebra. Translated from Russian 1988 edition. Second edition. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2003. xx+372 pp. ISBN 3-540-43583-2 • Gelfand, Sergei I.; Yuri Manin, Homological algebra. Translated from the 1989 Russian original by the authors. Reprint of the original English edition from the series Encyclopaedia of Mathematical Sciences (Algebra, V, Encyclopaedia Math. Sci., 38, Springer, Berlin, 1994). Springer-Verlag, Berlin, 1999. iv+222 pp. ISBN 3-540-65378-3 • Weibel, Charles A. (1994), An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, 38, Cambridge University Press, ISBN 978-0-521-55987-4, OCLC 36131259, MR1269324 Publicité ▼ Contenu de sensagent • définitions • synonymes • antonymes • encyclopédie • definition • synonym Publicité ▼ dictionnaire et traducteur pour sites web Alexandria Une fenêtre (pop-into) d'information (contenu principal de Sensagent) est invoquée un double-clic sur n'importe quel mot de votre page web. LA fenêtre fournit des explications et des traductions contextuelles, c'est-à-dire sans obliger votre visiteur à quitter votre page web ! Essayer ici, télécharger le code; Solution commerce électronique Augmenter le contenu de votre site Ajouter de nouveaux contenus Add à votre site depuis Sensagent par XML. Parcourir les produits et les annonces Obtenir des informations en XML pour filtrer le meilleur contenu. Indexer des images et définir des méta-données Fixer la signification de chaque méta-donnée (multilingue). Renseignements suite à un email de description de votre projet. Jeux de lettres Les jeux de lettre français sont : ○   Anagrammes ○   jokers, mots-croisés ○   Lettris ○   Boggle. Lettris Lettris est un jeu de lettres gravitationnelles proche de Tetris. Chaque lettre qui apparaît descend ; il faut placer les lettres de telle manière que des mots se forment (gauche, droit, haut et bas) et que de la place soit libérée. boggle Il s'agit en 3 minutes de trouver le plus grand nombre de mots possibles de trois lettres et plus dans une grille de 16 lettres. Il est aussi possible de jouer avec la grille de 25 cases. Les lettres doivent être adjacentes et les mots les plus longs sont les meilleurs. Participer au concours et enregistrer votre nom dans la liste de meilleurs joueurs ! Jouer Dictionnaire de la langue française Principales Références La plupart des définitions du français sont proposées par SenseGates et comportent un approfondissement avec Littré et plusieurs auteurs techniques spécialisés. Le dictionnaire des synonymes est surtout dérivé du dictionnaire intégral (TID). L'encyclopédie française bénéficie de la licence Wikipedia (GNU). Changer la langue cible pour obtenir des traductions. Astuce: parcourir les champs sémantiques du dictionnaire analogique en plusieurs langues pour mieux apprendre avec sensagent. 3554 visiteurs en ligne calculé en 0,047s Je voudrais signaler : section : une faute d'orthographe ou de grammaire un contenu abusif (raciste, pornographique, diffamatoire) une violation de copyright une erreur un manque autre merci de préciser : allemand anglais arabe bulgare chinois coréen croate danois espagnol espéranto estonien finnois français grec hébreu hindi hongrois islandais indonésien italien japonais letton lituanien malgache néerlandais norvégien persan polonais portugais roumain russe serbe slovaque slovène suédois tchèque thai turc vietnamien allemand anglais arabe bulgare chinois coréen croate danois espagnol espéranto estonien finnois français grec hébreu hindi hongrois islandais indonésien italien japonais letton lituanien malgache néerlandais norvégien persan polonais portugais roumain russe serbe slovaque slovène suédois tchèque thai turc vietnamien
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8795349597930908, "perplexity": 1783.1995393452578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141185851.16/warc/CC-MAIN-20201126001926-20201126031926-00583.warc.gz"}
http://cerco.cs.unibo.it/changeset/3358/Papers/itp-2013
# Changeset 3358 for Papers/itp-2013 Ignore: Timestamp: Jun 14, 2013, 4:43:46 PM (7 years ago) Message: ... Location: Papers/itp-2013 Files: 2 edited ### Legend: Unmodified r3357 Let's consider a generic unstructured language already equipped with a small step structured operational semantics (SOS). We introduce a deterministic labelled transition system~\cite{LTS} $(S,\Lambda,\to)$ deterministic labelled transition system $(S,\Lambda,\to)$ that refines the SOS by observing function calls/returns and the beginning of basic blocks. We achieve this by independently proving the three properties for each compiler pass. The first property is standard and can be proved by means of a forward simulation argument (see for example~\cite{compcert}) that runs like this. The first property is standard and can be proved by means of a forward simulation argument (see for example~\cite{compcert3}) that runs like this. First a relation between the corresponding source and target states is defined. Then a lemma establishes
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881579279899597, "perplexity": 943.8556980922997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888402.81/warc/CC-MAIN-20201025070924-20201025100924-00649.warc.gz"}
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_Chem1_(Lower)/15%3A_Thermodynamics_of_Chemical_Equilibria/15.05%3A_Thermodynamics_of_Mixing_and_Dilution
# 15.5: Thermodynamics of Mixing and Dilution $$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$$$\newcommand{\AA}{\unicode[.8,0]{x212B}}$$ The free energy of a pure liquid or solid at 1 atm pressure is just its molar free energy of formation ΔG multiplied by the number of moles present. For gases and substances in solution, we have to take into account the concentration (which, in the case of gases, is normally expressed in terms of the pressure). We know that the lower the concentration, the greater the entropy, and thus the smaller the free energy. The following excerpt from this lesson serves as the starting point for the rest of the present lesson. ## Entropy Depends on Concentration As a substance becomes more dispersed in space, the thermal energy it carries is also spread over a larger volume, leading to an increase in its entropy. Because entropy, like energy, is an extensive property, a dilute solution of a given substance may well possess a smaller entropy than the same volume of a more concentrated solution, but the entropy per mole of solute (the molar entropy) will of course always increase as the solution becomes more dilute. For gaseous substances, the volume and pressure are respectively direct and inverse measures of concentration. For an ideal gas that expands at a constant temperature (meaning that it absorbs heat from the surroundings to compensate for the work it does during the expansion), the increase in entropy is given by $\Delta S = R\ln \left( \dfrac{V_2}{V_1} \right) \label{2-4}$ (If the gas is allowed to cool during the expansion, the relation becomes more complicated and will best be discussed in a more advanced course.) Because the pressure of a gas is inversely proportional to its volume, we can easily alter the above relation to express the entropy change associated with a change in the pressure of a perfect gas: $\Delta S = R\ln \left( \dfrac{P_1}{P_2} \right) \label{2-5}$ Expressing the entropy change directly in concentrations $$c$$, we have the similar relation $\Delta S = R\ln \left( \dfrac{C_1}{C_2} \right) \label{2-6}$ Although these equations strictly apply only to perfect gases and cannot be used at all for liquids and solids, it turns out that in a dilute solution, the solute can often be treated as a gas dispersed in the volume of the solution, so the last equation can actually give a fairly accurate value for the entropy of dilution of a solution. We will see later that this has important consequences in determining the equilibrium concentrations in a homogeneous reaction mixture. ## The Gibbs energy of a Gas: standard states The pressure of a perfect gas does not affect its enthalpy, but it does affect the entropy (box at left), and thus, through the –TΔS term, the free energy. When the pressure of such a gas changes from $$P_1$$ to $$P_2$$, the Gibbs energy change is $\Delta G = \Delta H - T \Delta S = 0 - RT \ln \left( \dfrac{P_1}{P_2} \right) \label{4.8}$ How can we evaluate the free energy of a specific sample of a gas at some arbitrary pressure? First, recall that the standard molar free energy G° that you would look up in a table refers to a pressure of 1 atm. The free energy per mole of our sample is just the sum of this value and any change in free energy that would occur if the pressure were changed from 1 atm to the pressure of interest $G = G^o + RT \ln \left( \dfrac{P_1}{1\; atm} \right) \label{4.9}$ which we normally write in abbreviated form $G = G^o + RT \ln P \label{4-10}$ ##### Escaping Tendency The term escaping tendency is not commonly used in traditional thermodynamics because it is essentially synonymous with the free energy, but it is worth knowing because it helps us appreciate the physical significance of free energy in certain contexts. The higher the pressure of a gas, the greater will be the tendency of its molecules to leave the confines of the container; we will call this the escaping tendency. The above equation tells us that the pressure of a gas is a directly observable measure of its free energy (G, not G°!). Combining these two ideas, we can say that the free energy of a gas is also a measure of its escaping tendency. ## From Gases to Solutions: Mixing and Dilution All substances, given the opportunity to form a homogeneous mixture with other substances, will tend to become more dilute. This can be rationalized simply from elementary statistics; there are more equally probable ways of arranging one hundred black marbles and one hundred white marbles, than two hundred marbles of a single color. For massive objects like marbles this has nothing to do with entropy, of course. However, when we are dealing with huge numbers of molecules capable of storing, exchanging and spreading thermal energy, mixing and expansion are definitely entropy-driven processes. It can be argued, in fact, that mixing and expansion are really very similar; after all, when we mix two gases, each is expanding into the space formerly occupied exclusively by the other. Suppose, for example, that we have a gas initially confined to one half of a box, and we then remove the barrier so that it can expand into the full volume of the container. We know that the entropy of the gas will increase as the thermal energy of its molecules spreads into the enlarged space; the actual increase, according to Equations $$\ref{2-4}$$ and $$\ref{2-5}$$ above, is $$R \ln 2$$. And from Equation $$\ref{4-10}$$, the change in $$G$$ will be $$–RT \ln 2$$. Now let us repeat the experiment, but starting this time with "red" molecules in one half of the container and "blue" ones in the right half. Because we now have two gases expanding into twice their initial volumes, the changes in S and G will be twice as great: $ΔS = 2 R \ln 2$ $ΔG = –2 RT \ln 2$ However, notice that although each gas underwent an expansion, the overall process in this case is equivalent to mixing. What is true for gaseous molecules can, in principle, apply also to solute molecules dissolved in a solvent. An important qualification here is that the solution must be an ideal one, meaning that the strength of interactions between all type of molecules (solutes A and B, and the solvent) must be the same. Remember that the enthalpy associated with the expansion of a perfect gas is by definition zero. In contrast, the $$ΔH_{mixing}$$ of two liquids or of dissolving a solute in a solvent have finite values which may limit the miscibility of liquids or the solubility of a solute. Given this proviso, we can define the Gibbs energy of dilution or mixing by substituting this equation into the definition of ΔG: $\Delta G_{dilution} = \Delta H_{dilution} - RT\ln \left( \dfrac{C_1}{C_2} \right) \label{4-11}$ If the substance in question forms an ideal solution with the other components, then $$ΔH_{diution}$$ is zero, and we can write $\Delta G_{dilution} = RT\ln \left( \dfrac{C_2}{C_1} \right) \label{4-12}$ These relations tell us that the dilution of a substance from an initial concentration $$C_1$$ to a more dilute concentration $$C_2$$ is accompanied by a decrease in the free energy, and thus will occur spontaneously. By the same token, the spontaneous “un-dilution” of a solution will not occur (we do not expect the tea to diffuse back into the tea bag!) However, un-dilution can be forced to occur if some means can be found to supply to the system an amount of energy (in the form of work) equal to $$\Delta G_{dilution}$$. An important practical example of this is the metabolic work performed by the kidneys in concentrating substances from the blood for excretion in the urine. To find the Gibbs energy of a solute at some arbitrary concentration, we proceed in very much the same way as we did for a gas: we take the sum of the standard free energy plus any change in the free energy that would accompany a change in concentration from the standard state to the actual state of the solution. From Equation $$\ref{4-12}$$ it is easy to derive an expression analogous to Equation $$\ref{4-10}$$: $G = G^o + RT \ln C \label{4-13}$ which gives the free energy of a solute at some arbitrary concentration $$C$$ in terms of its value $$G^o$$ in its standard state. Although Equation $$\ref{4-13}$$ has the same simple form as Equation $$\ref{4-10}$$, its practical application is fraught with difficulties, the major one being that it does not usually give values of $$G$$ that are consistent with experiment, especially for solutes that are ionic or are slightly soluble. This is because most solutions (especially those containing dissolved ions) are far from ideal; intermolecular interactions between solute molecules and between solute and solvent bring back the enthalpy term that we left out in deriving Equation $$\ref{4-12}$$. In addition, the structural organization of the solution becomes concentration dependent, so that the entropy depends on concentration in a more complicated way than is implied by the concentration analog of Equation $$\ref{4-12}$$. ### Chemical Reactions and Mixing We characterize the tendency for a chemical reaction A → B to occur at constant temperature and pressure by the value of its standard Gibbs energy change ΔG°. If this quantity is negative, we know that the reaction will take place spontaneously. However, have you ever wondered why it is that substance A is not completely transformed into B if the latter is thermodynamically more stable? The answer is that if the reaction takes place in a single phase (gas or liquid), something else is going on: A and B are mixing together, and this process creates its own Gibbs energy change $$ΔG_{mixing}$$. For a simple binary mixture of A and B (without any reaction), the changes in $$S$$ and $$G$$ can be represented by these simple plots: We will not try to prove it here, but it turns out that no matter how much lower the Gibbs energy of the products compared to that of the reactants, the free energy of the system can always be reduced even more if some of the reactants remain in the solution to contribute a ΔGmixing term. This is the reason that a plot of G as a function of the composition of such a system has a minimum at some point short of complete conversion. Diffusion refers to the transport of a substance across a concentration gradient. The direction is always toward the region of lower concentration. You should now see that from a thermodynamic standpoint, these processes are identical in that they both represent the spontaneous "escape" of molecules from a region of higher concentration (lower entropy, higher Gibbs energy) to a region of lower concentration. ##### Activity and Standard State of the Solute Instead of complicating G° by trying to correct for all of these effects, chemists have chosen to retain its simple form by making a single small change in the form of $$\ref{4-13}$$: $G = G^o + RT \ln a \label{4-14}$ This equation is guaranteed to work, because a, the activity of the solute, is its thermodynamically effective concentration. The relation between the activity and the concentration is given by $a = \gamma c \label{4-15}$ where $$\gamma$$ is the activity coefficient. As the solution becomes more dilute, the activity coefficient approaches unity: $\lim _{c \rightarrow 0} \gamma =1 \label{4.16}$ The price we pay for this simplicity is that the relation between the concentration and the activity at higher concentrations can be quite complicated, and must be determined experimentally for every different solution. The question of what standard state we choose for the solute (that is, at what concentration is G° defined, and in what units is it expressed?) is one that you will wish you had never asked. We might be tempted to use a concentration of 1 molar, but a solution this concentrated would be subject to all kinds of intermolecular interaction effects, and would not make a very practical standard state. These effects could be eliminated by going to the opposite extreme of an “infinitely dilute” solution, but by Equation $$\ref{4-12}$$ this would imply a free energy of minus infinity for the solute, which would be awkward. Chemists have therefore agreed to define the standard state of a solute as one in which the concentration is 1 molar, but all solute-solute interactions are magically switched off, so that $$\gamma$$ is effectively unity. Since this is impossible, no solution corresponding to this standard state can actually exist, but this turns out to be only a small drawback, and seems to be the best compromise between convenience, utility, and reality. This page titled 15.5: Thermodynamics of Mixing and Dilution is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9043046832084656, "perplexity": 333.1887504270585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00288.warc.gz"}
http://mathhelpforum.com/algebra/26556-find-arithmetic-progression.html
# Thread: Find the arithmetic progression 1. ## Find the arithmetic progression 1. Find the arithmetic progression where t5 = 17 and t12 = 52 2. Find t6 of an arithmetic sequence given that t3 = 5.6 and t12 = 7 3. Find the 7th term of the arithmetic progression whose 5th term is m and whose 11th term is n. 4. Find the value of p so that p + 5, 4p + 3, 8p -2 will form successive terms of an arithmetic progression 2. Hello, nerdzor! This should get you started . . . 1. Find the arithmetic progression where: $t_5 = 17\text{ and }t_{12} = 52$ $\text{The }n^{th}\text{ term is: }\;t_n \;=\;t_1 + (n-d)d$ . . $\text{where }t_1\text{ is the first term and }d\text{ is the common difference.}$ We have: . $\begin{array}{cccccc}t_5 & = & t_1 + 4d &^ = & 17 & [1] \\ t_{12} &=& t_1 + 11d & = & 52 & [2] \end{array}$ Subtract [1] from [2]: . $7d \:=\:35\quad\Rightarrow\quad\boxed{ d \:=\:5}$ Substitute into [1]: . $t_1 + 4(5) \:=\:17\quad\Rightarrow\quad\boxed{ t_1 \:=\:-3}$ The progression is: . $-3,\:2,\:7,\:12,\:17,\:22,\:27,\:32.\:37.\:42.\:47, \;52,\:\cdots$ 4. Find the value of $p$ so that: $p + 5,\;4p + 3,\;8p -2$ are successive terms of an arithmetic progression. Consecutive terms will have a common difference. Hence: . $(4p+3) - (p + 5) \;=\;(8p-2) - (4p+3)$ . . And solve for $p\!:\;\;\boxed{p \:=\:3}$ [The terms are: . $8,\:15,\text{ and }22$.]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8949645161628723, "perplexity": 1344.2881036790484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296931.2/warc/CC-MAIN-20160823195816-00060-ip-10-153-172-175.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/41139/definition-of-quasi-static-assumption/41235
# Definition of quasi-static assumption I'm reading this paper, where atop pg 3, the author mentions the "quasi-static" assumption for earth displacements which leads to the mechanical deformation of a fluid-solid system to be governed by: $$\nabla \sigma + \rho_h g = 0$$. What exactly does "quasi-static" mean in this context? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914237916469574, "perplexity": 3555.7833425765057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888972.38/warc/CC-MAIN-20140722025808-00032-ip-10-33-131-23.ec2.internal.warc.gz"}