https://www.jiskha.com/questions/1508465/hey-can-someone-help-me-with-these-i-need-help-consider-the-curve-given-by-x2-sin-xy
# calculus

hey can someone help me with these? i need help.

Consider the curve given by x^2 + sin(xy) + 3y^2 = C, where C is a constant. The point (1, 1) lies on this curve. Use the tangent line approximation to approximate the y-coordinate when x = 1.01. Question 19 options: 0.996, 1, 1.004, Cannot be determined, 1.388

The volume of an open-topped box with a square base is 245 cubic centimeters. Find the height, in centimeters, of the box that uses the least amount of material. Options: 7.883 centimeters, 6 centimeters, 3.942 centimeters, 3 centimeters, 2 centimeters

1. x^2 + sin(xy) + 3y^2 = C. Differentiate implicitly, remembering the product rule inside sin(xy): 2x + cos(xy)·(y + x·dy/dx) + 6y·dy/dx = 0, so dy/dx = -(2x + y·cos(xy)) / (x·cos(xy) + 6y). At (1, 1): dy/dx = -(2 + cos 1)/(cos 1 + 6) ≈ -2.540/6.540 ≈ -0.3884. With dx = 0.01, dy ≈ -0.00388, so y ≈ 1 - 0.00388 ≈ 0.996.
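The tangent-line approximation above is easy to double-check numerically; a minimal sketch (note the y·cos(xy) term, which the product rule inside sin(xy) requires):

```python
import math

# Implicit differentiation of x^2 + sin(xy) + 3y^2 = C gives
# 2x + cos(xy)*(y + x*dy/dx) + 6y*dy/dx = 0, hence:
def dydx(x, y):
    return -(2 * x + y * math.cos(x * y)) / (x * math.cos(x * y) + 6 * y)

x0, y0 = 1.0, 1.0
y_approx = y0 + dydx(x0, y0) * (1.01 - x0)  # tangent line approximation
print(round(y_approx, 3))  # 0.996
```

which matches the first answer choice.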
https://math.stackexchange.com/questions/3269825/reduced-multiplicative-residue-modulo-p?noredirect=1
# Reduced multiplicative residue modulo p [duplicate] I would like it if someone could provide, or show me where I can find, the proof that $\mathbb{Z}_n^*$ is cyclic when $n$ is prime. In particular I'm after a simple proof that involves Fermat's little theorem. I would appreciate it; best regards. • Not a duplicate, as I'm looking for a specific and less general result. In particular I'm trying to avoid theory I do not know. – Dead_Ling0 Jun 21 '19 at 16:05 • Well, at some point you'll need to use the fact that, $\pmod p$, a non-zero polynomial of degree $d$ can have no more than $d$ roots. That follows instantly from the fact that the integers $\pmod p$ form a field. I don't think you'll find a short cut round that fact. – lulu Jun 21 '19 at 16:12 • @lulu "instantly" is a bit of a stretch (at this level), but it does follow easily by inductively applying the Factor Theorem, e.g. here, or by using the fact that $x-a$ is prime over a domain. – Bill Dubuque Jun 21 '19 at 16:33
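While the question asks for a proof, the claim itself can be sanity-checked by brute force: the sketch below searches for a generator of $\mathbb{Z}_p^*$ for a few small primes (an illustration of the statement, not the requested proof):

```python
# Brute-force check (not a proof) that (Z/pZ)* is cyclic for small primes:
# a generator g is an element whose powers hit every nonzero residue mod p.

def is_generator(g, p):
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

def has_primitive_root(p):
    return any(is_generator(g, p) for g in range(2, p))

primes = [3, 5, 7, 11, 13, 17, 19, 23]
print(all(has_primitive_root(p) for p in primes))  # True
```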
http://mathsvideos.net/tag/even-numbers/
# How to add up all the even numbers from 0 onwards quickly

In this post, I'll be demonstrating how you can add up all the even numbers from 0 onwards. In each of the diagrams below (drawn for n = 2, n = 4, n = 6 and n = 8 in turn), the height of the rectangle is (n+2) and its length is n/2. The area shaded in red, which is in fact equal to all the even numbers from 0 to n added up, is: $\left\{ \left( n+2 \right) \cdot \frac { n }{ 2 } \right\} \cdot \frac { 1 }{ 2 } =\frac { n\left( n+2 \right) }{ 4 }$

What we've discovered: We've discovered that a simple formula can be used to add up all the even numbers from 0 to "n", whereby "n" is an even number. 
This formula is: $\left\{ \left( n+2 \right) \cdot \frac { n }{ 2 } \right\} \cdot \frac { 1 }{ 2 } =\frac { n\left( n+2 \right) }{ 4 }$

Alternative method: There is also an alternative formula you can use to add up even numbers from 0 onwards.

# Multiplying Even and Odd Numbers

Today I'm going to be showing you what would happen if you were to multiply: a) an even number by an even number; b) an odd number by an odd number; c) an even number by an odd number. Firstly, let us define what an even number is: an even number can be described using the expression $2n$, whereby $n$ would be a whole number ranging from 0 upwards. Next, let us define what an odd number is: an odd number can be described using the expression $2n+1$, where, similarly (as is the case with even numbers), $n$ would be a whole number ranging from 0 upwards. Now that we've defined how both even numbers and odd numbers can be described in terms of mathematical expressions, let's focus our attention on multiplying even numbers by even numbers, odd numbers by odd numbers, and even numbers by odd numbers…

Multiplying even numbers by even numbers: Let's produce two whole numbers which could be equal to one another or not… Let's call these numbers ${ n }_{ 1 }$ and ${ n }_{ 2 }$. Using these two whole numbers, we can multiply two unknown even numbers by each other in such a manner: $2{ n }_{ 1 }\cdot 2{ n }_{ 2 }$. This invariably gives us the result $4{ n }_{ 1 }{ n }_{ 2 }=2\cdot 2{ n }_{ 1 }{ n }_{ 2 }$. Now, the product ${ n }_{ 1 }{ n }_{ 2 }$ is a whole number, and since this is the case, an even number multiplied by an even number must produce an even number. Let's not forget that even numbers are multiples of 2. 
Multiplying odd numbers by odd numbers: Once again, let's come up with two whole numbers which could be equal to one another or not… These whole numbers will be ${ n }_{ 3 }$ and ${ n }_{ 4 }$. Two odd numbers multiplied by one another produce the expression: $\left( 2{ n }_{ 3 }+1 \right) \left( 2{ n }_{ 4 }+1 \right)$. If we expand this, we get: $4{ n }_{ 3 }{ n }_{ 4 }+2{ n }_{ 3 }+2{ n }_{ 4 }+1$, which we can rearrange as: $2\left( 2{ n }_{ 3 }{ n }_{ 4 }+{ n }_{ 3 }+{ n }_{ 4 } \right) +1$. Since the expression $2{ n }_{ 3 }{ n }_{ 4 }+{ n }_{ 3 }+{ n }_{ 4 }$ must be a whole number, you are forced to conclude that an odd number multiplied by an odd number produces an odd number.

Multiplying even numbers by odd numbers: We will, for the last time, come up with two whole numbers ${ n }_{ 5 }$ and ${ n }_{ 6 }$. An even number and an odd number multiplied by one another can be shown using the expression: $2{ n }_{ 5 }\cdot \left( 2{ n }_{ 6 }+1 \right)$. Without even expanding, we can conclude that an even number multiplied by an odd number produces an even number: the product has $2{ n }_{ 5 }$ as a factor, and even numbers are all multiples of 2.

Ok… So, let's summarise what we've discovered: i) an even number multiplied by an even number produces an even number; ii) an odd number multiplied by an odd number produces an odd number; iii) an even number multiplied by an odd number produces an even number. Knowing this, we can further strengthen our mathematical reasoning. 🙂
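Both results on this page — the three parity rules and the earlier summation formula n(n+2)/4 — are easy to confirm by brute force over small ranges; a quick sketch:

```python
# Exhaustive small-range check of the parity rules and of the
# even-number summation formula: 0 + 2 + ... + n == n*(n+2)/4.

evens = [2 * n for n in range(50)]
odds = [2 * n + 1 for n in range(50)]

even_even = all((a * b) % 2 == 0 for a in evens for b in evens)
odd_odd = all((a * b) % 2 == 1 for a in odds for b in odds)
even_odd = all((a * b) % 2 == 0 for a in evens for b in odds)

sum_formula = all(sum(range(0, n + 1, 2)) == n * (n + 2) // 4
                  for n in range(0, 101, 2))

print(even_even, odd_odd, even_odd, sum_formula)  # True True True True
```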
http://math.stackexchange.com/questions/67969/linear-diophantine-equation-100x-23y-19
# Linear diophantine equation $100x - 23y = -19$

I need help with this equation: $$100x - 23y = -19.$$ When I plug this into Wolfram|Alpha, one of the integer solutions is $x = 23n + 12$, where $n$ ranges over all the integers, but I can't seem to figure out how they got to that answer. - If you type in ExtendedGCD[100, 23], Wolfram Alpha will respond $\{1, \{3, -13\}\}$, which means that the GCD of $100$ and $23$ is $1$ and that $3(100)-13(23) = 1$. Multiply both sides by $-19$ and you get a particular solution. Don't abuse this function! You still need to learn how to do it correctly. – Steven Gregory May 15 '15 at 21:29 $100x -23y = -19$ if and only if $23y = 100x+19$, if and only if $100x+19$ is divisible by $23$. Using modular arithmetic, you have \begin{align*} 100x + 19\equiv 0\pmod{23}&\Longleftrightarrow 100x\equiv -19\pmod{23}\\ &\Longleftrightarrow 8x \equiv 4\pmod{23}\\ &\Longleftrightarrow 2x\equiv 1\pmod{23}\\ &\Longleftrightarrow x\equiv 12\pmod{23}. \end{align*} so $x=12+23n$ for some integer $n$. - Wow, thanks for the fast response! I followed you all the way up until '$100x \equiv -19 \pmod{23}$' went to '$8x \equiv 4 \pmod{23}$'. Can you explain that a little more? – Mike Sep 27 '11 at 18:11 $100=4\cdot 23+8$ and $-19=4+(-1)\cdot 23$ – Ross Millikan Sep 27 '11 at 18:18 Using the Euclid-Wallis algorithm (described below) $$\begin{array}{r} &&4&2&1&7\\\hline 1&0&1&-2&3&-23\\ 0&1&-4&9&-13&100\\ 100&23&8&7&1&0\\ &&&&{\uparrow}&{\star} \end{array}$$ looking below the horizontal line, the top row times $100$ plus the middle row times $23$ equals the bottom row. Therefore, the arrow column says that $$3\cdot100-13\cdot23=1$$ Furthermore, the top and middle numbers in each column are relatively prime. Therefore, the star column gives the smallest combination of $100$ and $23$ that equals $0$. Add arbitrary multiples of the star column to a particular multiple of the arrow column to get all solutions for a particular problem. 
Since we want a result of $-19$, add arbitrary multiples of the star column to $-19$ times the arrow column: \begin{align} -19&=(-19\cdot3-23k)100+(-19\cdot-13+100k)23\\ &=(-57-23k)100+(247+100k)23 \end{align} This gives all the integer solutions. Thus, $x=-57-23k=12-23(k+3)=12+23n$ where $n=-k-3$. Euclid-Wallis Algorithm This algorithm computes $\gcd(m,n)$, solves the Diophantine equation $mx + ny = \gcd(m,n)$, and yields the continued fraction for $m/n$. Start with the two columns $$\begin{array}{r} 1&0\\ 0&1\\ m&n\\ \end{array}\tag{1}$$ Above each successive column write down the floor of the quotient of the base of the previous column divided into the base of the column before that, then compute the next column by subtracting that number times the previous column from the column before that. Let us work an example; $m = 17, n = 23$: $$\newcommand{\nextq}[2]{\leftarrow\lfloor\color{##00A000}{#1}/\color{##0000FF}{#2}\rfloor} \newcommand{\euclid}[3]{{\leftarrow}\color{##00A000}{#1}-\color{orange}{#3}\cdot\color{##0000FF}{#2}} \begin{array}{rcl} \begin{array}{l} \color{#C00000}{\text{Above each successive column}}\\ \text{write down the floor of the}\\ \text{quotient of }\color{#0000FF}{\text{the base of the}}\\ \color{#0000FF}{\text{previous column}}\text{ divided into }\color{#00A000}{\text{the}}\\ \color{#00A000}{\text{base of the column before that}} \end{array} && \begin{array}{l} \text{Compute }\color{#C00000}{\text{the next column}}\text{ by}\\ \text{subtracting }\color{orange}{\text{that number}}\text{ times}\\ \color{#0000FF}{\text{the previous column}}\text{ from }\color{#00A000}{\text{the}}\\ \color{#00A000}{\text{column before that}}\\ \text{} \end{array}\\ \begin{array}{rrrl} & & \color{#C00000}{0} & \nextq{17}{23}\\ \hline 1 & 0 \\ 0 & 1 \\ \color{#00A000}{17} & \color{#0000FF}{23} \end{array} &\rightarrow& \begin{array}{rrrl} & & \color{orange}{0}\\ \hline \color{#00A000}{1} & \color{#0000FF}{0} & \color{#C00000}{1} & \euclid{1}{0}{0} \\ 
\color{#00A000}{0} & \color{#0000FF}{1} & \color{#C00000}{0} & \euclid{0}{1}{0} \\ \color{#00A000}{17} & \color{#0000FF}{23} & \color{#C00000}{17} & \euclid{17}{23}{0} \end{array}\\ &\swarrow&\\ \begin{array}{rrrrl} & & 0 & \color{#C00000}{1} & \nextq{23}{17} \\ \hline 1 & 0 & 1 \\ 0 & 1 & 0 \\ 17 & \color{#00A000}{23} & \color{#0000FF}{17} \end{array} &\rightarrow& \begin{array}{rrrrl} & & 0 & \color{orange}{1} \\ \hline 1 & \color{#00A000}{0} & \color{#0000FF}{1} & \color{#C00000}{-1} & \euclid{0}{1}{1} \\ 0 & \color{#00A000}{1} & \color{#0000FF}{0} & \color{#C00000}{1} & \euclid{1}{0}{1} \\ 17 & \color{#00A000}{23} & \color{#0000FF}{17} & \color{#C00000}{6} & \euclid{23}{17}{1} \end{array}\\ &\swarrow&\\ \begin{array}{rrrrrl} & & 0 & 1 & \color{#C00000}{2} &\nextq{17}{6} \\ \hline 1 & 0 & 1 & -1 \\ 0 & 1 & 0 & 1 \\ 17 & 23 & \color{#00A000}{17} & \color{#0000FF}{6} \end{array} &\rightarrow& \begin{array}{rrrrrl} & & 0 & 1 & \color{orange}{2} \\ \hline 1 & 0 & \color{#00A000}{1} & \color{#0000FF}{-1} & \color{#C00000}{3} & \euclid{1}{-1}{2} \\ 0 & 1 & \color{#00A000}{0} & \color{#0000FF}{1} & \color{#C00000}{-2} & \euclid{0}{1}{2} \\ 17 & 23 & \color{#00A000}{17} & \color{#0000FF}{6} & \color{#C00000}{5} & \euclid{17}{6}{2} \end{array} \end{array}\\ \vdots$$ $$\begin{array}{rrrrrrr} &&\color{orange}{0}&\color{orange}{1}&\color{orange}{2}&\color{orange}{1}&\color{orange}{5}\\ \hline \color{#00A000}{1}&\color{#00A000}{0}& 1&-1& 3&\color{#C00000}{-4}&\color{#0000FF}{23}\\ \color{#00A000}{0}& \color{#00A000}{1}& 0&1&-2&\color{#C00000}{3}&\color{#0000FF}{-17}\\ \color{#00A000}{17}&\color{#00A000}{23}&17& 6& 5& \color{#C00000}{1}&\color{#0000FF}{0} \end{array}\tag{2}$$ The red column (preceding the blue one with $0$ at its base) has $\gcd(m,n)$ at its base. Also, $m$ times the top row plus $n$ times the middle row equals the bottom row. Thus, the red column (with $\gcd(m,n)$ at its base) has the coefficients for $mx + ny = \gcd(m,n)$ in the top two rows. 
Also, the blue column (with $0$ at its base) has the smallest non-zero solution to $mx + ny = 0$. Multiples of this solution can be added to a particular solution of $mx + ny = k$ to get all solutions. The orange quotients above the columns yield the continued fraction for $m/n$. Thus, $$\gcd(\color{#00A000}{17},\color{#00A000}{23}) = \color{#C00000}{1}\\ (\color{#C00000}{-4}\color{#0000FF}{+23}k)\color{#00A000}{17} + (\color{#C00000}{3}\color{#0000FF}{-17}k)\color{#00A000}{23} = \color{#C00000}{1}$$ and the continued fraction for $\color{#00A000}{17}/\color{#00A000}{23}$ is $[\color{orange}{0},\color{orange}{1},\color{orange}{2},\color{orange}{1},\color{orange}{5}]$. Euclidean Algorithm (plus bookkeeping) The algorithm above is simply the Euclidean algorithm in the top and bottom rows, with some bookkeeping in the two middle rows. The quotients are in the top row and the remainders (which, as dictated by the Euclidean Algorithm, later become divisors and then dividends) are in the bottom row. In $(2)$, the first dividend is $17$ and the first divisor is $23$. The first quotient is $0$ and the remainder is $17$. In the next pass, the dividend is $23$ (which was the previous divisor) and the divisor is $17$ (which was the previous remainder). The second quotient is $1$ and the remainder is $6$. In the third pass the dividend is $17$ and the divisor is $6$, which yields a quotient of $2$ and a remainder of $5$. This proceeds until a remainder of $0$ appears. The algorithm has to stop here since the next divisor would be $0$. The bookkeeping in the middle two rows mimics the computation performed in the bottom row. That is, below the line, the third column is first column minus $0$ times the second column. The fourth column is the second column minus $1$ times the third column. The fifth column is the third column minus two times the fourth column. 
This bookkeeping assures that the first row below the line times $17$ plus the second row times $23$ equals the bottom row. This allows us to back out the Euclidean algorithm at the same time we are performing it. - Rob, this is an awesome reference. Thank you. – Pedro Tamaroff May 17 '13 at 17:45 See here for a natural viewpoint as elimination by row-reduction. This is a special case of general methods (Hermite / Smith) for triangularizing or diagonalizing matrices to normal form over Euclidean domains. – Bill Dubuque Mar 4 '15 at 15:02 HINT $\displaystyle\rm\ \ mod\ 23:\ x\: \equiv\: \frac{-19}{100}\: \equiv\: \frac{4}{100}\: \equiv\: \frac{1}{25}\: \equiv\: \frac{24}{2}\: \equiv\: 12\:,\$ i.e. $\rm\ x\: =\: 12 + 23\ n\:.$ - Beware: modular fraction arithmetic is well-defined only for fractions with denominator coprime to the modulus. See here for further discussion. – Bill Dubuque Mar 4 '15 at 14:53 If you take the equation mod $23$, you find $8x \equiv 4 \pmod{23}$ and by inspection, this is satisfied by $x \equiv 12 \pmod{23}$. To find this, you use the Extended Euclidean algorithm - I find this version of the Euclid-Wallis Algorithm a bit more user friendly. robjohn did a great job of explaining how it works, so I offer this with no further explanation. Please note that I put the multiplier next to the row that gets multiplied by it. Also note that I multiply and then add, so my multipliers have a different sign than his. It's really the same thing; I just think this way is a bit more transparent. I also omitted the "$0$" row. I don't think it's worth the effort used to calculate it. 
|100   1    0   -4
| 23   0    1   -3
|  8   1   -4
| -1  -3   13
|  1   3  -13

and we conclude that 3(100)-13(23) = 1 - The continued fraction solution goes as follows: Expand 100/23 into a continued fraction (I'm essentially using the GCD algorithm): \begin{align*} \frac{100}{23} & = 4 + \frac{8}{23}\\ & = 4 + \frac{1}{\frac{23}{8}}\\ & = 4 + \cfrac{1}{2 + \cfrac{7}{8}}\\ & = 4 + \cfrac{1}{2 + \cfrac{1}{\frac{8}{7}}}\\ & = 4 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{7}}} \end{align*} Now that all the numerators are $1$, you're done getting the continued fraction, often written as the list of partial quotients like this: $100/23 = [4,2,1,7]$. You can of course check: $4 + 1/(2 + 1/(1 + 1/7)) = 100/23$, the last equation above. You can change the $7$ at the end to $6 + 1/1$ to get an odd number of partial quotients, but you don't have to. Now if you look at $[4,2,1]$, which is the next-to-last convergent (the last being $[4,2,1,7]=100/23$), you get $$4 + \frac{1}{2 + \frac{1}{1}} = \frac{13}{3}$$ The cross-product difference of the numerators and denominators of successive convergents is always $+1$ or $-1$, i.e.: $$100 \cdot 3 - 13 \cdot 23 = 1$$ If we multiply through by $-19$ we get: $$100 \cdot (3 \cdot -19) - 23 \cdot (13 \cdot -19) = -19$$ so a particular solution $x_0,y_0$ is \begin{align*} x_0 & = 3 \cdot -19 = -57\\ y_0 & = 13 \cdot -19 = -247 \end{align*} Since for all integer $t$ $$100 \cdot 23t - 23 \cdot 100t = 0$$ we can add that equation to $$100 \cdot -57 - 23 \cdot -247 = -19$$ and get $$100(23t-57) - 23(100t-247) = -19$$ so the general solution is \begin{align*} x & = 23t - 57\\ y & = 100t - 247 \end{align*} To get the exact Wolfram answer, change variables to $n = t-3$: $$x = 23(n+3)-57 = 23n+69-57 = 23n + 12$$ - Please see this tutorial on how to typeset mathematics on this site. – N. F. Taussig Sep 28 '15 at 18:40 You made a typographical error in your final line. $23n + 69 - 57 = 23n \color{red}{+} 12$. – N. F. 
Taussig Sep 28 '15 at 18:41 Thanks for the note -- hopefully things are ok now – The Headless Mathematician Sep 28 '15 at 18:44 Thanks for the formatting help too. I think you and I tried to add formatting at the same time, but it seems to have ended up ok. – The Headless Mathematician Sep 28 '15 at 18:53 [For the following paragraphs, please refer to the figure at the end of the last paragraph (the figure is also available in PDF).] The manipulations performed from steps (0) to (16) were designed to create the linear system of equations (0a), (5a), (11a) and (16a). The manipulations end when the absolute value of a coefficient of the latest equation added is 1 (see (16a)). Equation (0a) is given. It is possible to infer equations (5a), (11a) and (16a) from (5), (11) and (16) respectively without performing manipulations (0) to (16) directly. In every case, select the smallest absolute value coefficient, generate the next equation by replacing every coefficient with the remainder of the coefficient divided by the selected coefficient (the smallest absolute value coefficient) – do the same with the right-hand constant – and add the new variable whose coefficient is the smallest absolute value coefficient. If the new equation has a greatest common divisor greater than one, divide the equation by the greatest common divisor. Stop when the absolute value of a coefficient of the latest equation added is 1. Then proceed to solve the linear system of equations. - If you put $x=23n+12$ into the equation $100x-23y=-19$, you will have $y$ as a linear function of $n$; then you can plug in various $n$ and find pairs $(x,y)$ that are solutions of the equation: $$x=23n+12 \\100(23n+12)-23y=-19\\100(23n)+1200-23y=-19\\23y=100(23n)+1219$$ divide by $23$: $$y=100n+53$$ so $$n\in \mathbb{Z}\\\left\{\begin{matrix} x=23n+12\\ y=100n+53 \end{matrix}\right.$$ -
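The hand computations in the answers above can be reproduced mechanically; a minimal recursive extended-Euclidean sketch, checked against the thread's particular and general solutions:

```python
# Recursive extended Euclidean algorithm: returns (g, s, t) with
# s*a + t*b == g == gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(100, 23)
print(g, s, t)  # 1 3 -13, i.e. 3*100 - 13*23 = 1

# Verify the general solution derived in the thread: x = 23n + 12, y = 100n + 53.
ok = all(100 * (23 * n + 12) - 23 * (100 * n + 53) == -19 for n in range(-5, 6))
print(ok)  # True
```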
http://mathhelpforum.com/calculus/142062-parametric-equation.html
1. ## parametric equation 5) Consider the parametric equations $x = 12(\cos \theta + \theta \sin \theta)$, $y = 12(\sin \theta - \theta \cos \theta)$. What is the length of the curve for $\theta = 0$ to $\theta = \frac{9\pi}{10}$? How do you do this problem? I am studying for my final and I don't know how to! 2. Originally Posted by nhatie 5) Consider the parametric equations above. What is the length of the curve for $\theta = 0$ to $\theta = \frac{9\pi}{10}$? How do you do this problem? I am studying for my final and I don't know how to! length of the curve $\int_{0}^{\frac{9\pi}{10}} \sqrt{\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2 } d\theta$ 3. Originally Posted by Amer length of the curve $\int_{0}^{\frac{9\pi}{10}} \sqrt{\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2 } d\theta$ So what am I supposed to do next? 4. find $\frac{dx}{d\theta} = 12(-\sin \theta + (\sin \theta + \theta \cos \theta )) = 12\,\theta \cos \theta$ $\frac{dy}{d\theta} = 12(\cos \theta - ( \cos \theta - \theta \sin \theta ))=12\,\theta \sin \theta$ $\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2 = 144 \theta^2 \cos^2 \theta + 144 \theta^2 \sin^2 \theta = 144\; \theta^2 ( \sin ^2 \theta + \cos ^2 \theta)$ it is easy now; hint: $\sin^2 a + \cos ^2 a = 1$ 5. Originally Posted by Amer it is easy now; hint: $\sin^2 a + \cos ^2 a = 1$ I got $\int_0^{9\pi/10} 12\sin x \,dx \Rightarrow -12\cos x \,\big|_0^{9\pi/10}$. Is that right? 6. $\int_{0}^{\frac{9\pi}{10}} \sqrt{144 \theta^2 (\sin ^2 \theta + \cos^2 \theta) } d\theta$ but $\sin ^2 \theta + \cos ^2 \theta = 1$ so $\int_{0}^{\frac{9\pi}{10}} 12\theta d\theta = 12\left(\frac{\theta ^2 }{2}\right) \mid_{0}^{\frac{9\pi}{10}}$ $= 6 (9\pi/10)^2$ 7. 
Originally Posted by Amer $\int_{0}^{\frac{9\pi}{10}} \sqrt{144 \theta^2 (\sin ^2 \theta + \cos^2 \theta) } d\theta$ but $\sin ^2 \theta + \cos ^2 \theta = 1$ so $\int_{0}^{\frac{9\pi}{10}} 12\theta d\theta = 12\left(\frac{\theta ^2 }{2}\right) \mid_{0}^{\frac{9\pi}{10}}$ $6 (9\pi/10)^2$ thank you so much!
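As a numerical cross-check of the closed-form answer $6(9\pi/10)^2$, the arc-length integral can be evaluated by quadrature. The parametrization $x = 12(\cos\theta + \theta\sin\theta)$, $y = 12(\sin\theta - \theta\cos\theta)$ used below is inferred from the derivatives worked out in the thread (the original equation images are missing):

```python
import math

def arc_length(f_dx, f_dy, a, b, n=20000):
    # composite midpoint rule applied to sqrt((dx/dt)^2 + (dy/dt)^2)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += math.hypot(f_dx(t), f_dy(t)) * h
    return total

dx = lambda t: 12 * t * math.cos(t)  # dx/dtheta from the thread
dy = lambda t: 12 * t * math.sin(t)  # dy/dtheta from the thread

L = arc_length(dx, dy, 0.0, 9 * math.pi / 10)
exact = 6 * (9 * math.pi / 10) ** 2
print(abs(L - exact) < 1e-6)  # True
```

Since the integrand simplifies to exactly $12\theta$, the midpoint rule here agrees with the closed form up to floating-point rounding.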
http://cp3-origins.dk/a/1987
## Electrostatics of Coulomb Gas, Lattice Paths, and Discrete Polynuclear Growth

Preprint number: CP3-Origins-2010-9. Authors: Niko Jokela (Technion & University of Haifa), Matti Järvinen (CP3-Origins), and Esko Keski-Vakkuri (Helsinki Institute of Physics). External link: arXiv.org

We study the partition function of a two-dimensional Coulomb gas on a circle, in the presence of external pointlike charges, in a double scaling limit where both the external charges and the number of gas particles are large. Our original motivation comes from studying amplitudes for multi-string emission from a decaying D-brane in the high energy limit. We analyze the scaling limit of the partition function and calculate explicit results. We also consider applications to random matrix theory. The partition functions can be related to random scattering, or to weights of lattice paths in certain growth models. In particular, we consider the discrete polynuclear growth model and use our results to compute the cumulative probability density for the height of long level-1 paths. We also obtain an estimate for an almost certain maximum height.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9390935897827148, "perplexity": 867.2821785274951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00516.warc.gz"}
http://mathoverflow.net/questions/66551/structure-on-the-set-of-elliptic-curves-via-j-invariant
# Structure on the set of elliptic curves via $j$-invariant Let $k$ be an algebraically closed field of characteristic $\neq 2,3$. The $j$-invariant induces a bijection $\{\text{elliptic curves over } k\}/\cong \longrightarrow k.$ Perhaps this is a silly question: I'm curious if there is any meaningful geometric interpretation of the induced ring structure on the set of isomorphism-classes of elliptic curves. The resulting operations can be written down in Weierstrass forms, but the formulas are, of course, not illuminating at all. The zero element is $y^2+y=x^3$. It would be even more interesting to know what structures there are on the category of elliptic curves over $k$, so without modding out isomorphisms. - I don't see why there should be. Over $k$, there's no reason to single out the $j$-invariant over $\frac{aj + b}{cj + d}$ for $ad - bc \neq 0$, is there? The singling out only occurs for integrality reasons and even then I don't think that implies the addition or multiplication is meaningful. –  Qiaochu Yuan May 31 '11 at 14:40 The $j$-invariant actually induces a bijection in all characteristics, you don't need characteristic greater than $3$. But as Qiaochu said, there's no reason why $j(E_1)+j(E_2)$ should have any significance. Working over $\mathbb{C}$, the space of elliptic curves is the space of lattices in $\mathbb{C}$, with isomorphism being the relation $L_1\sim L_2$ if there is a $c\in\mathbb{C}^*$ with $cL_1=L_2$.
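For concreteness, a small sketch (not from the thread; the function name is ours) of the $j$-invariant itself, for a curve in short Weierstrass form $y^2 = x^3 + ax + b$, where $j = 1728 \cdot 4a^3/(4a^3+27b^2)$:

```python
# Hedged sketch: j-invariant of y^2 = x^3 + a*x + b (characteristic != 2, 3),
# defined whenever the discriminant factor 4a^3 + 27b^2 is nonzero.
from fractions import Fraction

def j_invariant(a, b):
    a, b = Fraction(a), Fraction(b)
    disc = 4 * a**3 + 27 * b**2
    if disc == 0:
        raise ValueError("singular curve: 4a^3 + 27b^2 = 0")
    return 1728 * 4 * a**3 / disc

# y^2 = x^3 + 1 has j = 0; y^2 = x^3 + x has j = 1728
```

For instance, completing the square in $y^2+y=x^3$ gives $a=0$, $b=1/4$, so the "zero element" above indeed has $j=0$.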
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270018935203552, "perplexity": 135.84345390551448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098468.93/warc/CC-MAIN-20150627031818-00204-ip-10-179-60-89.ec2.internal.warc.gz"}
https://socratic.org/questions/57e459b17c014902599eb6a9
# How do you find the pattern for a list?

Feb 13, 2018

A few thoughts...

#### Explanation:

I think you are referring to some kind of sequence, but it is not clear what sort of problems you want to address. Let's look at a few sequence types and their associated rules.

**Arithmetic sequence**

An arithmetic sequence has a common difference between terms. So to get from one term to the next you need to add the common difference. We can write a recursive rule: ${a}_{n + 1} = {a}_{n} + d$ where $d$ is the common difference. We also need to specify the starting point, the first term $a$: ${a}_{1} = a$ We can write the explicit rule for a general term as: ${a}_{n} = a + d \left(n - 1\right)$

**Geometric sequence**

A geometric sequence has a common ratio between terms. So to get from one term to the next you need to multiply by the common ratio. We can write a recursive rule: ${a}_{n + 1} = {a}_{n} \cdot r$ where $r$ is the common ratio. We also need to specify the starting point: ${a}_{1} = a$ We can write the explicit rule for a general term as: ${a}_{n} = a {r}^{n - 1}$

**Linear recursion**

A sequence may also be defined using a linear recursive rule, where each successive term is based on the two (or more) previous terms. The classic example of such a sequence is the Fibonacci sequence, definable recursively by: ${F}_{0} = 0$ ${F}_{1} = 1$ ${F}_{n + 2} = {F}_{n + 1} + {F}_{n}$ It starts: $0 , 1 , 1 , 2 , 3 , 5 , 8 , 13 , 21 , 34 , 55 , 89 , 144 , \ldots$ How do you get an explicit rule for a linear recursion like this? Consider a geometric sequence: $1 , x , {x}^{2} , {x}^{3} , \ldots$ If it satisfies the linear recursive rule for the first three terms $1 , x , {x}^{2}$, then it will continue to satisfy it.
So given a rule: ${a}_{n + 2} = p {a}_{n + 1} + q {a}_{n}$ we can associate the quadratic equation: ${x}^{2} = p x + q$ Calling the two roots of this equation $\alpha$ and $\beta$, note that any sequence given by: ${a}_{n} = A {\alpha}^{n} + B {\beta}^{n}$ will satisfy the recursive rule. We then just have to choose $A$ and $B$ so that the first two terms match the initial two terms of the sequence. In the case of the Fibonacci sequence, we find: ${F}_{n} = \frac{1}{\sqrt{5}} \left({\varphi}^{n} - {\left(- \varphi\right)}^{- n}\right)$ where $\varphi = \frac{1}{2} + \frac{1}{2} \sqrt{5}$
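The three rules above can be checked numerically; here is a short sketch (function names are ours, not from the answer):

```python
# Hedged sketch: explicit terms for arithmetic/geometric sequences, and
# Binet's closed form F_n = (phi^n - (-phi)^(-n)) / sqrt(5) for Fibonacci.
from math import sqrt

def arithmetic(a, d, n):      # a_n = a + d*(n - 1), for n >= 1
    return a + d * (n - 1)

def geometric(a, r, n):       # a_n = a * r^(n - 1), for n >= 1
    return a * r ** (n - 1)

def fib_binet(n):
    phi = (1 + sqrt(5)) / 2
    # round() removes the tiny floating-point error in the closed form
    return round((phi ** n - (-phi) ** (-n)) / sqrt(5))

# fib_binet(n) for n = 0, 1, 2, ... reproduces 0, 1, 1, 2, 3, 5, 8, ...
```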
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202913045883179, "perplexity": 356.19009163723655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572491.38/warc/CC-MAIN-20190916060046-20190916082046-00087.warc.gz"}
http://mathhelpforum.com/statistics/39786-confidence-interval.html
# Math Help - Confidence interval 1. ## Confidence interval a sample of 10 observations is taken from a normal population for which the population standard deviation is known to be 5. the sample mean is 20. find the 95% confidence interval for the population mean 2. Originally Posted by gogothesaint a sample of 10 observations is taken from a normal population for which the population standard deviation is known to be 5. the sample mean is 20. find the 95% confidence interval for the population mean You will find several worked examples on MHF. Where are you stuck?
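As a worked sketch of the kind of example mentioned in the reply (the helper name is ours): with σ known, the interval is $\bar{x} \pm z \cdot \sigma/\sqrt{n}$, where $z \approx 1.96$ for 95% confidence.

```python
# Hedged sketch: z-interval for a normal mean with known sigma.
from math import sqrt
from statistics import NormalDist

def z_interval(xbar, sigma, n, level=0.95):
    z = NormalDist().inv_cdf((1 + level) / 2)   # ~1.96 for 95%
    half = z * sigma / sqrt(n)                  # the margin of error
    return xbar - half, xbar + half

lo, hi = z_interval(20, 5, 10)
# roughly (16.90, 23.10)
```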
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940457344055176, "perplexity": 301.0755653792644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398448389.58/warc/CC-MAIN-20151124205408-00131-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/236680/multiple-choice-questions-on-relations-and-some-of-their-properties/236703
# Multiple choice questions on relations and some of their properties I'm confused about these 3 selected problems. I have the solutions for each, if necessary, but I'm much more interested in understanding the material. If anyone can offer a clear, concise, and intuitive explanation for each- or even just one- I'd be entirely grateful. Let $R$ be the relation on the set of people consisting of $(a,b)$ where $a$ is the parent of $b$. Let $S$ be the relation on the set of people consisting of $(a,b)$ where $a$ and $b$ are siblings. What are $S\circ R$ and $R\circ S$? A) $(a,b)$ where $a$ is a parent of $b$ and $b$ has a sibling; $(a,b)$ where $a$ is the aunt or uncle of $b$. B) $(a,b)$ where $a$ is the parent of $b$ and $a$ has a sibling; $(a,b)$ where $a$ is the aunt or uncle of $b$. C) $(a,b)$ where $a$ is the sibling of $b$'s parents; $(a,b)$ where $a$ is $b$'s niece or nephew. D) $(a,b)$ where $a$ is the parent of $b$; $(a,b)$ where $a$ is the aunt or uncle of $b$. On the set of all integers, let $(x,y) \in R$ iff $xy \geq 1$. Is relation R reflexive, symmetric, antisymmetric, transitive? A) Yes, No, No, Yes B) No, Yes, No, Yes C) No, No, No, Yes D) No, Yes, Yes, Yes E) No, No, Yes, No At what smallest power of R do we find the connectivity relation R* in this diagram? A) $R^* = R$ B) $R^* = \bigcup_{k=1}^2 R^k$ C) $R^* = \bigcup_{k=1}^3 R^k$ D) $R^* = \bigcup_{k=1}^4 R^k$ E) $R^* = \bigcup_{k=1}^5 R^k$ EDIT: I understand the first two pretty well. That is, I know what they're asking for, but my path to the solution doesn't seem to be correct. For the first one, what do I need to know about the way relational compositions work that will make this problem more manageable? For the second, I need help with the symmetric and antisymmetric part (how does it work here)? And the third, which I'm having the most trouble with- what exactly is $R^*$, and how does it relate to the notation in each of the answers ($\bigcup$)? 
- $(a,b)\in S\circ R$ if there is some $c$ such that $(a,c)\in R$ and $(c,b)\in S$. That is, $a$ is a parent of $c$ and $c$ is a sibling of $b$--equivalently, $a$ is a parent of $b$ and $b$ has a sibling. $(a,b)\in R\circ S$ if there is some $c$ such that $(a,c)\in S$ and $(c,b)\in R$. That is, $a$ is a sibling of $c$ and $c$ is a parent of $b$--equivalently, $a$ is an aunt or uncle of $b$. To determine if $R$ is reflexive, we need to determine if $x^2\geq 1$ for all integers $x$; symmetric, if $yx\geq 1$ whenever $xy\geq 1$; antisymmetric, if $xy\geq 1$ and $yx\geq 1$ together imply $x=y$; transitive, if $xy\geq 1$ and $yz\geq 1$ together imply $xz\geq 1$. The first three are almost trivial to prove or find counterexamples for. For the last, note that $xy\geq 1$ if and only if $x,y$ are non-zero and have the same sign. Does it follow then that $xy\geq 1$ and $yz\geq 1$ imply $xz\geq 1$? For the last, we need to look at each pair of nodes $(x,y)$--where $x$ and $y$ may be the same--and determine the fewest arrows we need to travel to get from $x$ to $y$. Call this number $\ell(x,y)$. The maximum of $\ell(x,y)$ over all $25$ pairs $(x,y)$ will give you the least $n$ that you'll need to get $R^*=\bigcup_{k=1}^nR^k$. But why is this? Let me dig into the definitions of $R^k$ and $R^*$ to make this clearer. In general, $R^k$ is alternative notation for repeated composition--e.g.: $R^1=R$, $R^2=R\circ R$, $R^3=R\circ R\circ R$, etc. In this circumstance, $R^k$ is the set of all vertex pairs $(x,y)$ such that we can travel from $x$ to $y$ along a path composed of exactly $k$ of the arrows in the given diagram. (Do you see why?) For example, $$R^1=\bigl\{(A,D),(B,A),(B,C),(C,B),(C,E),(D,C),(D,E),(E,B)\bigr\}$$ $$R^2=\bigl\{(A,C),(A,E),(B,B),(B,D),(B,E),(C,A),(C,B),(C,C),(D,B),(D,E),(E,A),(E,C)\bigr\}$$ In general, $R^*$ is the least transitive relation containing $R$ (its transitive closure).
In this circumstance, any transitive relation containing $R$ will contain all pairs of nodes $(x,y)$ such that there is some path from $x$ to $y$ in the given diagram. Observe that we can start at any node and follow a circuit to get back to the starting node, and we can start at any node and travel to any other node. The least transitive relation containing $R$, then, will be the set of all vertex pairs $(x,y)$ (including those with $x=y$). Then $R^*=\bigcup_{k=1}^\infty R^k$, but we don't need arbitrarily long paths to get the job done. In fact, none of the paths we need are any longer than $4$, so we can rule out E, but the only simple circuit starting at and returning to A has length $4$, so the answer is D. - Thanks so much for the response. Do you mind just elaborating on the third one some more? I'm having trouble understanding $R^*$'s use and it's relation with the $\bigcup$ notation. – Bob John Nov 13 '12 at 21:09 Thanks, really appreciate it. – Bob John Nov 13 '12 at 21:32 I've elaborated on the third one. Does that help? – Cameron Buie Nov 13 '12 at 22:04 So for $R^*$ to be "fulfilled" we need to find some $n$ for $\bigcup_{k=1}^n R^k$ such that if we start at $\bf{any}$ vertex we're able to return to that same vertex by travelling through exactly (at least?) k arrows, but never more? – user1038665 Nov 14 '12 at 12:13 @user1038665: Very close. We need to be able to start at any vertex and get to any vertex (possibly the same one) by travelling through at least $1$ arrow and at most $n$ arrows--that is, exactly $k$ arrows for some $1\leq k\leq n$, which is where the union from $k=1$ to $k=n$ comes in. – Cameron Buie Nov 14 '12 at 12:31
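The path-length argument above can be checked mechanically. A sketch (names are ours, not from the answer) that composes the arrow set $R^1$ listed above with itself and finds the least $n$ with $R^*=\bigcup_{k=1}^n R^k$:

```python
# Hedged sketch: exact-length path relations R^k via relational composition,
# and the least n whose union gives the connectivity relation R*.
R = {('A', 'D'), ('B', 'A'), ('B', 'C'), ('C', 'B'), ('C', 'E'),
     ('D', 'C'), ('D', 'E'), ('E', 'B')}

def compose(S, T):
    # (a, c) is in compose(S, T) iff a S-steps to some b, then b T-steps to c
    return {(a, c) for (a, b1) in S for (b2, c) in T if b1 == b2}

def connectivity(R):
    nodes = {x for pair in R for x in pair}
    power, union, n = R, set(R), 1
    # On |nodes| vertices, paths of length <= |nodes| suffice for reachability
    for k in range(2, len(nodes) + 1):
        power = compose(power, R)        # power is now R^k
        if not power <= union:           # R^k contributed new pairs
            union |= power
            n = k
    return union, n

star, n = connectivity(R)
# All 25 pairs are reachable, and length-4 paths (e.g. A->D->C->B->A)
# are required, so n == 4: answer D.
```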
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611265659332275, "perplexity": 167.9234574428791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825365.1/warc/CC-MAIN-20160723071025-00185-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/64582/when-is-a-complete-fan-a-normal-fan
# When is a complete fan a normal fan? Is there a characterization for when a complete fan in $\mathbf{R}^n$ is the normal fan of a polytope? Thanks! - Yes: a complete fan is the normal fan of a polytope if and only if it admits a strictly convex support function, i.e. a convex function that is linear on each cone of the fan and given by distinct linear functions on distinct maximal cones. Indeed, if you have a convex polytope $P$ with vertices $v_i$ in $\mathbb R^n$, this defines a collection of linear functions $v^*_i$ on $\mathbb R^{n*}$, and the function $\max_i (v^*_i)$ will be a convex function of this kind on the dual fan.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8289369344711304, "perplexity": 138.88324783440623}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246653426.7/warc/CC-MAIN-20150417045733-00121-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/law-of-the-lever-conservation-of-energy-or-angular-momentum.833317/
# Law of the lever: Conservation of energy or angular momentum 1. Sep 19, 2015 ### greypilgrim Hi, Some "derivations" of the law of the lever appeal to conservation of energy: If one arm of the lever of length $r_1$ is pulled by a distance $s_1$ with force $F_1$, the other arm moves by a distance $s_2=s_1 \frac{r_2}{r_1}$. From conservation of energy $E=F_1 s_1=F_2 s_2$ it follows $$F_2=F_1 \frac{s_1}{s_2}=F_1 \frac{r_1}{r_2}\enspace.$$ However, the law of the lever also holds in static situations where $s_1=s_2=0$ and no work is being done, and the above derivation breaks down. A derivation that covers both moving and static situations uses the fact that all torques must vectorially add up to zero, which follows from conservation of angular momentum. So I wonder if the derivation using conservation of energy only works coincidentally, because energy and torque share the same unit. From a Noetherian perspective, the derivations are very different, the first following from homogeneity in time, the other from isotropy in space. As a more general question, is it mere coincidence that energy and torque have the same unit or is there more to it? 2. Sep 19, 2015 ### Staff: Mentor You can consider virtual displacements if you like. The limit for $s_2 \to 0$ is well-defined and gives the same result. The attempt to divide by zero is a purely mathematical problem. 3. Sep 19, 2015 ### A.T. You can derive the static lever law without invoking the concept of torque, using only linear forces on a truss structure. There were several threads on this here. 4. Sep 20, 2015 ### greypilgrim That's interesting, so the law of the lever can actually be derived either from conservation of energy, conservation of linear momentum OR conservation of angular momentum independently, hence by Noether's theorem either from homogeneity in time, in space or isotropy in space?
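As a quick numeric companion to the virtual-displacement remark (not from the thread; the numbers are made up): for a small rotation $d\theta$, equating the work $F_1 s_1 = F_2 s_2$ with $s_i = r_i\, d\theta$ reproduces the torque-balance result $F_2 = F_1 r_1/r_2$.

```python
# Hedged sketch: energy argument vs. torque balance for a rigid lever.
r1, r2, F1 = 2.0, 0.5, 10.0
dtheta = 1e-6                       # virtual (infinitesimal) rotation
s1, s2 = r1 * dtheta, r2 * dtheta   # arc lengths swept by each arm
F2_energy = F1 * s1 / s2            # from F1*s1 = F2*s2
F2_torque = F1 * r1 / r2            # from torque balance F1*r1 = F2*r2
# both give 40.0, independent of dtheta
```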
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937320351600647, "perplexity": 513.8445084164375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00680.warc.gz"}
https://en.academic.ru/dic.nsf/enwiki/585528
# Plücker coordinates

In geometry, Plücker coordinates, introduced by Julius Plücker in the 19th century, are a way to assign six homogeneous coordinates to each line in projective 3-space, "P"3. Because they satisfy a quadratic constraint, they establish a one-to-one correspondence between the 4-dimensional space of lines in "P"3 and points on a quadric in "P"5 (projective 5-space). A predecessor and special case of Grassmann coordinates (which describe "k"-dimensional linear subspaces, or "flats", in an "n"-dimensional Euclidean space), Plücker coordinates arise naturally in geometric algebra. They have proved useful for computer graphics, and can also be extended to coordinates for the screws and wrenches in the theory of kinematics used for robot control.

## Geometric intuition

A line "L" in 3-dimensional Euclidean space is determined by two distinct points that it contains, or by two distinct planes that contain it. Consider the first case, with points "x" = ("x"1,"x"2,"x"3) and "y" = ("y"1,"y"2,"y"3). The vector displacement from "x" to "y" is nonzero because the points are distinct, and represents the "direction" of the line. That is, every displacement between points on "L" is a scalar multiple of "d" = "y" − "x". If a physical particle of unit mass were to move from "x" to "y", it would have a moment about the origin. The geometric equivalent is a vector whose direction is perpendicular to the plane containing "L" and the origin, and whose length equals the area of the triangle formed by the displacement and the origin. Treating the points as displacements from the origin, the moment is "m" = "x" × "y", where "×" denotes the vector cross product. The area of the triangle is proportional to the length of the segment between "x" and "y", considered as the base of the triangle; it is not changed by sliding the base along the line, parallel to itself.
By definition the moment vector is perpendicular to every displacement along the line, so "d"•"m" = 0, where "•" denotes the vector dot product. Although neither "d" nor "m" alone is sufficient to determine "L", together the pair does so uniquely, up to a common (nonzero) scalar multiple which depends on the distance between "x" and "y". That is, the coordinates

: ("d":"m") = ("d"1:"d"2:"d"3:"m"1:"m"2:"m"3)

may be considered homogeneous coordinates for "L", in the sense that all pairs (λ"d":λ"m"), for λ ≠ 0, can be produced by points on "L" and only "L", and any such pair determines a unique line so long as "d" is not zero and "d"•"m" = 0. Furthermore, this approach extends to include points, lines, and a plane "at infinity", in the sense of projective geometry.

: Example. Let "x" = (2,3,7) and "y" = (2,1,0). Then ("d":"m") = (0:−2:−7:−7:14:−4).

Alternatively, let the equations for points "x" of two distinct planes containing "L" be

: 0 = "a"0 + "a"•"x"
: 0 = "b"0 + "b"•"x" .

Then their respective planes are perpendicular to vectors "a" and "b", and the direction of "L" must be perpendicular to both. Hence we may set "d" = "a" × "b", which is nonzero because "a" and "b" are neither zero nor parallel (the planes being distinct and intersecting). If a point "x" satisfies both plane equations, then it also satisfies the linear combination

: 0 = "a"0("b"0 + "b"•"x") − "b"0("a"0 + "a"•"x") = ("a"0"b" − "b"0"a")•"x" ,

which is a plane through the origin containing "L"; hence we may take "m" = "a"0"b" − "b"0"a".

In general, given the homogeneous coordinates ("x"0:"x"1:"x"2:"x"3) and ("y"0:"y"1:"y"2:"y"3) of two distinct points on "L", let "M" be the 2×4 matrix having these as rows. The primary Plücker coordinates of "L" are the 2×2 minors of "M",

$$p_{ij} = x_i y_j - x_j y_i ,$$

and the dual coordinates $p^{ij}$ are the corresponding minors formed from the coefficients of two distinct planes containing "L". Dual coordinates are convenient in some computations, and we can show that they are equivalent to primary coordinates. Specifically, let ("i","j","k","l") be an even permutation of (0,1,2,3); then

$$p_{ij} = p^{kl} .$$

## Geometry

To relate back to the geometric intuition, take "x"0 = 0 as the plane at infinity; thus the coordinates of points "not" at infinity can be normalized so that "x"0 = 1. Then "M" becomes

$$M = \begin{pmatrix} 1 & x_1 & x_2 & x_3 \\ 1 & y_1 & y_2 & y_3 \end{pmatrix} ,$$

and setting "x" = ("x"1,"x"2,"x"3) and "y" = ("y"1,"y"2,"y"3), we have "d" = ("p"01,"p"02,"p"03) and "m" = ("p"23,"p"31,"p"12). Dually, in terms of the dual coordinates, "d" = ("p"^23,"p"^31,"p"^12) and "m" = ("p"^01,"p"^02,"p"^03).
## Bijection between lines and the Klein quadric

### Plane equations

If the point "z" = ("z"0:"z"1:"z"2:"z"3) lies on "L", then the columns of

$$\begin{pmatrix} x_0 & y_0 & z_0 \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{pmatrix}$$

are linearly dependent, so that the rank of this larger matrix is still 2. This implies that all 3×3 submatrices have determinant zero, generating four (4 choose 3) plane equations, such as

$$0 = p_{23} z_1 + p_{31} z_2 + p_{12} z_3 .$$

Substituting "x" or "y" for "z" yields a 3×3 determinant with a duplicate column, so the right-hand side is identically zero; these planes do contain "L".

### Point equations

Letting ("x"0:"x"1:"x"2:"x"3) be the point coordinates, four possible points on a line each have coordinates "x""i" = "p""ij", for "j" = 0…3. Some of these possible points may be inadmissible because all coordinates are zero, but since at least one Plücker coordinate is nonzero, at least two distinct points are guaranteed.

### Bijectivity

If ("q"01:"q"02:"q"03:"q"23:"q"31:"q"12) are the homogeneous coordinates of a point in "P"5, without loss of generality assume that "q"01 is nonzero. Then the matrix

$$M = \begin{pmatrix} 0 & -q_{01} & -q_{02} & -q_{03} \\ q_{01} & 0 & -q_{12} & q_{31} \end{pmatrix}$$

has rank 2, and so its rows are distinct points defining a line "L". When the "P"5 coordinates, "q""ij", satisfy the quadratic Plücker relation, they are the Plücker coordinates of "L". To see this, first normalize "q"01 to 1. Then we immediately have that for the Plücker coordinates computed from "M", "p""ij" = "q""ij", except for

$$p_{23} = - q_{03} q_{12} - q_{02} q_{31} .$$

But if the "q""ij" satisfy the Plücker relation "q"01"q"23 + "q"02"q"31 + "q"03"q"12 = 0, then (with "q"01 = 1) we get "p"23 = "q"23, completing the set of identities. Consequently, the map α sending a line to its Plücker coordinates is a surjection onto the algebraic variety consisting of the set of zeros of the quadratic polynomial

$$p_{01}p_{23} + p_{02}p_{31} + p_{03}p_{12} .$$

And since α is also an injection, the lines in "P"3 are thus in bijective correspondence with the points of this quadric in "P"5, called the Plücker quadric or Klein quadric.
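As a numerical sanity check (not part of the original article; the function names are ours), the coordinates ("d":"m") and the standard reciprocal-product test for coplanarity can be sketched in a few lines:

```python
# Hedged sketch: Plücker coordinates (d : m) of the line through points
# x and y, and the reciprocal product d.m' + m.d' (zero iff coplanar).
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def plucker(x, y):
    d = tuple(b - a for a, b in zip(x, y))   # direction d = y - x
    m = cross(x, y)                           # moment m = x × y
    return d, m

def coplanar(line1, line2):
    (d1, m1), (d2, m2) = line1, line2
    return dot(d1, m2) + dot(m1, d2) == 0

# Example from the text: x = (2,3,7), y = (2,1,0)
d, m = plucker((2, 3, 7), (2, 1, 0))
# d = (0, -2, -7), m = (-7, 14, -4), and dot(d, m) == 0
```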
## Uses

Plücker coordinates allow concise solutions to problems of line geometry in 3-dimensional space, especially those involving incidence.

### Line-line crossing

Two lines in "P"3 are either skew or coplanar, and in the latter case they are either coincident or intersect in a unique point. If "p""ij" and "p"′"ij" are the Plücker coordinates of two lines, then they are coplanar precisely when the reciprocal product "d"•"m"′ + "m"•"d"′ vanishes. When the lines are skew, the sign of the result indicates the sense of crossing: positive if a right-handed screw takes "L" into "L"′, else negative. The quadratic Plücker relation essentially states that a line is coplanar with itself.

### Line-line join

In the event that two lines are coplanar but not parallel, their common plane has equation

: 0 = ("d"•"m"′)"x"0 + ("d" × "d"′)•"x" ,

where "x" = ("x"1,"x"2,"x"3). The slightest perturbation will destroy the existence of a common plane, and near-parallelism of the lines will cause numeric difficulties in finding such a plane even if it does exist.

### Line-line meet

Dually, two coplanar lines, neither of which contains the origin, have common point

: ("x"0 : "x") = ("m"•"d"′ : "m" × "m"′) .

To handle lines not meeting this restriction, see the references.

### Plane-line meet

Given a plane with equation

$$0 = a^0 x_0 + a^1 x_1 + a^2 x_2 + a^3 x_3 ,$$

or more concisely 0 = "a"0"x"0 + "a"•"x", and given a line not in it with Plücker coordinates ("d":"m"), their point of intersection is

: ("x"0 : "x") = ("a"•"d" : "a" × "m" − "a"0"d") .

The point coordinates, ("x"0:"x"1:"x"2:"x"3), can also be expressed in terms of Plücker coordinates as

$$x_i = \sum_{j \neq i} a^j p_{ij} , \qquad i = 0 \ldots 3 .$$

### Point-line join

Dually, given a point ("y"0:"y") and a line not containing it, their common plane has equation

: 0 = ("y"•"m") "x"0 + ("y" × "d" − "y"0"m")•"x" .
The plane coordinates, ("a"0:"a"1:"a"2:"a"3), can also be expressed in terms of dual Plücker coordinates as

$$a^i = \sum_{j \neq i} y_j p^{ij} , \qquad i = 0 \ldots 3 .$$

## Line families

Because the Klein quadric is in "P"5, it contains linear subspaces of dimensions one and two (but no higher). These correspond to one- and two-parameter families of lines in "P"3. For example, suppose "L" and "L"′ are distinct lines in "P"3 determined by points "x", "y" and "x"′, "y"′, respectively. Linear combinations of their determining points give linear combinations of their Plücker coordinates, generating a one-parameter family of lines containing "L" and "L"′. This corresponds to a one-dimensional linear subspace belonging to the Klein quadric.

### Lines in a plane

If three distinct and non-parallel lines are coplanar, their linear combinations generate a two-parameter family of lines, all the lines in the plane. This corresponds to a two-dimensional linear subspace belonging to the Klein quadric.

### Lines through a point

If three distinct and non-coplanar lines intersect in a point, their linear combinations generate a two-parameter family of lines, all the lines through the point. This also corresponds to a two-dimensional linear subspace belonging to the Klein quadric.

### Ruled surface

A ruled surface is a family of lines that is not necessarily linear. It corresponds to a curve on the Klein quadric. For example, a hyperboloid of one sheet is a quadric surface in "P"3 ruled by two different families of lines, one line of each passing through each point of the surface; each family corresponds under the Plücker map to a conic section within the Klein quadric in "P"5.

### Line geometry

During the nineteenth century, "line geometry" was studied intensively. In terms of the bijection given above, this is a description of the intrinsic geometry of the Klein quadric.

## References

* Hodge, W. V. D.; Pedoe, D. (1994) [1947]. Methods of Algebraic Geometry, Volume I (Book II). Cambridge University Press. ISBN 978-0-521-46900-5.
* Behnke, H.; Bachmann, F.; Fladt, K.; Kunle, H. (eds.) (1984). Fundamentals of Mathematics, Volume II: Geometry. Trans. S. H. Gould. MIT Press. ISBN 978-0-262-52094-2. From the German: Grundzüge der Mathematik, Band II: Geometrie. Vandenhoeck & Ruprecht.
* Stolfi, Jorge (1991). Oriented Projective Geometry. ISBN 978-0-12-672025-9. From the original Stanford Ph.D. dissertation, Primitives for Computational Geometry, available as DEC SRC Research Report 36.
* Shoemake, Ken (1998). "Plücker Coordinate Tutorial". Ray Tracing News 11 (1). http://www.acm.org/tog/resources/RTNews/html/rtnv11n1.html#art3
* Mason, Matthew T.; Salisbury, J. Kenneth (1985). Robot Hands and the Mechanics of Manipulation. MIT Press. ISBN 978-0-262-13205-3.
* Hohmeyer, M.; Teller, S. (1999). "Determining the Lines Through Four Lines". Journal of Graphics Tools 4 (3): 11–22. A K Peters. ISSN 1086-7651.
https://methods.sagepub.com/Reference/the-sage-encyclopedia-of-communication-research-methods/i8090.xml
### Margin of Error

The term margin of error is most commonly used in the scientific literature to describe how close a sample statistic, $\hat{\theta}$, is to an unknown population parameter, θ. Assuming the sampling distribution of $\hat{\theta}$ is approximately symmetric, a confidence interval for θ will be $\hat{\theta} \pm m$, where m is the margin of error. The margin of error in confidence intervals such as these is made up of two components: the confidence level of the interval and the standard deviation or standard error of the statistic estimating the unknown parameter θ. For example, suppose one is estimating the level of support for a public proposition, such as the legalization of marijuana. If sample ...
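As a concrete illustration of the two components, here is a small Python sketch (my own example, not from the encyclopedia entry) computing the margin of error for an estimated proportion under the usual normal approximation:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error for a sample proportion p_hat from n respondents.

    z is the critical value for the chosen confidence level
    (1.96 corresponds to roughly 95% under a normal approximation).
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    return z * se

# A hypothetical poll of 1000 people with 52% support:
p_hat, n = 0.52, 1000
m = margin_of_error(p_hat, n)
print(f"{p_hat:.2f} +/- {m:.3f}")
```

Note that the margin of error is largest when the estimated proportion is near 0.5, which is why polls often quote a single worst-case figure.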
http://mathoverflow.net/questions/86372/algorithm-for-image-of-a-free-group-homomorphism
# Algorithm for image of a free group homomorphism

Let $G$ and $H$ be finitely generated free groups, and let $f:G\to H$ be a homomorphism specified by giving the images of the generators of $G$. Is there an algorithm which takes such an $f$ and a word $w\in H$ and tells whether $w \in f(G)$? Is there such an algorithm in the special case where $G=H$? Thanks!

• See gap-system.org/Manuals/pkg/fga/doc/manual.pdf section 2.3 for a GAP implementation. – Guntram Jan 22 '12 at 17:14
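For context: the standard positive answer goes through Stallings foldings, since the image $f(G)$ is a finitely generated subgroup of $H$ (the GAP package linked in the comment implements this). The basic subroutine any such algorithm needs is free reduction of words. A minimal sketch, with a word encoded as a list of nonzero integers, $k$ for the $k$-th generator and $-k$ for its inverse (my own convention, not the GAP package's):

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs x x^{-1} until none remain.

    A word in a free group is given as a list of nonzero ints;
    k and -k stand for a generator and its inverse.  A single
    left-to-right pass with a stack suffices, because free
    reduction is confluent.
    """
    out = []
    for letter in word:
        if out and out[-1] == -letter:
            out.pop()          # cancel the adjacent inverse pair
        else:
            out.append(letter)
    return out

# x1 x2 x2^{-1} x1^{-1} reduces to the empty word:
print(free_reduce([1, 2, -2, -1]))    # []
# x1 x1 x2^{-1} x2 x1^{-1} reduces to x1:
print(free_reduce([1, 1, -2, 2, -1]))  # [1]
```

The full membership test then builds the Stallings graph of the subgroup generated by the images $f(g_i)$ and traces the reduced word $w$ through it; that part is omitted here.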
https://www.physicsforums.com/threads/inelastic-collision-with-spring.284306/
# Inelastic Collision with Spring

1. Jan 11, 2009

### edud8

1. The problem statement, all variables and given/known data

A 0.2 kg mass travels on a frictionless horizontal surface at a speed of 3 m/s. It hits a 1.3 kg mass at rest that is connected to a massless spring with a spring constant of 100 newtons per meter. The other end of the spring is fixed. Calculate the linear momentum and kinetic energy of the combined masses immediately after the impact.

2. Relevant equations

I can calculate the combined momentum of the masses, but I don't know what to do with the spring.

3. The attempt at a solution

The inelastic collision formula is (m1)(v1) + (m2)(v2) = (m1 + m2)(vf), where vf is the new velocity, and then I would just plug vf into KE = 1/2(m)(v)^2 as v and get the kinetic energy. My problem is I don't know what to do with the spring.

2. Jan 11, 2009

### rock.freak667

So you know that (m1)(v1) + (m2)(v2) = (m1 + m2)(vf) from conservation of momentum. You know m1, m2, v1 and v2, right? The question basically wants you to find the value of (m1 + m2)(vf).

3. Jan 11, 2009

### edud8

I get that, but don't I have to do something with the spring?

4. Jan 11, 2009

### Hunterbender

Not really. The spring simply transfers the energy. Think of it as a mediator. KE of car 1 colliding with car 2 ==> KE becomes PE in the spring ==> spring potential energy pushes on car 2 and becomes kinetic energy again (assuming all energy is conserved and there is no heat loss anywhere). So the spring is just there. Now, the fun question is: calculate the kinetic energy while the spring is being compressed. Then you do need to account for it, since the spring stores potential rather than kinetic energy.

5.
Jan 11, 2009

### edud8

So do I just set the kinetic energy of the masses equal to the potential energy of the spring, like so: (1/2)mv^2 = (1/2)(k)(x)^2 ==> 0.5(1.5)(0.4)^2 = 0.5(100)(x)^2 ==> 0.12 J = 50x^2, so x^2 = 0.0024 ==> x = sqrt(0.0024) ≈ 0.049 m. Then I plugged it into F = kx and got about 4.9 N, which is the force of the spring. Now I'm stuck; a little help on how this helps me solve for the linear momentum or kinetic energy of the masses after impact?
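For reference, here is the bookkeeping the thread is circling around, computed directly from the numbers in the problem statement (a sketch of the standard momentum-conservation and energy steps):

```python
import math

m1, v1 = 0.2, 3.0     # incoming mass (kg) and speed (m/s)
m2, v2 = 1.3, 0.0     # target mass, initially at rest
k = 100.0             # spring constant (N/m)

# Perfectly inelastic collision: momentum is conserved, KE is not.
p = m1 * v1 + m2 * v2            # total linear momentum
vf = p / (m1 + m2)               # common velocity just after impact
ke = 0.5 * (m1 + m2) * vf**2     # KE just after impact

# Maximum spring compression: all post-impact KE becomes spring PE.
x = math.sqrt(2 * ke / k)

print(f"p  = {p:.2f} kg m/s")
print(f"vf = {vf:.2f} m/s")
print(f"KE = {ke:.2f} J")
print(f"x  = {x:.3f} m")
```

The answer to the question as stated is just p and KE immediately after impact; the compression x only matters for the follow-up about the spring.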
http://mathhelpforum.com/algebra/102945-remainder-print.html
# remainder

• Sep 18th 2009, 06:00 AM
mark

hi, the question I've got is: find the remainder when the polynomial P(x) is divided by the linear expression f(x): $P(x) = x^3 + 2$, $f(x) = x + 1$. I came up with remainder 2, but the book says it's 1. Can someone explain this to me? Thank you.

• Sep 18th 2009, 06:10 AM
Plato

Quote: Originally Posted by mark
find the remainder when the polynomial P(x) is divided by the linear expression f(x): $P(x) = x^3 + 2$ $f(x) = x + 1$ i came up with 2 but the book says its 1. can someone explain this to me?

By the remainder theorem it is just $P(-1)$.

• Sep 18th 2009, 06:20 AM
stapel

Quote: Originally Posted by mark
i came up with remainder 2 but the book says its 1. can someone explain this to me?

Until we can see what you've done, I'm afraid there is no way to explain what error you might have made. Sorry! (Blush)

• Sep 18th 2009, 06:32 AM
mark

ah, I've actually realised it's got to be 1; I wasn't using the proper steps when figuring it out. Sorry about that, people.
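The remainder theorem Plato invokes says the remainder of $P(x)$ on division by $x - a$ is $P(a)$; here $f(x) = x + 1 = x - (-1)$, so the remainder is $P(-1) = (-1)^3 + 2 = 1$. A quick sketch checking this against explicit synthetic division:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given high-to-low coefficients (Horner's rule)."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def poly_divmod_linear(coeffs, a):
    """Divide by (x - a) via synthetic division; return (quotient, remainder)."""
    partial = []
    acc = 0
    for c in coeffs:
        acc = acc * a + c
        partial.append(acc)
    return partial[:-1], partial[-1]

P = [1, 0, 0, 2]                  # x^3 + 2
q, r = poly_divmod_linear(P, -1)  # divide by x + 1, i.e. a = -1
print(q, r)                       # [1, -1, 1] 1, i.e. x^2 - x + 1 remainder 1
print(poly_eval(P, -1))           # 1, matching the remainder theorem
```

The quotient checks out by hand: $(x+1)(x^2 - x + 1) = x^3 + 1$, and adding the remainder 1 gives $x^3 + 2$.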
https://resonaances.blogspot.com/2015/11/a-year-at-13-tev.html
## Thursday, 12 November 2015

### A year at 13 TeV

A week ago the LHC finished the 2015 run of 13 TeV proton collisions. The counter in ATLAS stopped exactly at 4 inverse femtobarns. CMS reports just 10% less; however, it is not clear what fraction of these data was collected with their magnet on (probably about half). Anyway, it should have been better, it could have been worse... 4 fb-1 is one fifth of what ATLAS and CMS collected in the glorious year 2012. On the other hand, the higher collision energy in 2015 translates to larger production cross sections, even for particles within the kinematic reach of the 8 TeV collisions. How this trade-off works in practice depends on the process studied. A few examples are shown in the plot below. We see that, for processes initiated by collisions of a quark inside one proton with an antiquark inside the other proton, the cross section gain is the least favorable. Still, for hypothetical resonances heavier than ~1.7 TeV, more signal events were produced in the 2015 run than in the previous one. For example, for a 2 TeV W-prime resonance, possibly observed by ATLAS in the 8 TeV data, the net gain is 50%, corresponding to roughly 15 events predicted in the 13 TeV data. However, the plot does not tell the whole story, because the backgrounds have increased as well. Moreover, when the main background originates from gluon-gluon collisions (as is the case for the W-prime search in the hadronic channel), it grows faster than the signal. Thus, if the 2 TeV W' is really there, the significance of the signal in the 13 TeV data should be comparable to that in the 8 TeV data in spite of the larger event rate. That will not be enough to fully clarify the situation, but the new data may make the story much more exciting if the excess reappears; or much less exciting if it does not...
When backgrounds are not an issue (for example, for high-mass dilepton resonances), the improvement in this year's data should be more spectacular. We also see that, for new physics processes initiated by collisions of a gluon in one proton with a gluon in the other, the 13 TeV run is superior everywhere above the TeV scale, and the signal enhancement is more spectacular. For example, at 2 TeV one gains a factor of 3 in signal rate. Therefore, models where the ATLAS diboson excess is explained via a Higgs-like scalar resonance will be tested very soon. The reach will also be extended for other hypothetical particles pair-produced in gluon collisions, such as gluinos in the minimal supersymmetric model. The current lower limit on the gluino mass obtained in the 8 TeV run is m ≳ 1.4 TeV (for decoupled squarks and a massless neutralino). For this mass, the signal gain in the 2015 run is roughly a factor of 6. Hence we can expect the gluino mass limits to be pushed upwards soon, by about 200 GeV or so. Summarizing, we have a right to expect some interesting results during this winter break. The chances for a discovery in this year's data are non-zero, and the chances for tantalizing hints of new physics (whether a real thing or a background fluctuation) are considerable. Limits on certain imaginary particles will be somewhat improved. However, contrary to my hopes/fears, this year is not yet the decisive one for particle physics. The next one will be.

Anonymous said... So, what are the rumors about the new data?

Jester said... I haven't heard much yet. I guess the analyses with the full 4 fb-1 have not been done yet. I suppose the rumors will start to emerge in early November, before the CERN council meeting.

Anonymous said... Goes Googling for "Cross Section". Welcome back Jester. You were missed. Michel Beekveld

Anonymous said... you mean early December?

Jester said... yes, December, sorry

RBS said... Paris. We weep together.
Chris said...

chris said... nothing to comment on the physics, just want to say that I'm very happy to see that you haven't stopped blogging :-)

Anonymous said... @Chris: Oh come on. On average, every day several hundred people die in Paris. Do you plan to ask that question every day now? Don't get me wrong, it is horrible what happened there, but the risk of dying in a traffic accident or similar is orders of magnitude higher. Concerning physics: I didn't see public results using more than ~100/pb so far, but analyses with the full dataset are in progress.

Jester said... That's right, statistically it would require a 3 sigma fluctuation to get killed that night.

RBS said... "Several hundred people" die daily of (presumably unnatural) causes in just one of the dozens of major cities on this planet?! Just shows us how far we have yet to go, and not only in the field of HEP. Is it so, though? Here in a million-person city in Canada's east, a person's death is still not a routine, one-of-hundreds event. Hope it stays this way for a while (the longer the better, actually).

Anonymous said... I didn't say the hundreds die of unnatural causes. Non-age-related deaths within a month still exceed the deaths from terror significantly, even if you consider Paris in November only.
http://www.math.fsu.edu/~aluffi/archive/paper7.abs.html
Submodules of the deficiency modules and an extension of Dubreil's Theorem

H. Martin, J. Migliore

In its most basic form, Dubreil's Theorem states that for an ideal $I$ defining a codimension 2, arithmetically Cohen-Macaulay subscheme of projective $n$-space, the number of generators of $I$ is bounded above by the minimal degree of a minimal generator plus 1. By introducing a new ideal $J$ which is the complete intersection of $n-1$ general linear forms, we are able to extend Dubreil's Theorem to an ideal $I$ defining a locally Cohen-Macaulay subscheme $V$ of any codimension. Our new bound involves the lengths of the Koszul homologies of the cohomology modules of $V$, with respect to the ideal $J$, and depends on a careful identification of the module $(I \cap J)/IJ$ in terms of the maps in the free resolution of $J$. As a corollary to this identification, we also give a new proof of a theorem of Serre which gives a necessary and sufficient condition to have the equality $I \cap J = IJ$ in the case where $I$ and $J$ define disjoint schemes in projective space.
http://www.physicsforums.com/showpost.php?p=3687563&postcount=1
Hi guys, I have been trying to measure the drag characteristics of a wind-tunnel model using a pitot-static tube setup, from which the transducer outputs the stream-wise velocity profile based on the measured pressure differential. To evaluate the drag coefficient of the model, I have used the common method of fitting the velocity profile with a squared hyperbolic secant function. However, I realised that across all the different profiles measured, the shoulders of the profiles do not seem to fit nicely into the hyperbolic function. I would like to know if this is usually the case, or whether there is some underlying assumption in the use of hyperbolic secant functions that is not being considered here? For my case, the Reynolds number of the flow is around 500k; is this an issue? Thanks!
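For what it's worth, the usual wake-deficit model is $\Delta u(y) = A\,\mathrm{sech}^2(y/b)$. Below is a minimal sketch of such a fit on synthetic data, using a coarse grid search in place of a proper nonlinear least-squares routine; the parameter values and grids are made up for illustration, not taken from the post:

```python
import math

def sech2(x):
    """Squared hyperbolic secant, sech^2(x) = 1 / cosh^2(x)."""
    return 1.0 / math.cosh(x) ** 2

# Synthetic wake-deficit profile generated from known parameters.
A_true, b_true = 2.0, 0.5
ys = [i * 0.05 - 1.0 for i in range(41)]          # traverse positions
data = [A_true * sech2(y / b_true) for y in ys]

# Coarse grid search minimizing the sum of squared residuals.
best = None
for i in range(21):
    A = 1.5 + i * 0.05                            # amplitude candidates
    for j in range(9):
        b = 0.3 + j * 0.05                        # width candidates
        sse = sum((A * sech2(y / b) - d) ** 2 for y, d in zip(ys, data))
        if best is None or sse < best[0]:
            best = (sse, A, b)

sse, A_fit, b_fit = best
print(f"A = {A_fit:.2f}, b = {b_fit:.2f}, SSE = {sse:.2e}")
```

On real data the residuals at the shoulders would show up as a large SSE no matter how the two parameters are tuned, which is one way to quantify the misfit the poster describes.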
https://www.physicsforums.com/threads/centripetal-generated-by-rotation-of-the-earth.353548/
# Centripetal generated by rotation of the Earth

1. Nov 10, 2009

### dan667

1. The problem statement, all variables and given/known data

An object weighs 100 N at the South Pole. How much does it weigh at the equator? Given: Earth's equatorial rotation speed is 465 m/s; the diameter of the Earth is 1.274 x 10^7 m.

2. Relevant equations

Fc = m x v^2 / r

3. The attempt at a solution

Well, I have this general understanding that the object should weigh less since it's experiencing uniform circular motion, and that it "wants" to fly off tangentially (Newton's first law) but is constrained by the force of friction, right? It seems I have calculated the centripetal force to be 0.3 N, but what I don't understand is why I subtract it from 100 N. The confusion lies in where the normal force and centripetal force vectors are located. It would be greatly appreciated if anyone could give a lucid answer to this question. The answer, by the way, is 99.7 N.

2. Nov 10, 2009

### mgb_phys

The object weighs 100 N, so there is a force of 100 N due to gravity pulling it down. At the equator there is a 0.3 N force pushing it up, so if you put it on a pair of scales there would be 100 N down and 0.3 N up, so the scales would read 99.7 N.

3. Nov 10, 2009

### dan667

Yes, but how do you get 0.3 N pushing up? The centripetal force towards the center of the Earth is 0.3 N, so the normal force is also 0.3 N. If the object is at the South Pole, the only two forces would be Fg pulling it towards the center of the Earth and Fn pushing the object upwards. But now we introduce two more forces, and if you were to draw (or describe) how these force vectors are used on an FBD, what would it look like?

4. Nov 10, 2009

### mgb_phys

No - the centripetal force points inwards, towards the centre; it is the apparent centrifugal force, seen in the rotating frame, that points outwards.
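Plugging the given numbers in resolves the free-body-diagram confusion: only gravity (mg, down) and the normal force (N, up) act on the object, and their difference is what supplies the required centripetal force, so N = mg - m v^2 / r. A quick sketch of the arithmetic (g = 9.8 m/s^2 is my assumption; the problem fixes the mass through the 100 N weight):

```python
W_pole = 100.0          # weight at the pole (N): no rotation effect there
v = 465.0               # equatorial rotation speed (m/s)
r = 1.274e7 / 2         # Earth's radius from the given diameter (m)
g = 9.8                 # m/s^2, assumed

m = W_pole / g                  # mass of the object
Fc = m * v**2 / r               # required centripetal force (points inwards)
W_equator = W_pole - Fc         # scale reading: N = mg - m v^2 / r

print(f"Fc          = {Fc:.2f} N")
print(f"scale reads = {W_equator:.1f} N")
```

This reproduces the quoted 99.7 N: the scale reads less not because a new outward force appears, but because part of gravity is "used up" providing the centripetal acceleration.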
https://mathoverflow.net/questions/25170/surjectivity-of-bilinear-forms
# Surjectivity of bilinear forms

It is not uncommon to describe interesting classes of field extensions by declaring that an extension $L|K$ belongs to that class if some type of problem with $K$-coefficients has a property over $L$ if and only if it has the same property over $K$. I wonder about the following variant:

Question A: For which field extensions $L|K$ is the following true?: Given finite dimensional $K$-vector spaces $U,V,W$, a $K$-bilinear form $\beta_K:U\times V\to W$ is surjective if and only if the corresponding $L$-bilinear form $\beta_L$ obtained by scalar extension is surjective.

Already in characteristic zero an answer to that would be nice. Also, I wonder for which $L|K$ surjectivity of $\beta_K$ implies or is implied by surjectivity of $\beta_L$. One can take the geometric point of view: a bilinear form induces a map between associated projective spaces, and one asks here for surjectivity of these maps on $K$- or $L$-rational points. It is not hard to show that if $L$ is the reals or the $p$-adics, then surjectivity of a bilinear form over the rationals implies surjectivity with $L$-coefficients (the argument really uses both density and local compactness). This is the setting in which the problem originally arose. I was also asking the following

Question B: Given a bilinear form $\beta$ between finite dimensional $\mathbb Q$-vector spaces, is it true that $\beta$ is surjective if and only if for all primes $p$ (including $p=\infty$) the induced $\mathbb Q_p$-bilinear form is surjective?

The answer to that is negative; see Poonen's explicit example below.

• Maybe I have misunderstood your question, so I will leave this as a comment. A bilinear map $\beta_K : U\times V\to W$ is (by universal property) the same as a linear map $\beta_K: U\otimes_K V \to W$. Also note that the extension-by-scalars of a tensor product is the tensor product of extensions-by-scalars.
Anyway, then I think that if $\beta: X \to W$ is surjective, then any field extension is surjective, and conversely. (con't) – Theo Johnson-Freyd May 18 '10 at 23:14 • (con't) Indeed: for finite-dimensional spaces we can simply pick bases and then test whether $\beta$ is surjective by some elementary row operations, and those operations are defined over the minimum field that contains the matrix coefficients for $\beta$. In particular, the whole point of extending by scalars is that you don't change the matrix coefficients. So I think the answer to Question A is "all field extensions" and to Question B is "yes", but not for deep reasons. – Theo Johnson-Freyd May 18 '10 at 23:16 • So, since I think the question is trivial, I probably misinterpreted it. Or maybe you mean to be asking about extensions of rings? Then it is certainly nontrivial. – Theo Johnson-Freyd May 18 '10 at 23:17 • @Theo: A bilinear map induces a map on the tensor product, but they are not the same. In particular, the latter can be surjective even if the former is not. – Bjorn Poonen May 19 '10 at 1:50 • @Theo: Not every element of the tensor product space is decomposable, mathoverflow.net/questions/23478/… – Victor Protsak May 19 '10 at 2:10 The answer to Question B is no, as I'll show below. Let $U=V=\mathbf{Q}^3$ and $W=\mathbf{Q}^4$. Define $$\beta((u_1,u_2,u_3),(v_1,v_2,v_3))=(u_1 v_1,u_2 v_2,u_3 v_3, (u_1+u_2+u_3)(v_1+v_2+v_3)).$$ Claim 1: $\beta$ is not surjective. Proof: In fact, we will show that $(1,1,1,-1)$ is not in the image. If it were, then we would have a solution to $$(u_1+u_2+u_3)(u_1^{-1}+u_2^{-1}+u_3^{-1})=-1.$$ Clearing denominators leads to an elliptic curve in $\mathbf{P}^2$, but MAGMA shows that all its rational points lie in the lines where some coordinate vanishes. Claim 2: After base extension to any completion $k$ of $\mathbf{Q}$, the bilinear map $\beta$ becomes surjective. Proof: Given $(a_1,a_2,a_3,b) \in k^4$, we need to show that it is in the image. 
If $a_1=0$, then set $u_1=0$, $u_2=1$, $u_3=1$, $v_2=a_2$, $v_3=a_3$, and then solve for $v_1$ in the remaining constraint. The same argument works if $a_2=0$ or $a_3=0$. If $a_1,a_2,a_3$ are all nonzero, then we must find a solution to $$(u_1+u_2+u_3)(a_1 u_1^{-1}+a_2 u_2^{-1}+a_3 u_3^{-1})=b.$$ Clearing denominators leads to the equation of a projective plane curve with a smooth $k$-point $(1:-1:0)$, so by the implicit function theorem there exist nearby $k$-points with $u_1,u_2,u_3$ all nonzero, which gives us the solution we needed. $\square$ If you want a reasonable answer to Question A, I'd suggest that you try to make it more focused. • Nice! How to eliminate refering to MAGMA? – Victor Protsak May 19 '10 at 2:19 • Awesome! I will think a bit about question A and eventually ask for something precise. – Xandi Tuni May 19 '10 at 17:47 • @Victor: I used MAGMA only to compute a Weierstrass model for the Jacobian of the curve, and then to compute its group of rational points. The curve turns out to be 20A2 in Cremona's labelling, so the group of rational points can be looked up in tables instead of using MAGMA (it's Z/6Z, and the six rational points are the ones where one of u_1,u_2,u_3 is 0). This eliminates all but the computation of the Weierstrass model, and that too shouldn't be too difficult to do by hand. – Bjorn Poonen May 20 '10 at 3:20
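Both claims can be poked at numerically with the standard library. In the sketch below, an exhaustive search over low-height rationals finds no solution of $(u_1+u_2+u_3)(u_1^{-1}+u_2^{-1}+u_3^{-1})=-1$, consistent with Claim 1, while over $\mathbf{R}$ the one-parameter slice $u=(1,\,s-1,\,s)$ crosses the value $-1$ between $s=0.6$ and $s=0.7$, so bisection produces a real solution. The slice and the height bound are my own choices for illustration, not from the answer:

```python
from fractions import Fraction

def F(u1, u2, u3):
    """The rational function whose value -1 is obstructed over Q."""
    return (u1 + u2 + u3) * (1 / u1 + 1 / u2 + 1 / u3)

# Exhaustive exact search over nonzero rationals p/q, |p| <= 3, q <= 3.
rats = [Fraction(p, q) for p in range(-3, 4) if p for q in range(1, 4)]
hits = [(a, b, c) for a in rats for b in rats for c in rats
        if F(a, b, c) == -1]
print(hits)   # expected empty: no low-height rational solutions

# Over the reals: g(s) = F(1, s-1, s) + 1 changes sign on [0.6, 0.7].
def g(s):
    return F(1.0, s - 1.0, s) + 1.0

lo, hi = 0.6, 0.7
for _ in range(60):                  # plain bisection
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo, g(lo))    # a real s with F(1, s-1, s) = -1
```

Of course the empty search proves nothing by itself; it only fails to contradict the elliptic-curve argument, while the bisection step mirrors the smooth-point/implicit-function argument of Claim 2 in the simplest possible way.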
https://arxiv.org/abs/hep-th/9604044
hep-th

# Title: Integrable Structure of Conformal Field Theory II. Q-operator and DDV equation

Abstract: This paper is a direct continuation of [BLZ], where we began the study of the integrable structures in Conformal Field Theory. We show here how to construct the operators ${\bf Q}_{\pm}(\lambda)$ which act in the highest weight Virasoro module and commute for different values of the parameter $\lambda$. These operators appear to be the CFT analogs of the $Q$-matrix of Baxter [Bax]; in particular they satisfy Baxter's famous ${\bf T}$-${\bf Q}$ equation. We also show that, under natural assumptions about the analytic properties of the operators ${\bf Q}(\lambda)$ as functions of $\lambda$, Baxter's relation allows one to derive the nonlinear integral equations of Destri-de Vega (DDV) [dVega] for the eigenvalues of the ${\bf Q}$-operators. We then use the DDV equation to obtain the asymptotic expansions of the ${\bf Q}$-operators at large $\lambda$; it is remarkable that, unlike the expansions of the ${\bf T}$-operators of [BLZ], the asymptotic series for ${\bf Q}(\lambda)$ contains the "dual" nonlocal Integrals of Motion along with the local ones. We also discuss an intriguing relation between the vacuum eigenvalues of the ${\bf Q}$-operators and the stationary transport properties in the boundary sine-Gordon model. On this basis we propose a number of new exact results about finite-voltage charge transport through the point contact in a quantum Hall system.

Comments: Revised version, 43 pages, harvmac.tex.
Minor changes, references added
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat); Quantum Algebra (math.QA)
Journal reference: Commun.Math.Phys.190:247-278,1997
DOI: 10.1007/s002200050240
Report number: CLNS 96/1405, LPTENS 96/18
Cite as: arXiv:hep-th/9604044 (or arXiv:hep-th/9604044v2 for this version)

## Submission history

From: Sergei Lukyanov [view email]
[v1] Mon, 8 Apr 1996 21:55:15 GMT (34kb)
[v2] Wed, 17 Apr 1996 14:37:52 GMT (34kb)
http://physics.stackexchange.com/questions/57595/probability-amplitude-in-laymans-terms/57602
Probability amplitude in Layman's Terms

I am basically a Computer Programmer, but Physics has always fascinated and often baffled me. I have tried to understand probability density in Quantum Mechanics for many many years. What I understood is that the probability amplitude is the square root of the probability of finding an electron around a nucleus. But the square root of probability does not mean anything in the physical sense. Can anyone please explain the physical significance of probability amplitude in Quantum Mechanics? I read the Wikipedia article on probability amplitude many times over. What are those dumbbell-shaped images representing?

-

Before trying to understand quantum mechanics proper, I think it's helpful to try to understand the general idea of its statistics and probability. There are basically two kinds of mathematical systems that can yield a nontrivial formalism for probability. One is the kind we're familiar with from everyday life: each outcome has a probability, and those probabilities directly add up to 100%. A coin has two sides, each with 50% probability. $50\% + 50\% = 100\%$, so there you go.

But there's another system of probability, very different from what you and I are used to. It's a system where each event has an associated vector (or complex number), and the sum of the squared magnitudes of those vectors (complex numbers) is 1. Quantum mechanics works according to this latter system, and for this reason, the complex numbers associated with events are what we often deal with. The wavefunction of a particle is just the distribution of these complex numbers over space. We have chosen to call these numbers the "probability amplitudes" merely as a matter of convenience.

The system of probability that QM follows is very different from what everyday experience would expect us to believe, and this has many mathematical consequences. It makes interference effects possible, for example, and such is only explainable directly with amplitudes.
For this reason, amplitudes are physically significant--they are significant because the mathematical model for probability on the quantum scale is not what you and I are accustomed to.

Edit: regarding "just extra stuff under the hood." Here's a more concrete way of talking about the difference between classical and quantum probability. Let $A$ and $B$ be mutually exclusive events. In classical probability, they would have associated probabilities $p_A$ and $p_B$, and the total probability of them occurring is obtained through addition, $p_{A \cup B} = p_A + p_B$.

In quantum probability, their amplitudes add instead. This is a key difference. There is a total amplitude $\psi_{A \cup B} = \psi_A + \psi_B$, and the squared magnitude of this amplitude--that is, the probability--is as follows:

$$p_{A \cup B} = |\psi_A + \psi_B|^2 = p_A + p_B + (\psi_A^* \psi_B + \psi_A \psi_B^*)$$

There is an extra term, yielding physically different behavior. This quantifies the effects of interference, and for the right choices of $\psi_A$ and $\psi_B$, you could end up with two events that have nonzero individual probabilities, but the probability of the union is zero! Or higher than the individual probabilities.

- I'm not too happy with the formulation of "mathematical systems that can yield a nontrivial formalism for probability." Firstly, because it sounds like you imply that there are only these two "systems", and secondly, because the quantum framework is still one where "each outcome has a probability, and those probabilities directly add up to 100%." It's just extra dynamics under the hood. –  NiftyKitty95 Mar 21 '13 at 16:13

There are only these two systems. It is mathematically proven that you couldn't have, say, an amplitude that must be raised to the 4th power. There is only classical probability as we know it and the quantum kind. It's not just extra stuff under the hood, either. See my edit. –  Muphrid Mar 21 '13 at 16:28

Whatever is mathematically proven must be w.r.t.
some postulates, and these are not stated. Also, there are the observables whose probabilities sum to 100% (namely the probability to be in any of a total set of eigenstates), and in this sense it's just probability theory with complex dynamics under the hood. I still don't think this is an inappropriate formulation. –  NiftyKitty95 Mar 21 '13 at 18:23

Part of your problem is "Probability amplitude is the square root of the probability [...]" The amplitude is a complex number whose squared modulus is the probability. That is, $\psi^* \psi = P$, where the asterisk superscript means the complex conjugate.1

It may seem a little pedantic to make this distinction because so far the "complex phase" of the amplitudes has no effect on the observables at all: we could always rotate any given amplitude onto the positive real line and then "the square root" would be fine. But we can't guarantee to be able to rotate more than one amplitude that way at the same time. Moreover, there are two ways to combine amplitudes to find probabilities for observation of combined events.

• When the final states are distinguishable you add probabilities: $P_{dis} = P_1 + P_2 = \psi_1^* \psi_1 + \psi_2^* \psi_2$.

• When the final states are indistinguishable,2 you add amplitudes: $\Psi_{1,2} = \psi_1 + \psi_2$, and $P_{ind} = \Psi_{1,2}^*\Psi_{1,2} = \psi_1^*\psi_1 + \psi_1^*\psi_2 + \psi_2^*\psi_1 + \psi_2^*\psi_2$.

The terms that mix the amplitudes labeled 1 and 2 are the "interference terms". The interference terms are why we can't ignore the complex nature of the amplitudes, and they cause many kinds of quantum weirdness.

1 Here I'm using a notation reminiscent of a Schrödinger-like formulation, but that interpretation is not required. Just accept $\psi$ as a complex number representing the amplitude for some observation.

2 This is not precise; the states need to be "coherent", but you don't want to hear about that today.
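The distinguishable vs. indistinguishable rule above is easy to see numerically. Below is a small sketch; the amplitude values are arbitrary illustrative choices (equal magnitude, opposite phase), not taken from any particular physical system:

```python
import cmath

# Arbitrary illustrative amplitudes: equal magnitude, opposite phase.
psi1 = cmath.rect(0.5, 0.0)        # 0.5 * e^{i*0}
psi2 = cmath.rect(0.5, cmath.pi)   # 0.5 * e^{i*pi}, i.e. -0.5

# Distinguishable final states: add probabilities.
p_dis = abs(psi1) ** 2 + abs(psi2) ** 2

# Indistinguishable final states: add amplitudes first, then square.
p_ind = abs(psi1 + psi2) ** 2

print(p_dis)             # 0.5
print(round(p_ind, 12))  # 0.0 -- the interference terms cancel everything
```

Each event has probability 0.25 on its own, yet the combined probability is zero when the amplitudes are added coherently: complete destructive interference, exactly the behavior classical probability cannot produce.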
-

In quantum mechanics, the amplitude $\psi$, and not the probability $|\psi|^2$, is the quantity which admits the superposition principle. Notice that the dynamics of the physical system (Schrödinger equation) is formulated in terms of this object and is linear in it. Observe that working with superpositions of $\psi$ also permits complex phases $e^{i\theta}$ to play a role. In the same spirit, the overlap of two systems is computed by investigation of the overlap of the amplitudes.

- All you say is factually correct, but since the question asked for an explanation in layman's terms I think there needs to be more explanation. –  user9886 Mar 21 '13 at 16:21

@user9886: The integrals involving position operators are layman's terms? –  NiftyKitty95 Mar 21 '13 at 18:11

What is the benefit in using complex phases rather than just sine and cosine? –  wrongusername Feb 27 at 2:57

In quantum mechanics a particle is described by its wave-function $\psi$ (in spatial representation it would for example be $\psi(x,t)$, but I omit the arguments in the following). Observables, like the position $x$, are represented by operators $\hat x$. The mean value of the position of a particle is calculated as $$\int \mathrm{d}x\, \tilde \psi \hat x \psi.$$ Since $\hat x$ applied to $\psi(x,t)$ just gives the position $x$ times $\psi(x,t)$, we can write the integral as $$\int \mathrm{d}x\, x \tilde \psi \psi.$$ Here $\tilde \psi$ is the complex conjugate of $\psi$ and therefore $\tilde \psi \psi=|\psi|^2$. And finally, since a mean value is usually computed as an integral over the variable times a probability distribution $\rho$ as $$\langle X \rangle_\rho=\int \mathrm{d}X\, X \rho(X)$$ $|\psi|^2$ can be interpreted as a probability density of finding the particle at some point. E.g.
the probability of it being between $a$ and $b$ is $$\int_a^b\mathrm{d}x\,|\psi|^2$$

So the wave function (which is the solution to the Schrödinger equation that describes the system in question) is a probability amplitude in the sense of the first sentence of the article you linked. Lastly, the dumbbell shows the area in space where $|\psi|^2$ is larger than some very small number, so basically the regions where it is not unlikely to find the electron.

-

I agree with the other answers provided. However, you may find the probability amplitudes more intuitive in the context of the Feynman path integral approach. Suppose a particle is created at the location $x_1$ at time $0$ and that you want to know the probability for observing it later at some position $x_2$ at time $t$. Every path $P$ that starts at $x_1$ at time zero and ends at $x_2$ at time $t$ is associated with a (complex) probability amplitude $A_P$. Within the path integral approach, the total amplitude for the process initially described is given by the sum of all these amplitudes: $A_{\textrm{total}} = \sum_P A_P$, i.e. the sum over all possible paths the particle could take between $x_1$ and $x_2$. These paths interfere coherently, and the probability for observing the particle at $x_2$ at time $t$ is given by the square of the total amplitude:

$$\text{probability to observe the particle at } x_2 \text{ at time } t = |A_{\textrm{total}}|^2 = \Big|\sum_P A_P\Big|^2$$

I should note that the Feynman path integral formalism (described above) is actually a special case of a more general approach wherein the amplitudes are associated with processes rather than paths. Also, a good reference for this is volume 3 of The Feynman Lectures.

-

Have a look at this simplified statement in describing the behavior of a particle in a potential problem: In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density.
This complex number comes from a solution of a quantum mechanical equation with the boundary conditions of the problem, usually a Schrödinger equation, whose solutions are the "wavefunctions" $\psi(x)$, where $x$ represents the coordinates generically for this argument. The values taken by a normalized wave function $\psi$ at each point $x$ are probability amplitudes, since $|\psi(x)|^2$ gives the probability density at position $x$. To get from the complex numbers to a probability distribution, the probability of finding the particle, we have to take the complex square of the wavefunction, $\psi^*\psi$.

So the "probability amplitude" is an alternate definition/identification of "wavefunction", coming after the fact, when it was found experimentally that $\psi^*\psi$ gives a probability density distribution for the particle in question. First one computes $\psi$ and then one can evaluate the probability density $\psi^*\psi$, not the other way around. The significance of $\psi$ is that it is the result of a computation. I agree it is confusing for non-physicists who know probabilities from statistics.

-
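The "first compute $\psi$, then evaluate $\psi^*\psi$ as a density" pipeline can be illustrated numerically. The Gaussian $\psi$ below is just one convenient normalized example (it happens to be the harmonic-oscillator ground state in natural units), not tied to the question's potential:

```python
import math

def psi(x):
    # A normalized Gaussian wavefunction (real-valued here; amplitudes
    # are complex in general): psi(x) = pi^(-1/4) * exp(-x^2 / 2)
    return math.pi ** -0.25 * math.exp(-x ** 2 / 2)

def prob(a, b, n=10_000):
    # Midpoint-rule estimate of  integral_a^b |psi(x)|^2 dx
    h = (b - a) / n
    return sum(abs(psi(a + (k + 0.5) * h)) ** 2 for k in range(n)) * h

print(prob(-8, 8))  # ~1.0: total probability, confirming normalization
print(prob(-1, 1))  # ~0.84: probability of finding the particle in [-1, 1]
```

Note that `psi` itself takes negative values nowhere here, but it could; only `abs(psi(...)) ** 2` is ever interpreted as a probability, matching the point of the answer above.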
https://stacks.math.columbia.edu/tag/036U
Lemma 35.29.1. The property $\mathcal{P}(f)=$“$f$ is smooth” is smooth local on the source.

Proof. Combine Lemma 35.25.4 with Morphisms, Lemma 29.34.2 (local for Zariski on source and target), Morphisms, Lemma 29.34.4 (pre-composing), and Lemma 35.13.4 (part (4)). $\square$
https://byjus.com/question-answer/abscissa-of-all-points-on-the-x-axis-is-a-0-b-1-c-2/
Question

Abscissa of all points on the x-axis is
(a) 0
(b) 1
(c) 2
(d) any number

Solution

Disclaimer: The answer has been provided for the following question.

Ordinate of all points on the x-axis is
(a) 0
(b) 1
(c) 2
(d) any number

Solution: If we take any point on the x-axis, then the distance of this point from the x-axis is 0. Therefore, the ordinate of this point is 0. The coordinates of a point on the x-axis are of the form (x, 0). So, the ordinate of all points on the x-axis is 0.

Hence, the correct answer is option (a).
https://papers.nips.cc/paper/2019/file/f39ae9ff3a81f499230c4126e01f421b-MetaReview.html
NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center

Paper ID: 1800

Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks

The paper studies the dynamics of discrete gradient descent for overparametrized two-layer neural networks and shows that under certain conditions on the input/output covariance matrices and the initialization, the components of the input-output map are learned sequentially. The reviewers appreciated the contributions of the paper, both theory and experiments, and found the paper well written. At the same time, one reviewer feels the assumptions are too strong, and another feels that some claims are misleading (e.g. having "deep" in the title) and that the contributions relative to an uncited paper by Lampinen and Ganguli are incremental. Post rebuttal, the reviewer concluded that the novelty of the paper is buried in the appendix, and that a re-write of the paper is needed to elucidate that novelty in the body of the paper.

This AC agrees with R4 that the contributions relative to Lampinen and Ganguli need to be clearly established in the body of the paper and that a citation needs to be added. This AC also agrees that the title/abstract/body need to be changed to reflect that a shallow network with squared loss is being analyzed.
https://www.albert.io/ie/ap-calculus-ab-bc/integral-of-a-transformed-function
Free Version
Moderate

# Integral of a Transformed Function

APCALC-DEPJKL

Let $f$ be an everywhere differentiable function such that $\int_{4}^{8}f(2x)dx=6$. Which of the following is true?

A $\int_{8}^{16}f(x)dx=12$

B $\int_{8}^{16}f(x)dx=3$

C $\int_{2}^{4}f(x)dx=3$

D $\int_{2}^{4}f(x)dx=12$
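The substitution $u = 2x$, $du = 2\,dx$ gives $\int_4^8 f(2x)\,dx = \tfrac12\int_8^{16} f(u)\,du = 6$, so $\int_8^{16} f(x)\,dx = 12$. A quick numerical sanity check with one sample function: $f(x) = x/8$ is an arbitrary choice made so that the hypothesis holds, not part of the problem statement.

```python
# f(x) = x/8 satisfies the hypothesis: f(2x) = x/4 and
# int_4^8 (x/4) dx = (64 - 16)/8 = 6.
def f(x):
    return x / 8

def integrate(g, a, b, n=100_000):
    # Midpoint-rule numerical integration (exact for linear g, up to rounding).
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

print(integrate(lambda x: f(2 * x), 4, 8))  # ~6.0  (the hypothesis)
print(integrate(f, 8, 16))                  # ~12.0 (consistent with option A)
print(integrate(f, 2, 4))                   # ~0.75 (rules out options C and D)
```

A single sample function cannot prove the general statement, but the substitution argument above holds for any $f$ satisfying the hypothesis.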
https://www.tse-fr.eu/fr/articles/competition-two-sided-markets-common-network-externalities
Article

# Competition in two-sided markets with common network externalities

## Abstract

We study competition in two-sided markets with a common network externality rather than with the standard inter-group effects. This type of externality occurs when both groups benefit, possibly with different intensities, from an increase in the size of one group and from a decrease in the size of the other. We explain why common externality is relevant for the health and education sectors. We focus on the symmetric equilibrium and show that when the externality itself satisfies a homogeneity condition, platforms' profits and price structure have some specific properties. Our results reveal how the rents coming from network externalities are shifted by platforms from one side to the other, according to the degree of homogeneity. In the specific but realistic case where the common network externality is homogeneous of degree zero, platforms' profits do not depend on the intensity of the (common) network externality. This is in sharp contrast to conventional results stating that the presence of network externalities in a two-sided market structure increases the intensity of competition when the externality is positive (and decreases it when the externality is negative). Prices are affected, but in such a way that platforms only transfer rents from consumers to providers.

## JEL codes

- D42: Monopoly
- L11: Production, Pricing, and Market Structure; Size Distribution of Firms
- L12: Monopoly; Monopolization Strategies

## Replaces

TSE Working Paper, no. 09-103, October 2009, revised October 2010.

## Reference

Review of Industrial Organization, vol. 44, June 2014, pp. 327–359.

## Published in

Review of Industrial Organization, vol. 44, June 2014, pp. 327–359
https://www.physicsforums.com/threads/can-someone-explain.14066/
# Can someone explain?

1. Feb 8, 2004

### Rebel_Yell

How do you set up equations for sum of forces in the x and y directions? I can do the y when it is only moving in the x direction, but I don't know how to set up the x equation.

2. Feb 8, 2004

### HallsofIvy

Staff Emeritus
Yes, of course, you can do the "y direction when it is only moving in the x direction" because that's trivial! I'm not sure what you really meant.

Normally one starts with "movement in a line", where we only have to worry about one direction. If something is moving on the x-axis with a force in the x-direction, then v_x = at + v_0 and x = (1/2)at^2 + v_0t + x_0.

Now you have motion in the plane (or 3 dimensions with x, y, z), so you have to separate everything into x, y, z components. If you are given "initial velocity" or "force vector" as length and direction, then you need to use trigonometry to decompose into x and y components.

For example, if the problem says "the initial speed is 20 m/s at a direction 30 degrees above the x-axis", draw a picture showing a line at 30 degrees above the x-axis. Mark off a "length" of 20 on that line and draw a perpendicular from that point to the x-axis. You see a right triangle with hypotenuse of length 20, "near side" along the x-axis (so its length is x), and "opposite side" parallel to the y-axis (so its length is y). Remembering that "sine" is defined as "opposite over hypotenuse", sin(30) = y/20, or y = 20 sin(30). Remembering that "cosine" is defined as "near side over hypotenuse", cos(30) = x/20, or x = 20 cos(30).

Once you have those components, you can just work with "x" and "y" separately - that's the nice thing about using components.
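The decomposition described in the answer above can be sketched in a few lines; the numbers match the worked example (20 m/s at 30 degrees above the x-axis):

```python
import math

speed = 20.0               # m/s, the hypotenuse of the right triangle
angle = math.radians(30)   # 30 degrees above the x-axis

vx = speed * math.cos(angle)  # near side / hypotenuse -> cosine
vy = speed * math.sin(angle)  # opposite side / hypotenuse -> sine

print(round(vx, 3))  # 17.321  (= 20 cos 30)
print(round(vy, 3))  # 10.0    (= 20 sin 30)
```

Once decomposed this way, each component can be plugged into its own one-dimensional equation of motion, which is exactly the point of working with components.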
https://www.physicsforums.com/threads/general-questions.215487/
# General Questions

1. Feb 14, 2008

### moorec32

I'm having trouble answering these questions and I was curious to see how others would answer them:

If the potential at a point is zero, must the electric field also be zero? (No) Give an example.

How does the energy stored in a capacitor change when a dielectric is inserted if (a) the capacitor is isolated so Q doesn't change, (b) the capacitor remains connected to a battery so V doesn't change?

Is there a point along the line joining two equal positive charges where the electric field is zero? Where the electric potential is zero? Explain.

2. Feb 15, 2008

### Shooting Star

First you should give your answers with your attempts at the explanation, so that we can understand your thoughts on these. Then you'll get all the help needed.
http://math.stackexchange.com/questions/318357/sum-k-1n-1kp-lt-frac-np1-p1-lt-sum-k-1nkp
# $\sum _{k=1}^{n-1}k^p \lt \frac {n^{p+1}} {p+1} \lt \sum _{k=1}^{n}k^p$

I'm going through a proof of the theorem that says $\int_0^bx^pdx = \frac {b^{p+1}}{p+1}$, and it begins with the inequality

$$\sum _{k=1}^{n-1}k^p \lt \frac {n^{p+1}} {p+1} \lt \sum _{k=1}^{n}k^p$$

What I'm having trouble understanding is where this middle term came from.

- induction doesn't help? –  Arjang Mar 2 '13 at 2:33

I'm not trying to prove it. I'm trying to see where it came from. I know that the left and right side are partitions of the function $x^p$ on the interval $[0,b]$, but what I'm trying to figure out is where the middle term came from. –  AlexHeuman Mar 2 '13 at 2:35

The induction comment was meant toward showing the inequalities. –  Alex R. Mar 2 '13 at 2:37

looks like the upper and lower bounds for a Riemann Integral, no? –  Arjang Mar 2 '13 at 2:37

The middle term seems to be the integral, the left and right terms are the bounding rectangles, one bounds the integral from above and the other one from below, so when the integral in the middle is evaluated to the term in the middle, it will be bounded above and below by the same values. Have you seen a picture of a Riemann integral with rectangles above and below the function? –  Arjang Mar 2 '13 at 2:42

Since this inequality is being used to prove an integral, perhaps a non-calculus proof would be best.
At the end of this answer is a simple inductive proof of Bernoulli's Inequality: for non-negative integer $n$ and $x\ge-1$, $$1+nx\le(1+x)^n\tag{1}$$

Applying $(1)$: \begin{align} 1-\frac{p+1}{k}&\le\left(1-\frac1k\right)^{p+1}\\ k^{p+1}-(p+1)k^p&\le(k-1)^{p+1}\\ k^{p+1}-(k-1)^{p+1}&\le(p+1)k^p\tag{2} \end{align}

Applying $(1)$ again: \begin{align} 1+\frac{p+1}{k-1}&\le\left(1+\frac1{k-1}\right)^{p+1}\\ \frac{p+1}{k-1}&\le\left(1+\frac1{k-1}\right)^{p+1}-1\\ (p+1)(k-1)^p&\le k^{p+1}-(k-1)^{p+1}\tag{3} \end{align}

Combining $(2)$ and $(3)$, $$(p+1)(k-1)^p\le k^{p+1}-(k-1)^{p+1}\le(p+1)k^p\tag{4}$$

Summing $(4)$ from $1$ to $n$ and dividing by $p+1$ yields $$\sum_{k=0}^{n-1}k^p\le\frac{n^{p+1}}{p+1}\le\sum_{k=1}^nk^p\tag{5}$$

Note that for $p=0$, we have equality in $(5)$. I've added a strict case to the proof of Bernoulli that, when replacing $(1)$ for $n\ge2$, yields that for $p\ge1$, we have strict inequality: $$\sum_{k=0}^{n-1}k^p\lt\frac{n^{p+1}}{p+1}\lt\sum_{k=1}^nk^p\tag{6}$$

-

I didn't realize I was going to get something slightly different from what you're being given when I started to write this, but I leave you an idea to prove that $$\int_0^a x^p\,dx=\frac{a^{p+1}}{p+1}$$ The idea is the following:

PROP Let $$A(n)=\sum_{k=1}^n k^p$$ Then $A(n)=\frac{n^{p+1}}{p+1}+P_p(n)$ where $\operatorname{deg}P_p\leq p$.

PROOF By induction on $p$. For $p=1$, we obtain $$\sum\limits_{k = 1}^n k = \frac{n^2}{2} + \frac{n}{2}$$ Assume true for $m\leq p-1$ and consider $p$.
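The strict inequality $(6)$ is easy to confirm numerically for small $n$ and integer $p\ge 1$; a throwaway check:

```python
def check(n, p):
    # Strict inequality (6): sum_{k=0}^{n-1} k^p < n^{p+1}/(p+1) < sum_{k=1}^{n} k^p
    lower = sum(k ** p for k in range(n))         # k = 0 .. n-1
    upper = sum(k ** p for k in range(1, n + 1))  # k = 1 .. n
    mid = n ** (p + 1) / (p + 1)
    return lower < mid < upper

# Exhaustive check over a small grid of n and p values.
print(all(check(n, p) for n in range(2, 50) for p in range(1, 6)))  # True
```

This is of course no substitute for the proof via Bernoulli's inequality above, but it is a useful sanity check that the bounds and index ranges are stated correctly.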
We exploit the fact that $$(k + 1)^{p+1} - k^{p+1} = \sum\limits_{m = 0}^{p} \binom{p+1}{m} k^m$$

Summing from $k=1$ to $n$, the left side telescopes and we have $$(n + 1)^{p + 1} - 1 = \sum\limits_{m = 0}^p \binom{p+1}{m} \sum\limits_{k = 1}^n k^m$$ or $$(n + 1)^{p + 1} - 1 = \sum\limits_{m = 0}^{p - 1} \binom{p+1}{m} \sum\limits_{k = 1}^n k^m + \left( {p + 1} \right)\sum\limits_{k = 1}^n k^p$$

Now, our inductive hypothesis is that $$\sum\limits_{k = 1}^n k^m = \frac{n^{m + 1}}{m + 1} + P_m\left( n \right)$$

This gives us that $$\frac{(n + 1)^{p + 1}}{p + 1} - \frac{1}{p + 1} - \frac{1}{p + 1}\sum\limits_{m = 0}^{p - 1} \binom{p+1}{m} \left( \frac{n^{m + 1}}{m + 1} + P_m\left( n \right) \right) = \sum\limits_{k = 1}^n k^p$$

Now, let's look at that big mess above. Observe that by the binomial theorem, we can write $$\frac{(n + 1)^{p + 1}}{p + 1} = \frac{n^{p + 1}}{p + 1} + Q_p\left( n \right)$$ where the degree of $Q_p$ is $p$. Note also that $$\sum\limits_{m = 0}^{p - 1} \binom{p+1}{m} \left( \frac{n^{m + 1}}{m + 1} + P_m\left( n \right) \right) = \sum\limits_{m = 0}^{p - 1} \binom{p+1}{m} \frac{n^{m + 1}}{m + 1} + \sum\limits_{m = 0}^{p - 1} \binom{p+1}{m} P_m\left( n \right)$$

The first term on the right is a polynomial of degree $p$. The second term, the sum of the polynomials $P_m$, also has degree at most $p$, since it's a sum of polynomials of degree at most $p$.
All in all, we may write $$\frac{(n+1)^{p+1}}{p+1}-\frac{1}{p+1}-\frac{1}{p+1}\sum_{m=0}^{p-1}\binom{p+1}{m}\left(\frac{n^{m+1}}{m+1}+P_m(n)\right)=\frac{n^{p+1}}{p+1}+W_p(n)$$ where $W_p$ has degree at most $p$. Thus $$\sum_{k=1}^{n}k^p=\frac{n^{p+1}}{p+1}+W_p(n)$$ and the proposition is proven. Now, we look at our integral approximation, the Riemann sum $$\frac{a}{n}\sum_{k=1}^{n}\left(\frac{ak}{n}\right)^p$$ Because of the above, we have that $$\frac{a}{n}\sum_{k=1}^{n}\left(\frac{ak}{n}\right)^p=\frac{a^{p+1}}{n^{p+1}}\sum_{k=1}^{n}k^p=\frac{a^{p+1}}{p+1}+a^{p+1}\frac{W_p(n)}{n^{p+1}}$$ Since $W_p(n)$ has degree at most $p$, it follows that $$\frac{W_p(n)}{n^{p+1}}\to 0$$ when $n\to\infty$, so we conclude that $$\lim_{n\to\infty}\frac{a}{n}\sum_{k=1}^{n}\left(\frac{ak}{n}\right)^p=\frac{a^{p+1}}{p+1}$$ - We use the fact that $$a^p-b^p=(a-b)\sum_{k=0}^{p-1}a^kb^{p-k-1}$$ Because of that, we have $$\tag 1 n^p<\frac{(n+1)^{p+1}-n^{p+1}}{p+1}<(n+1)^p$$ Indeed, by the above (with exponent $p+1$, $a=n+1$ and $b=n$), $$(n+1)^{p+1}-n^{p+1}=\sum_{k=0}^{p}n^k(n+1)^{p-k}$$ that is, $$(n+1)^{p+1}-n^{p+1}=\sum_{k=0}^{p}\left(\frac{n}{n+1}\right)^k(n+1)^p$$ but for any choice of $n$ $$\frac{n}{n+1}<1$$ so that $$(n+1)^{p+1}-n^{p+1}=\sum_{k=0}^{p}\left(\frac{n}{n+1}\right)^k(n+1)^p<\sum_{k=0}^{p}(n+1)^p=(p+1)(n+1)^p$$ and one side is done.
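The convergence of the Riemann sum for $\int_0^a x^p\,dx$ to $a^{p+1}/(p+1)$ is easy to observe numerically (an illustrative snippet, not part of the answer):

```python
def riemann(a, p, n):
    # Right-endpoint Riemann sum (a/n) * sum_{k=1}^n (a*k/n)^p
    # approximating the integral of x^p over [0, a].
    return (a / n) * sum((a * k / n)**p for k in range(1, n + 1))

a, p = 2.0, 3
exact = a**(p + 1) / (p + 1)            # = 4.0 for these values
assert abs(riemann(a, p, 10_000) - exact) < 1e-2
```

The error shrinks like $W_p(n)/n^{p+1}\sim 1/n$, consistent with the degree bound on $W_p$.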
But similarly, $$\frac{n+1}{n}>1$$ for any choice of $n$, so $$(p+1)n^p=\sum_{k=0}^{p}n^p<\sum_{k=0}^{p}\left(\frac{n+1}{n}\right)^{p-k}n^p=\sum_{k=0}^{p}(n+1)^{p-k}n^k=(n+1)^{p+1}-n^{p+1}$$ and we get what we wanted. What you want follows from summing the inequalities in $(1)$ telescopically. - Could you not just say that $n^p\le n^k(n+1)^{p-k}\le(n+1)^p$? Otherwise, it looks good (+1) – robjohn Mar 16 '13 at 16:39 For $p>0$ this can be proved easily by induction on $n$: $$\sum_{k=1}^{n-1}k^p < \frac{n^{p+1}}{p+1} < \sum_{k=1}^{n}k^p$$ Suppose this is true for $n=m$. Then for $n=m+1$ we have $$\sum_{k=1}^{m}k^p = \sum_{k=1}^{m-1}k^p+m^p$$ If we can show that $m^p<\dfrac{(m+1)^{p+1}-m^{p+1}}{p+1}$ then we are done: adding this to the inductive hypothesis gives the left inequality for $n=m+1$. This can easily be proved using the mean value theorem. Consider the function $f(x)=x^{p+1}$. Applying the mean value theorem, $\exists c\in(m,m+1)$ such that $$\frac{(m+1)^{p+1}-m^{p+1}}{m+1-m}=f'(c)$$ Since $c>m$ and $p>0$, $$\frac{(m+1)^{p+1}-m^{p+1}}{m+1-m}=(p+1)c^p>(p+1)m^p$$ $$\Rightarrow \frac{(m+1)^{p+1}-m^{p+1}}{p+1}>m^p$$ The right side can also be proved similarly, using $c<m+1$. -
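Inequality $(1)$, on which both the telescoping and the induction arguments rest, can likewise be spot-checked with exact arithmetic (my snippet):

```python
# Check  n^p < ((n+1)^(p+1) - n^(p+1)) / (p+1) < (n+1)^p  for p >= 1,
# using exact rationals so the strict inequalities are meaningful.
from fractions import Fraction

def strict_bounds(n, p):
    mid = Fraction((n + 1)**(p + 1) - n**(p + 1), p + 1)
    return n**p < mid < (n + 1)**p

assert all(strict_bounds(n, p) for n in range(1, 100) for p in range(1, 6))
```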
https://www.bartleby.com/solution-answer/chapter-9-problem-90ap-college-physics-11th-edition/9781305952300/oil-having-a-density-of-930-kgm3-floats-on-water-a-rectangular-block-of-wood-400-cm-high-and-with/d2c4f863-98d8-11e8-ada4-0ee91056875a
Chapter 9, Problem 90AP

### College Physics

11th Edition, Raymond A. Serway + 1 other, ISBN: 9781305952300

Textbook Problem

# Oil having a density of 930 kg/m3 floats on water. A rectangular block of wood 4.00 cm high and with a density of 960 kg/m3 floats partly in the oil and partly in the water. The oil completely covers the block. How far below the interface between the two liquids is the bottom of the block?

To determine

The distance of the bottom of the block below the interface between the two liquids.

Explanation

Let $x$ be the depth of the bottom of the block below the oil-water interface, so the block displaces a height $d-x$ of oil and a height $x$ of water. The buoyant force on the floating block must be equal to the weight of the block, that is,

$\rho_{oil}[A(d-x)]g+\rho_{water}[Ax]g=\rho_{wood}(Ad)g$

where $A$ is the surface area of the top or bottom of the rectangular block. Solving this equation, we can calculate the distance as

$x=\left(\dfrac{\rho_{wood}-\rho_{oil}}{\rho_{water}-\rho_{oil}}\right)d$

Given info: The density of oil is $930\,\mathrm{kg/m^3}$, the height of the block is $4.00\,\mathrm{cm}$, the density of the wood is $960\,\mathrm{kg/m^3}$, and the density of water is $1.00\times10^3\,\mathrm{kg/m^3}$.

- $\rho_{wood}$ is the density of wood
- $\rho_{oil}$ is the density of oil
- $\rho_{water}$ is the density of water
- $d$ is the height of the block
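As a quick arithmetic check (my snippet, not part of the textbook solution), plugging the given values into the formula above:

```python
# Given values from the problem statement.
rho_oil, rho_water, rho_wood = 930.0, 1000.0, 960.0  # kg/m^3
d = 4.00  # block height, cm

# Depth of the bottom of the block below the oil-water interface.
x = (rho_wood - rho_oil) / (rho_water - rho_oil) * d
print(f"x = {x:.2f} cm")  # -> x = 1.71 cm
```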
http://mathhelpforum.com/differential-geometry/169577-topology-specifically-homotopy-question.html
# Math Help - Topology (specifically homotopy) question!

1. ## Topology (specifically homotopy) question!

Could anybody help me with this topology question?

i) Prove that every map $e: X\to \mathbb{R}^n$ is homotopic to a constant map.

ii) If $f: X\to S^n$ is a map that is not onto (surjective), show that $f$ is homotopic to a constant map.

It's part of a past exam paper but it does not come with solutions. Any help on the solution would be greatly appreciated. Thanks.

2. For (i), try to show that $e$ is homotopic to the zero map. It should be rather clear, geometrically, how to do this. For (ii), note that $S^n \setminus \{x_0\} \cong \mathbb{R}^n$ (via stereographic projection), so if $f$ misses a point $x_0$, it factors through a space where (i) applies.
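3. To spell out the hint for (i) a little (my sketch, not the original reply), the straight-line homotopy does the job:

```latex
% Straight-line homotopy from e to the constant map at 0 in R^n.
H \colon X \times [0,1] \to \mathbb{R}^n, \qquad H(x,t) = (1-t)\,e(x).
% H is continuous as a composition of continuous maps, with
% H(x,0) = e(x) and H(x,1) = 0, so e is homotopic to a constant map.
```

This works because $\mathbb{R}^n$ is convex; (ii) then follows by composing $f$ with the homeomorphism $S^n \setminus \{x_0\} \cong \mathbb{R}^n$.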
https://www.dsprelated.com/blogs-1/nf/Qasim_Chaudhari.php
## There and Back Again: Time of Flight Ranging between Two Wireless Nodes

With the growth in the Internet of Things (IoT) products, the number of applications requiring an estimate of range between two wireless nodes in indoor channels is growing very quickly as well. Therefore, localization is becoming a red hot market today and will remain so in the coming years. One question that is perplexing is that many companies nowadays are offering cm level accurate solutions using RF signals. The conventional wireless nodes usually implement synchronization...

## A Beginner's Guide to OFDM

In the recent past, high data rate wireless communications is often considered synonymous with an Orthogonal Frequency Division Multiplexing (OFDM) system. OFDM is a special case of multi-carrier communication as opposed to a conventional single-carrier system. The concepts on which OFDM is based are so simple that almost everyone in the wireless community is a technical expert in this subject. However, I have always felt an absence of a really simple guide on how OFDM works which can...

## Minimum Shift Keying (MSK) - A Tutorial

Minimum Shift Keying (MSK) is one of the most spectrally efficient modulation schemes available. Due to its constant envelope, it is resilient to non-linear distortion and was therefore chosen as the modulation technique for the GSM cell phone standard. MSK is a special case of Continuous-Phase Frequency Shift Keying (CPFSK) which is a special case of a general class of modulation schemes known as Continuous-Phase Modulation (CPM). It is worth noting that CPM (and hence CPFSK) is a...

## Some Thoughts on Sampling

Some time ago, I came across an interesting problem. In the explanation of sampling process, a representation of impulse sampling shown in Figure 1 below is illustrated in almost every textbook on DSP and communications. The question is: how is it possible that during sampling, the frequency axis gets scaled by $1/T_s$ -- a very large number?
For an ADC operating at 10 MHz for example, the amplitude of the desired spectrum and spectral replicas is $10^7$! I thought that there must be...
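The $1/T_s$ scaling the sampling post asks about can be probed numerically via Poisson summation: the DTFT of the samples equals the sum of spectral replicas, each scaled by $1/T_s$. A spot check with a self-dual Gaussian pulse (my example, not the author's):

```python
import math

Ts = 0.1  # sampling period, so 1/Ts = 10
x = lambda t: math.exp(-math.pi * t * t)   # pulse whose FT is X(f) = exp(-pi f^2)
X = lambda f: math.exp(-math.pi * f * f)

# The DTFT of the samples at f = 0 is just the sum of the samples.
# Poisson summation says it equals (1/Ts) * sum_k X(k/Ts), i.e. the
# replicated spectrum scaled by 1/Ts (here the replicas are negligible).
lhs = sum(x(n * Ts) for n in range(-1000, 1001))
rhs = (1 / Ts) * sum(X(k / Ts) for k in range(-5, 6))
assert abs(lhs - rhs) < 1e-9
```

The sum of samples lands at roughly $1/T_s = 10$, not at $1$: exactly the spectral amplification the post describes.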
https://www.arxiv-vanity.com/papers/astro-ph/0509875/
New H-band Galaxy Number Counts: A Large Local Hole in the Galaxy Distribution?

W.J. Frith, N. Metcalfe & T. Shanks

Dept. of Physics, Univ. of Durham, South Road, Durham DH1 3LE, UK

Accepted 2005. Received 2005; in original form 2005

Abstract

We examine -band number counts determined using new photometry over two fields with a combined solid angle of 0.30 deg to , as well as bright data () from the 2 Micron All Sky Survey (2MASS). First, we examine the bright number counts from 2MASS extracted for the 4000 deg APM survey area situated around the southern galactic pole. We find a deficiency of 25 per cent at with respect to homogeneous predictions, in line with previous results in the -band and -band. In addition we examine the bright counts extracted for 20 (covering 27000 deg); we find a relatively constant deficit in the counts of 15-20 per cent to . We investigate various possible causes for these results; namely, errors in the model normalisation, unexpected luminosity evolution (at low and high redshifts), errors in the photometry, incompleteness and large-scale structure. In order to address the issue of the model normalisation, we examine the number counts determined for the new faint photometry presented in this work and also for faint data (20) covering 0.39 deg from the Las Campanas Infra Red Survey (LCIRS). In each case a zeropoint is chosen to match that of the 2MASS photometry at bright magnitudes, using several hundred matched point sources. We find a large offset between 2MASS and the LCIRS data of 0.28±0.01 magnitudes. Applying a consistent zeropoint, the faint data, covering a combined solid angle of 0.69 deg, is in good agreement with the homogeneous prediction used previously, with a best fit normalisation a factor of 1.095 higher. We examine possible effects arising from unexpected galaxy evolution and photometric errors and find no evidence for a significant contribution from either.
However, incompleteness in the 2MASS catalogue ( per cent) and in the faint data (likely to be at the few  per cent level) may have a significant contribution. Addressing the contribution from large-scale structure, we estimate the cosmic variance in the bright counts over the APM survey area and for 20 expected in a CDM cosmology using 27 mock 2MASS catalogues constructed from the CDM Hubble Volume simulation. Accounting for the model normalisation uncertainty and taking an upper limit for the effect arising from incompleteness, the APM survey area bright counts are in line with a rare fluctuation in the local galaxy distribution of . However, the 20 counts represent a fluctuation, and imply a local hole which extends over the entire local galaxy distribution and is at odds with CDM. The increase in faint near infrared data from the UK Infrared Deep Sky Survey (UKIDSS) should help to resolve this issue.

keywords: galaxies: photometry - cosmology: observations - large-scale structure of the Universe - infrared: galaxies

1 Introduction

A recurring problem arising from the study of bright galaxy number counts has been the measured deficiency of galaxies around the southern galactic pole. This was first examined in detail by Shanks (1990) and subsequently by the APM galaxy survey (Maddox et al., 1990a), which observed a large deficit in the number counts (50 per cent at 16, 30 per cent at 17) over a 4000 deg solid angle. If this anomaly was due solely to features in the galaxy distribution, this would be at odds with recent measurements of the variance of local galaxy density fluctuations (e.g. Hawkins et al., 2003; Frith et al., 2005b; Cole et al., 2005) or the expected linear growth of density inhomogeneities at large scales. Maddox et al.
(1990b) examined possible causes of this deficiency. From redshift survey results over the APM survey area (Loveday et al., 1992), it was argued that a weak local under-density contributed to the observed deficiency at the 10 per cent level at 17. Instead, Maddox et al. (1990b) suggested that strong low redshift galaxy evolution was the dominant contribution. This phenomenon has also been suggested as a possible explanation for large deficiencies in the Sloan Digital Sky Survey (SDSS; Loveday, 2004), although models without such strong low redshift evolution provide predictions consistent with observed number redshift distributions (e.g. Broadhurst et al., 1988; Colless et al., 1990; Hawkins et al., 2003). In contrast, Shanks (1990) argued that evolution could not account for the observed slope and that large-scale structure was the principal cause of the deficiency in the counts. However, another possible contribution to the low counts might be errors in the APM photometry. Comparing the photographic APM photometry with -band CCD data, Metcalfe et al. (1995) detected a small residual scale error in the APM survey zeropoints for 17. Correcting for this offset, the counts were now in good agreement with homogeneous predictions at faint magnitudes (17.5); however, the problematic deficiency at brighter magnitudes remained. More recently, Busswell et al. (2004) used -band CCD data over 337 deg within the APM survey area to provide the most accurate comparison to date with a sample of the APM survey photometry. The photometric zeropoint of this CCD data was in excellent agreement with the Millennium Galaxy Catalogue (Driver, 2003) and the Sloan Digital Sky Survey Early Data Release (Yasuda et al., 2001). However, a comparison with the APM photometry suggested a large offset of 0.31 magnitudes for 17.35. Applying this to the APM survey counts, a deficiency of 25 per cent remained at 16; Busswell et al. 
(2004) determined that such a deficiency in the local galaxy distribution would still be at odds with a CDM form for the galaxy correlation function and power spectrum at large scales. In order to examine this issue independently, bright number counts have also been examined in the near infrared (Frith et al., 2003, 2004, 2005a). These wavelengths are particularly useful for such analysis as the number count predictions are fairly insensitive to the evolutionary model or the assumed cosmology at bright magnitudes (see Fig. 1); current observations are in remarkable agreement with predictions in the -band to 23 for example (McCracken et al., 2000). In particular, Frith et al. (2005a) examined -band number counts selected from the 2 Micron All Sky Survey (2MASS; Jarrett, 2004). First, the counts over the APM survey area were determined; a deficiency similar to that in the APM survey counts was observed (with the zeropoint offset determined by Busswell et al. (2004) applied), with a 25 per cent deficit at 12 compared to the no evolution model of Metcalfe et al. (2001). Using a CDM form for the angular correlation function at large scales and assuming the observed counts were solely due to features in the local galaxy distribution, the observed counts represented a fluctuation. However, this result was complicated by the fact that the 2MASS -band number counts for almost the entire survey (20, covering 27000 deg) were also low, with a constant deficiency of 20 per cent between 10 and 13.5. Did this surprising result perhaps indicate that the -band Metcalfe et al. (2001) model normalisation was too high? Or, as suggested previously, could low redshift luminosity evolution significantly affect the bright counts? These issues were also addressed by Frith et al. (2005a): First, the Metcalfe et al. (2001) model was compared with faint -band data collated from the literature.
Fitting in the magnitude range 1418, it was found that the best fit model normalisation was slightly too high, although not significantly (this magnitude range was used so as to avoid fluctuations in the counts arising from large-scale structure at bright magnitudes and significant effects from galaxy evolution at the faint end). Accounting for the normalisation uncertainty (of 6 per cent), the observed deficiency in the -band counts over the APM survey area still represented a fluctuation. Second, the issue of low redshift luminosity evolution was also addressed: 2MASS galaxies below were matched with the Northern and Southern areas of the 2dF Galaxy Redshift Survey (2dFGRS; Colless et al., 2001). The resulting , covering deg in total, was consistent with the no evolution model of Metcalfe et al. (2001). In addition, these -band redshift distributions were used to form predictions for the number counts over the Northern and Southern 2dFGRS areas respectively. This was done by multiplying the luminosity function parameter (which governs the model normalisation) used in the Metcalfe et al. (2001) model by the relative density observed in the -band as a function of redshift. These ‘variable models’ were then compared with 2MASS counts extracted for the 2dFGRS areas in order to determine whether the observed counts were consistent with being due solely to features in the local galaxy distribution; the variable models were in good agreement with the number counts, indicating that low redshift luminosity evolution is unlikely to have a significant impact on the observed deficiency in the counts, in the -band at least.
(2005a) - the issue of the number count model normalisation; while the -band model used was compared with faint data and was found to be in good agreement, the level to which systematic effects, arising perhaps via zeropoint offsets between the bright and faint data or cosmic variance in the faint data, might affect the conclusions was uncertain. We address this issue in the -band using new faint data covering 0.3 deg to , calibrated to match the 2MASS zeropoint. In section 2, we first verify that the -band counts provide number counts over the APM survey area which are consistent with the previous results in the and -bands (Busswell et al., 2004; Frith et al., 2005a), and that the form of the counts is not significantly affected by low redshift luminosity evolution through comparisons with the variable models described above. In section 3, we provide details of the data reduction of the new faint -band photometry. The associated counts are presented in section 4. In section 5 we discuss possible systematics affecting the bright number counts including the model normalisation and incompleteness. The conclusions follow in section 6.

2 Bright H-band counts from 2MASS

We wish to examine the form of bright number counts in the -band in order to verify that the counts over the APM survey area (4000 deg around the southern galactic pole) are comparable to those measured previously in the optical -band and near infrared -band (Busswell et al., 2004; Frith et al., 2005a). The near infrared has the advantage of being sensitive to the underlying stellar mass and is much less affected by recent star formation history than optical wavelengths. For this reason, number count predictions in the near infrared are insensitive to the evolutionary model at bright magnitudes. In Fig. 1 we show faint -band data collated from the literature along with bright counts extracted from 2MASS over 27000 deg.
The 2MASS magnitudes are determined via the 2MASS -band extrapolated magnitude; this form of magnitude estimator has previously been shown to be an excellent estimate of the total flux in the -band (Frith et al., 2005b, c) through comparison with the total magnitude estimates of Jones et al. (2004) and the -band photometry of Loveday (2000). Throughout this paper we use 2MASS -band counts determined via this magnitude estimator. We also show two models in Fig. 1 corresponding to homogeneous predictions assuming no evolution and pure luminosity evolution models. These are constructed from the -band luminosity function parameters listed in Metcalfe et al. (2005) and the -corrections of Bruzual & Charlot (1993). At bright magnitudes the two are indistinguishable; only at 18 do the model predictions begin to separate. The faint data is in good agreement with both the no evolution and pure luminosity evolution predictions to 26. Before examining the -band counts over the APM survey area, we first verify that the bright counts are consistent with relatively insignificant levels of low redshift luminosity evolution in the manner carried out by Frith et al. (2005a) for the -band counts. In the upper panels of Fig. 2 we show -band to the 2MASS limiting magnitude of , determined through matched 2MASS and 2dFGRS galaxies over the 2dFGRS Northern (left hand) and Southern (right hand panels) declination strips (see Frith et al. (2005a) for further details of the matching technique). The solid lines indicate the expected homogeneous distribution constructed from the pure luminosity evolution predictions of Metcalfe et al. (2005) (there is no discernible difference between this and the no evolution prediction). In the lower panels we divide through by this prediction; these panels show the relative density as a function of redshift. 
The observed are consistent with the expected trends, with relatively homogeneous distributions beyond (1 per cent and 8 per cent over-dense in the North and South respectively for ). For this reason, Fig. 2 suggests that the level of luminosity evolution is relatively insignificant at low redshifts in the -band; strong luminosity evolution produces an extended tail in the predicted which is not observed in the data. As a further check against strong low redshift luminosity evolution, we can use the observed to predict the expected -band number counts over the 2dFGRS declination strips. This technique is described in detail in Frith et al. (2003, 2005a). To recap, we use the observed density (Fig. 2, lower panels), to vary the luminosity function normalisation () used in the Metcalfe et al. (2005) model as a function of redshift (for ). We show these ‘variable models’ along with the 2MASS -band counts extracted for the 2dFGRS strips in Fig. 3. In each case, the upper panels indicate the number count on a logarithmic scale; in the lower panels we divide through by the homogeneous prediction. In both the Northern and Southern 2dFGRS areas, the counts are in good agreement with the expected trend, defined by the corresponding variable model. This indicates that real features in the local galaxy distribution are the dominant factor in the form of the observed -band number counts, and that strong low redshift luminosity evolution is unlikely to have a significant role in any under-density observed in the APM survey area. We are now in a position to examine the number counts over the APM survey area. In Fig. 4 we show counts extracted for the 4000 deg field along with the homogeneous and the Northern and Southern 2dFGRS variable models shown in Fig. 2. The form of the counts is in good agreement with the (Busswell et al., 2004) and -band (Frith et al., 2005a) bright number counts measured over the APM survey area, with a deficiency of 25 per cent below . 
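The 'variable model' construction just described, in which the homogeneous luminosity function normalisation is rescaled by the observed relative density as a function of redshift, can be illustrated with a toy calculation. Everything below is hypothetical (the $z^2$ shape, the grid and the survey depths are mine); only the 25 per cent under-density to $z\sim0.1$ echoes the value quoted in the text:

```python
# Toy 'variable model': scale a homogeneous redshift distribution by an
# observed relative density rho(z), then integrate to predict a count
# deficit relative to the homogeneous prediction.
def deficit(z_max, hole_z=0.1, under=0.75):
    n_hom = n_var = 0.0
    for i in range(1, int(round(z_max * 1000)) + 1):
        z = i / 1000
        w = z * z                                      # toy dN/dz shape
        n_hom += w
        n_var += w * (under if z <= hole_z + 1e-12 else 1.0)
    return 1 - n_var / n_hom

# A bright sample probing only the under-dense volume shows the full 25%
# deficit; a deeper sample dilutes it, as with the fainter counts.
print(f"{deficit(0.1):.1%}, {deficit(0.3):.1%}")  # -> 25.0%, 0.9%
```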
In addition, the form of the counts is similar to that of the counts extracted from the 2dFGRS Southern declination strip and the corresponding variable model (this is also observed in the and -band); this perhaps indicates that the form of the local galaxy distribution in the 600 deg 2dFGRS Southern declination strip is similar to that of the much larger APM survey area, with an under-density of 25 per cent to 0.1. However, the 2MASS -band counts over almost the entire survey (20, 27000 deg) are also deficient (as are the -band counts), with a relatively constant deficit of 15-20 per cent to (Fig 3, right hand panels). The low 20 counts raise the question as to whether systematic effects are significant, or whether these counts are due to real features in the local galaxy distribution, as suggested by the agreement between the variable models and corresponding counts in Fig. 3. If the latter is true, then the size of the local hole would not only be much larger than previously suggested but would also represent an even more significant departure from the form of clustering at large scales expected in a CDM cosmology. In the following two sections we address a possible source of systematic error using new faint -band photometry - the model normalisation. Other possible causes for the low counts are also discussed in section 5. 3 New faint H-band data 3.1 Observations & data reduction Our data were taken during a three night observing run in September 2004 at the f/3.5 prime focus of the Calar Alto 3.5m telescope in the Sierra de Los Filabres in Andalucia, southern Spain. The -2000 infra-red camera contains a pixel HAWAII-2 Rockwell detector array, with 18.5m pixels, giving a scale of 0.45”/pixel at the prime focus. All observations were taken with the -band filter. Poor weather meant that only just over one night’s worth of data were usable, and even then conditions were not photometric. 
Our primary objective was to image the William Herschel Deep Field as deeply as possible (the results of which are presented in a forthcoming paper), but time was available at the start of each night to image several ’random’ fields for 15 minutes each. These were composed of individual 3 second exposures, stacked in batches of 10 before readout. A dithering pattern on the sky with a shift up to 25” around the nominal centre was adopted. Data reduction was complicated by the fact that both the dome and twilight sky flat fields appeared to have a complicated out-of-focus pattern of the optical train imprinted upon them (probably an image of the top end of the telescope). This appeared (in reverse) in the science data if these frames were used for flat-fielding. We therefore constructed a master flat field by medianing together all the science frames from a particular night. This was then used to flat-field all the data. Then, individual running medians were constructed from batches of 10 or so temporally adjacent frames, and these were subtracted from each frame to produce a flat, background subtracted image. These were then aligned and stacked together (with sigma clipping to remove hot pixels).

3.2 Calibration

Photometric calibration of the -band images is obtained through comparison with the 2MASS point source catalogue. Fig. 5 shows the 2MASS magnitudes compared with our data for 393 matched point sources over the Calar Alto field and the William Herschel Deep Field. The zeropoint of our data is chosen to match that of the 2MASS objects and is accurate to 0.01 magnitudes. The large datapoints and errorbars indicate the mean offset and dispersion as a function of magnitude. When comparing this data to the 2MASS number counts at bright magnitudes it is important to note that the 2MASS point source catalogue includes a maximum bias in the photometric zeropoint of 2 per cent around the sky (see the 2MASS website).
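The reduction scheme of section 3.1 (a median master flat built from the science frames, then running-median sky subtraction) can be sketched in a few lines. This is an illustrative outline only: the array shapes and numbers are made up, and the real pipeline also aligns the dithered frames and sigma-clips the final stack.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(30, 64, 64))  # fake dithered exposures

# 1. Master flat: median-combine all science frames of the night,
#    then normalise to unit median.
flat = np.median(frames, axis=0)
flat /= np.median(flat)
flattened = frames / flat

# 2. Running-median sky: for each frame, take the median of ~10
#    temporally adjacent frames and subtract it, leaving a flat,
#    background-subtracted image.
half = 5
skysub = np.empty_like(flattened)
for i in range(len(flattened)):
    lo, hi = max(0, i - half), min(len(flattened), i + half + 1)
    sky = np.median(flattened[lo:hi], axis=0)
    skysub[i] = flattened[i] - sky

# The residual background should now sit near zero.
assert abs(float(np.median(skysub))) < 1.0
```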
3.3 Star/Galaxy separation

We use the SExtractor software to separate objects below ; for this magnitude limit, the associated CLASS_STAR parameter provides a reliable indicator of stars and galaxies. We identify 30.0 per cent as galaxies (CLASS_STAR0.1), 58.9 per cent as stars (CLASS_STAR0.9), leaving 11.1 per cent as unclassified.

4 Faint H-band counts

4.1 Comparison with the LCIRS

Before determining number counts for the new -band data described in section 3, we first examine the photometry of the Las Campanas Infra-Red Survey (LCIRS; Chen et al., 2002). The published data cover 847 arcmin in the Hubble Deep Field South (HDFS) and 561 arcmin in the Chandra Deep Field South (CDFS); the combined solid angle (0.39 deg) represents the largest -band dataset for 1420. The associated number counts are 15 per cent below the homogeneous Metcalfe et al. (2005) predictions at (see Fig. 1). This is significant: if the model normalisation were altered to fit, the deficiency in the 2MASS counts at bright magnitudes (Fig. 3) would become much less severe. However, various other surveys show higher counts, although over much smaller solid angles. With the LCIRS data in particular, therefore, it is vital to ensure that the photometric zeropoint is consistent with the 2MASS data at bright magnitudes. In Fig. 6 we compare the LCIRS and 2MASS -band photometry for 438 point sources matched over the HDFS and CDFS fields. There appears to be a large offset which is approximately constant for 12. Using point sources matched at all magnitudes, we determine a mean offset of -0.280.01 magnitudes; this is robust to changes in the magnitude range and is consistent over both the HDFS and CDFS fields.

4.2 New H-band counts

In Fig. 7 we show counts determined for the new -band data described in section 3, the 0.27 deg CA field and the 0.06 deg WHDF (see also table 1).
Both sets of counts are in excellent agreement with the pure luminosity evolution and no evolution homogeneous predictions of Metcalfe et al. (2005). In addition we show LCIRS counts determined in the 0.24 deg HDFS and 0.16 deg CDFS, applying the 0.28 magnitude zeropoint offset determined with respect to 2MASS in section 4.1. The associated counts are also in excellent agreement with the Metcalfe et al. (2005) models at all magnitudes. In Fig. 8, we show counts determined from our data and the LCIRS combined, with a consistent zeropoint applied as in Fig. 7. We estimate the uncertainty arising from cosmic variance using field-to-field errors, weighted by the solid angle of each field. These combined counts are in good agreement with the Metcalfe et al. (2005) models, particularly at fainter magnitudes where the dispersion in the counts arising from cosmic variance appears to be small. We perform least squares fits between these counts and the pure luminosity evolution model; in the magnitude range 1418 we find a best fit normalisation of 1.095, where 1.0 corresponds to the Metcalfe et al. (2005) normalisation shown in Fig. 8. Varying the fitting range does slightly alter the result; in the range 1618 we find a best fit normalisation of 1.061, for example.

5 Discussion

In the previous sections, bright -band number counts from 2MASS were determined over the APM survey area ( deg) and over almost the entire survey, 66 per cent of the sky (20, 27000 deg), along with faint counts to over a combined solid angle of 0.69 deg applying a zeropoint consistent with 2MASS. The bright -band number counts over the APM survey area are extremely low ( per cent at ) with respect to homogeneous predictions, and reproduce the form of the bright counts observed in the optical -band (Busswell et al., 2004) and the near infrared -band (Frith et al., 2005a).
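The single-parameter least-squares fit to a normalisation factor described in section 4.2 can be sketched as follows. This is an illustrative reconstruction, not the paper's analysis code; the counts and errors below are made-up placeholders, and the closed-form solution assumes the only free parameter is a multiplicative scale.

```python
import numpy as np

def best_fit_normalisation(observed, model, sigma):
    """Minimise sum(((obs - a*model)/sigma)^2) over the scale factor a.

    Setting the derivative with respect to a to zero gives the
    closed-form weighted solution returned below.
    """
    w = 1.0 / sigma**2
    return np.sum(w * observed * model) / np.sum(w * model**2)

# Toy data: model counts per magnitude bin, with "observed" counts
# constructed to sit 9.5 per cent above the model.
model = np.array([100.0, 250.0, 600.0, 1400.0, 3200.0])
observed = 1.095 * model
sigma = np.sqrt(observed)  # Poisson-like errors

a = best_fit_normalisation(observed, model, sigma)
print(round(a, 3))  # → 1.095 by construction
```

In the real analysis the model counts would come from the Metcalfe et al. (2005) prediction and the errors from the solid-angle-weighted field-to-field scatter, but the fitting step itself reduces to this one-parameter weighted least squares.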
Previous work has suggested that, if due solely to local large-scale structure, these low counts would be at odds with the form of clustering expected in a CDM cosmology. In addition, the bright -band 20 counts were also found to be low. In the following section, various possible causes for these low counts are examined.

5.1 Model normalisation

The normalisation of number count models may be determined by fixing the predicted to the observed number of galaxies at faint magnitudes. The magnitude range at which this is done should be bright enough to avoid large uncertainties in the evolutionary model while faint enough such that large fluctuations in the counts arising from cosmic variance are expected to be small. Near infrared wavelengths are expected to be insensitive to luminosity evolution at bright magnitudes, making the -band particularly useful for such analysis. Of vital importance when determining the model normalisation is that, when making comparisons between faint and bright counts, the zeropoints are consistent; an offset of a few tenths of a magnitude between the two, for example, would be enough to remove the observed anomaly in the bright counts over the APM survey area. Applying the 2MASS zeropoint to the faint -band data presented in this work and the LCIRS data (Chen et al., 2002), covering a combined solid angle of 0.69 deg, it is clear that a discrepancy between the bright and faint counts exists; the model normalisation used previously, which indicates low counts below over the APM survey area (and for 20), provides good agreement with the faint data. In fact, fixing the model to the faint counts implies a slightly higher normalisation. This agreement, as indicated by the errorbars in Fig. 8, suggests that the discrepancy between the bright and faint counts is not due to cosmic variance in the faint data.
To remove the observed deficit in the APM survey area counts below by renormalising the model requires a deviation from the faint counts of 7.0 using the best fit normalisation of 1.095 (determined for ). Similarly, renormalising to the 20 counts would require a deviation of 7.2 from the faint data. In addition, the model normalisation may also be scrutinised through comparison with redshift distributions. Fig. 2 shows the Metcalfe et al. (2005) pure luminosity evolution model compared with -band determined through a match between 2MASS and the 2dFGRS Northern and Southern declination strips. The model predictions appear to be consistent with the observations, with relatively homogeneous distributions beyond (1 per cent and 8 per cent over-dense in the North and South respectively). Lowering the model normalisation to fit the bright 2MASS number counts would compromise this agreement and imply large over-densities beyond (19 per cent and 27 per cent in the North and South respectively).

5.2 Galaxy evolution

A change in amplitude, therefore, cannot easily account for the discrepancy in the number counts at bright magnitudes. However, could an unexpected change in the slope of the number count model contribute? In section 2, we examined the consistency of the number counts at bright magnitudes with the underlying redshift distribution, assuming a model with insignificant levels of luminosity evolution at low redshift. The predictions derived from the observed were in good agreement with the observed number counts, indicating that luminosity evolution at low redshift is unlikely to have a significant impact on the form of the counts at bright magnitudes. This is supported by the consistency of the pure luminosity evolution model with the observed redshift distributions (Fig. 2); strong low redshift luminosity evolution produces a tail in the which would imply large deficiencies at high redshift.
Could unexpectedly high levels of luminosity evolution at higher redshifts affect our interpretation of the bright counts? If the slope of the homogeneous prediction were to increase significantly above from the evolutionary models considered in this paper, then the model normalisation could effectively be lowered into agreement with the bright counts. The problem with this is that the number counts beyond are consistent with low levels of luminosity evolution to extremely faint magnitudes (). Models with significantly higher levels of luminosity evolution above would therefore compromise this agreement. Therefore, it appears that relatively low levels of luminosity evolution are consistent with number count observations to high redshifts. Also, recent evidence from the COMBO-17 survey, examining the evolution of early-type galaxies using nearly 5000 objects to (Bell et al., 2004), suggests that density evolution will also not contribute; appears to decrease with redshift, indicating that the number of objects on the red sequence increases with time, and so acts contrary to the low counts observed at bright magnitudes. This picture is supported by the K20 survey (Cimatti et al., 2002), which includes redshifts for 480 galaxies to a mean depth of and a magnitude limit of with high completeness. The resulting redshift distribution is consistent with low levels of luminosity and density evolution (Metcalfe et al., 2005). In summary, significant levels of evolution are not expected in passive or star-forming pure luminosity evolution models, although they could occur through dynamical evolution. However, the pure luminosity evolution models of Metcalfe et al. (2005) fit the observed at ; it is at lower redshifts that there are fluctuations. In addition, these models continue to fit the observed at very high redshift and the number counts to extremely faint magnitudes (), suggesting that there is little need for evolution at , far less 0.1.
Some combination of dynamical and luminosity evolution might be able to account for these observations; however, it would require fine-tuning in order to fit both the steep counts at bright magnitudes and the unevolved at low and high redshifts.

5.3 Photometry issues & completeness

Figs. 7 and 8 compare bright and faint counts with a consistent zeropoint applied. Photometry comparisons have been made using several hundred point sources matched at bright magnitudes. In order to check that the applied zeropoints are consistent with the galaxy samples, we also compare the 2MASS photometry with 24 matched galaxies in the CA field and WHDF and 16 in the LCIRS samples; we find that the mean offsets are 0.04 and 0.06, consistent with the zeropoints determined via the 2MASS point sources. The comparisons with the 2MASS point source catalogue (Figs. 5 and 6) also indicate that there is no evidence of a scale error in either of the faint samples to . Could the discrepancy between the bright and faint counts arise from an under-estimation of the total flux of the galaxies? Recall that we make no correction to total magnitude for the faint data presented in this work; however, under-estimating the total flux in the faint data would only increase the observed deficit in the counts at bright magnitudes, if the model normalisation is adjusted to fit the faint counts. The good agreement between the point source and galaxy zeropoints suggests that the estimate for the total galaxy flux is comparable in the bright and faint data. At bright magnitudes, the 2MASS extrapolated -band magnitudes are used. In the -band, this magnitude estimator has been shown to be an excellent estimate of the total flux, through comparisons with the total -band magnitude estimator of Jones et al. (2004) and the -band photometry of Loveday (2000). Another possible contribution to the low counts could be high levels of incompleteness in the 2MASS survey.
As with the possible systematic effects described previously, it is differing levels of completeness in the faint and bright data which would be important. The 2MASS literature quotes the extended source catalogue completeness as  per cent (see the 2MASS website for example). Independently, Bell et al. (2003) suggest that the level of completeness is high (99 per cent), determined via comparisons with the SDSS Early Data Release spectroscopic data and the 2dFGRS. The faint data presented in this work and the LCIRS data are likely to suffer less from incompleteness, as we cut well below the magnitude limit, are subject to lower levels of stellar confusion and suffer less from low resolution effects. Incompleteness in 2MASS will therefore affect the observed deficit in the bright counts at the  per cent level, although the effect is likely to be at the low end of this constraint due to incompleteness in the faint catalogues and suggestions that the 2MASS extended source catalogue is fairly complete.

5.4 Large-scale structure

It appears, therefore, that the observed deficiency in the bright counts might be significantly affected by incompleteness in the 2MASS extended source catalogue. However, the level to which other systematic effects such as the model normalisation, luminosity evolution and photometry issues contribute appears to be small. The question, then, is this: accounting for these various sources of error or uncertainty, are the deficiencies in the bright -band counts over the APM survey area and for still at odds with the expected fluctuations in the counts arising from local large-scale structure in a CDM cosmology, as suggested in previous work (Busswell et al., 2004; Frith et al., 2005a)? We determine the expected fluctuations in the bright number counts due to cosmic variance via CDM mock 2MASS catalogues; these are described in detail in Frith et al. (2005a).
To recap, we apply the 2MASS selection function to 27 virtually independent volumes of Mpc formed from the 3000Mpc CDM Hubble Volume simulation. This simulation has input parameters of , , and (Jenkins et al., 1998). The mean number density of the counts at the magnitude limit is set to that of the observed 2MASS density. We are now in a position to estimate the significance of the observed bright -band counts. We use the 1 fluctuation in the counts expected in a CDM cosmology (determined using the 2MASS mocks described above), which for the APM survey area is 7.63 per cent (for ) and 4.79 per cent (for ), and for is 3.25 per cent (for ) and 1.90 per cent (for ). In addition we also take into account the uncertainty in the model normalisation; we use the best fit normalisation of the Metcalfe et al. (2005) pure luminosity evolution model (a factor of 1.095 above the Metcalfe et al. (2005) model) and add the uncertainty of 3.1 per cent derived from the faint -band counts (presented in Fig. 8) in quadrature. Regarding the possible effect arising from survey incompleteness, we first assume that the level of incompleteness is comparable in the faint and bright data; the resulting significances for the APM survey area and bright counts are shown in column 3 of table 2. This represents an upper limit on the significance since we have effectively assumed that there is no difference in the incompleteness between the bright and faint datasets. In column 4 of table 2, we assume that there is a difference in the completeness levels in the faint and bright data of 10 per cent. This represents a lower limit on the significance (assuming that there are no further significant systematic effects), since we assume that the completeness of the 2MASS extended source catalogue is 90 per cent (the lower limit) and that there is no incompleteness in the faint data.
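The significance estimate described above reduces to comparing a fractional deficit with the quadrature sum of the cosmic-variance scatter and the normalisation uncertainty. A minimal sketch, with illustrative numbers rather than the paper's tabulated values:

```python
import math

def significance(deficit, sigma_cosmic, sigma_norm):
    """All arguments are fractional (e.g. 0.15 for a 15 per cent deficit).

    sigma_cosmic: 1-sigma count fluctuation from the mock catalogues.
    sigma_norm:   uncertainty in the model normalisation from the faint counts.
    """
    sigma_total = math.hypot(sigma_cosmic, sigma_norm)  # quadrature sum
    return deficit / sigma_total

# Example: a 15 per cent deficit, 3.25 per cent cosmic variance and the
# 3.1 per cent normalisation uncertainty quoted for the faint counts.
print(round(significance(0.15, 0.0325, 0.031), 2))
```

The upper- and lower-limit columns of table 2 would then correspond to evaluating this with the deficit unchanged, or reduced by an assumed 10 per cent completeness difference between the bright and faint samples.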
Therefore, assuming a CDM cosmology, it appears that the observed counts over the APM survey area might be in line with a rare fluctuation in the local galaxy distribution. However, the counts over 66 per cent of the sky () suggest a deficiency in the counts that is at odds with CDM, even accounting for a 10 per cent incompleteness effect and the measured uncertainty in the best fit model normalisation.

6 Conclusions

We have presented new -band photometry over two fields with a combined solid angle of 0.30 deg to 19. The zeropoint is chosen to match that of the 2MASS photometry at the bright end and is accurate to 0.01 magnitudes. In addition we have examined the faint -band data of the LCIRS (Chen et al., 2002), which covers two fields with a combined solid angle of 0.39 deg to 20. The zeropoint of these data appears to be offset from the 2MASS photometry by 0.280.01 magnitudes. Applying a consistent zeropoint, the faint counts determined from the new data presented in this work and the LCIRS are in good agreement with the pure luminosity evolution model of Metcalfe et al. (2005), with a best fit normalisation a factor of 1.095 higher. In contrast, the bright -band counts extracted from 2MASS over the 4000 deg APM survey area around the southern galactic pole are low with respect to this model, corroborating previous results over this area in the optical -band and near infrared -band (Busswell et al., 2004; Frith et al., 2005a). In addition, the counts extracted for almost the entire survey, covering 66 per cent of the sky, are also low, with a deficit of  per cent to . Importantly, this discrepancy does not appear to be due to zeropoint differences between the faint and bright data or uncertainty in the model normalisation set by the faint counts.
We have investigated various possible sources of systematic error which might affect this result. The counts are consistent with low levels of luminosity and density evolution, as predicted by the pure luminosity evolution model of Metcalfe et al. (2005), to extremely faint magnitudes (see Fig. 1). Also, the photometry appears to be consistent between the faint and bright data using a zeropoint applied via comparisons between point sources. However, differing incompleteness in the bright and faint galaxy samples might have a significant impact; completeness in the 2MASS extended source catalogue is  per cent. Finally, we determined the expected cosmic variance in the bright number counts from CDM mock 2MASS catalogues. Allowing for the model normalisation uncertainty determined from the faint counts, and using an upper limit on the incompleteness in the 2MASS galaxy sample, the deficiency in the counts over the APM survey area represents a rare (1 in 100) fluctuation in a CDM cosmology. However, the low -band counts for suggest that this deficiency might extend over the entire local galaxy distribution; allowing for incompleteness and the model normalisation uncertainty as before, this would represent a 4 fluctuation (1 in 10000) in the local galaxy distribution, and would therefore be at odds with the form of clustering expected in a CDM cosmology on large scales. The increase in faint near infrared data from the UK Infrared Deep Sky Survey (UKIDSS) should help to resolve this issue.

Acknowledgements

The new data presented in this paper were based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC).
This publication also makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Centre/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We thank Peter Draper for his help with SExtractor. We also thank Phil Outram and John Lucey for useful discussion and Nicholas Ross for assistance with the Calar Alto field selection.

References

• Bell et al. (2003) Bell, E.F., McIntosh, D.H., Katz, N. & Weinberg, M.D. 2003, ApJS, 149, 289
• Bell et al. (2004) Bell, E.F. et al. 2004, ApJ, 608, 752
• Broadhurst et al. (1988) Broadhurst, T.J., Ellis, R.S. & Shanks, T. 1988, MNRAS, 235, 827
• Bruzual & Charlot (1993) Bruzual, A.G. & Charlot, S. 1993, ApJ, 405, 538
• Busswell et al. (2004) Busswell, G.S., Shanks, T., Outram, P.J., Frith, W.J., Metcalfe, N. & Fong, R. 2004, MNRAS, 354, 991
• Cimatti et al. (2002) Cimatti, A. et al. 2002, A&A, 391, L68
• Colless et al. (1990) Colless, M.M., Ellis, R.S., Taylor, K. & Hook, R.N. 1990, MNRAS, 244, 408
• Colless et al. (2001) Colless, M. et al. 2001, astro-ph/0306581
• Chen et al. (2002) Chen, H.-S. et al. 2002, ApJ, 570, 54
• Cole et al. (2005) Cole, S.M. et al. 2005, accepted by MNRAS, astro-ph/0501174
• Driver (2003) Driver, S. 2003, IAUS, 216, 97
• Frith et al. (2003) Frith, W.J., Busswell, G.S., Fong, R., Metcalfe, N. & Shanks, T. 2003, MNRAS, 345, 1049
• Frith et al. (2004) Frith, W.J., Outram, P.J. & Shanks, T. 2004, ASP Conf. Proc., Volume 329, 49
• Frith et al. (2005a) Frith, W.J., Shanks, T. & Outram, P.J. 2005a, MNRAS, 361, 701
• Frith et al. (2005b) Frith, W.J., Outram, P.J. & Shanks, T. 2005b, submitted to MNRAS, astro-ph/0507215
• Frith et al. (2005c) Frith, W.J., Outram, P.J. & Shanks, T. 2005c, submitted to MNRAS, astro-ph/0507704
• Hawkins et al. (2003) Hawkins, E. et al. 2003, MNRAS, 346, 78
• Jarrett (2004) Jarrett, T.H. 2004, astro-ph/0405069
• Jenkins et al. (1998) Jenkins, A. et al. 1998, ApJ, 499, 20
• Jones et al. (2004) Jones, D.H. et al. 2004, MNRAS, 355, 747
• Loveday et al. (1992) Loveday, J., Peterson, B.A., Efstathiou, G. & Maddox, S.J. 1992, ApJ, 390, 338
• Loveday (2000) Loveday, J. 2000, MNRAS, 312, 517
• Loveday (2004) Loveday, J. 2004, MNRAS, 347, 601L
• Maddox et al. (1990a) Maddox, S.J., Sutherland, W.J., Efstathiou, G. & Loveday, J. 1990a, MNRAS, 243, 692
• Maddox et al. (1990b) Maddox, S.J., Sutherland, W.J., Efstathiou, G., Loveday, J. & Peterson, B.A. 1990b, MNRAS, 247, 1
• Martini (2001) Martini, P. 2001, AJ, 121, 598
• McCracken et al. (2000) McCracken, H.J., Metcalfe, N., Shanks, T., Campos, A., Gardner, J.P. & Fong, R. 2000, MNRAS, 311, 707
• Metcalfe et al. (1995) Metcalfe, N., Fong, R. & Shanks, T. 1995, MNRAS, 274, 769
• Metcalfe et al. (2001) Metcalfe, N., Shanks, T., Campos, A., McCracken, H.J. & Fong, R. 2001, MNRAS, 323, 795
• Metcalfe et al. (2005) Metcalfe, N., Shanks, T., Weilbacher, P.M., McCracken, H.J., Campos, A., Fong, R. & Thompson, D. 2005, in prep.
• Moy et al. (2003) Moy, E., Barmby, P., Rigopoulou, D., Huang, J.-S., Willner, S.P. & Fazio, G.G. 2003, A&A, 403, 493
• Shanks (1990) Shanks, T. 1990, IAUS, 139, 269
• Teplitz et al. (1998) Teplitz, H.I., Malkan, M. & McLean, I.S. 1998, ApJ, 506, 519
• Thompson et al. (1999) Thompson, R., Storrie-Lombardi, L. & Weymann, R. 1999, AJ, 117, 17
• Yan et al. (1998) Yan, L., McCarthy, P., Storrie-Lombardi, L. & Weymann, R. 1998, ApJ, 503, L19
• Yasuda et al. (2001) Yasuda, N. et al. 2001, AJ, 122, 1104
https://quantumcomputing.stackexchange.com/questions/28677/why-cant-we-implement-fault-tolerant-single-qubit-rotations-continuously
# Why can't we implement fault-tolerant single qubit rotations continuously?

My understanding is that in order to implement single-qubit rotations on logical fault-tolerant qubits one has to do something like discretizing rotations and using $$T$$ gates. So, is it the case that given perfect single-qubit rotations of physical qubits (say, for a classical statevector simulator used to model those), one would still have to use discretized rotations for logical qubits? What would be the intuitive explanation?

There are at least two reasons why we want/need to discretize rotations.

First reason: in an ideal world without errors, performing an arbitrary single qubit rotation at the logical level requires interaction between the physical qubits composing the logical one. As the physical gateset usually does not contain an arbitrary $$n$$-qubit physical gate, we cannot implement a single qubit rotation of an arbitrary angle "in one step".

When we are doing quantum error-correction, the quantum state of the qubit at the abstract "logical level" is encoded in an entangled state composed of a given number of physical qubits. Conceptually, to perform a single qubit logical gate, we would "just" have to apply some big unitary acting upon all these physical qubits. "In general", this unitary will typically make all these physical qubits interact (if you tried to find what operation you need to apply on the physical qubits encoding your logical qubit, this is what the maths would tell you). Hence, a single qubit rotation at the logical level requires "in general" making interactions between physical qubits. Now, assuming we can do an arbitrary single qubit rotation at the physical level, we are however typically restricted in the two-qubit (or more) interactions we can do on the quantum hardware (for instance we can do a CNOT but not an arbitrary two-qubit gate physically).
For this reason, we would not be able to easily implement exactly the arbitrary single logical qubit rotation desired, even if we can do arbitrary single physical qubit rotations (if you want your logical gate to be composed of "not too many" physical gates, then you cannot implement an arbitrary rotation angle at the logical level). This first part of the explanation is here to give you some intuition of why it is harder to perform single qubit rotations at the logical level.

Second reason: additionally, this ideal world does not exist. Because errors propagate, we are even more restricted in the logical operations we are allowed to do. What I explained so far assumes perfect qubits. Because the hardware is noisy, we have another constraint: we should avoid at all costs making different physical qubits within a logical qubit interact with each other. This is somewhat contradictory with what I said just before (but I was explaining the ideal noiseless case). The reason why two physical qubits inside the same logical qubit should not interact with each other is that, if it occurs, a single error (for instance a bit-flip) on a physical qubit inside the logical qubit could propagate toward all the physical qubits composing the logical qubit during the execution of the gate. This is bad because error-correcting codes are able to detect and correct errors only if a "low" number of physical qubits have been affected by errors. In practice, a code of distance $$d=2t+1$$ can correct up to $$t$$ errors, hence if more than $$t$$ physical qubits have been affected by errors your error-correcting code will in general not be able to fix it. Imagine that a single physical qubit had an error but that during the implementation of the logical gate you perform CNOTs between all the physical qubits inside your logical qubit. Then all your physical qubits would be affected by this error and your error-correcting code cannot protect you anymore.
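The propagation argument above can be made concrete with a toy calculation (my own illustration, not from the answer): track only bit-flip (X) errors as a binary vector over the physical qubits, using the fact that a CNOT copies an X error from its control onto its target.

```python
# A distance-d code corrects t = (d-1)//2 errors.
d = 3
t = (d - 1) // 2  # t = 1 for a distance-3 code

def cnot(err, control, target):
    """Propagate X errors through a CNOT: an X on the control
    is copied onto the target (Heisenberg picture)."""
    err = err.copy()
    err[target] ^= err[control]
    return err

err = [1, 0, 0]        # a single X error on physical qubit 0
err = cnot(err, 0, 1)  # CNOTs fanning out inside the logical qubit...
err = cnot(err, 0, 2)
print(sum(err))        # → 3: a weight-3 error, beyond t = 1, uncorrectable
```

One bad qubit plus two in-block CNOTs already exceeds what the distance-3 code can fix, which is exactly why gates coupling qubits inside a code block are avoided.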
This last requirement of "not making the physical qubits interact with each other" is called transversality (see the answers there). A single qubit logical gate that is transversal can be written as $$G_L = \bigotimes_{i=1}^k G_i$$, where $$k$$ is the number of physical qubits in the logical qubit and $$G_i$$ is a single qubit physical gate acting on the $$i$$'th physical qubit of your logical qubit (by definition the physical qubits in the logical qubit do not interact under such a logical gate). In practice, not all logical gates can be done transversally (the Eastin-Knill theorem forbids a universal transversal gate set). What we usually have is that some logical gates can be done transversally (often the Clifford gates), and for the rest we use magic-state injection (I won't go into the details, but basically you can prepare a logical ancilla and make it interact with your logical qubit, which allows you to do a non-Clifford gate, for instance the $$T \equiv e^{-i \pi Z / 8}$$ gate). Performing Clifford gates + one non-Clifford gate allows you to reach universal quantum computing. However, your algorithm will in this example be decomposed into Clifford+T: regarding your original question, it means that you will have to approximate the exact single qubit rotation you want to do. I hope it gives the rough idea.
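As a small worked example of transversality (my own sketch, using the 3-qubit bit-flip repetition code rather than a full code): a transversal logical gate is a plain tensor product of single-qubit gates, so it never couples the physical qubits inside the logical qubit.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])

# Transversal logical X on the 3-qubit repetition code:
# G_L = X (x) X (x) X, a tensor product of single-qubit gates.
G_L = np.kron(X, np.kron(X, X))

# Logical basis states of the bit-flip code: |0_L> = |000>, |1_L> = |111>
ket0_L = np.zeros(8); ket0_L[0b000] = 1.0
ket1_L = np.zeros(8); ket1_L[0b111] = 1.0

# The transversal gate implements the logical X: |0_L> <-> |1_L>,
# without any two-qubit interaction inside the block.
print(np.allclose(G_L @ ket0_L, ket1_L))  # → True
```

The same tensor-product structure is what makes transversal gates fault-tolerant: an error on one physical qubit stays on that qubit.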
https://www.freemathhelp.com/forum/threads/z-nz-equations-i-have-to-prove-that-these-are-bijective-in-z-77z-and-give-their-reciprocal-functions.116657/
# Z/nZ equations: "I have to prove that these are bijective in Z/77Z and give their reciprocal functions."

#### Mavil

##### New member

Joined Jun 14, 2019
Messages 3

Hello ppl, this is my first time here. I have a question. I have to prove that these are bijective in Z/77Z and give their reciprocal functions.

1) f(x) = 38x + 2
2) f(x) = 33x + 2
3) f(x) = x^30

Ok so I am not trying to get the answer here without sweating for it. Warning though, I am terrible at maths. I took case 1. f is a bijection if f is both injective and surjective. I started by proving that f is an injection. For that I suppose that f(a) = f(b):

f(a) = f(b)
38a + 2 = 38b + 2
38(a - b) = 0

a = b if a - b = 0, so I'm trying to find 38's inverse in Z/77Z. (-2)*38 = -76, so the inverse is (-2).

-2(a - b) = 0 => a - b = 0 => a = b

So f is an injection. Now I want to prove that f is a surjection:

y = 38x + 2
y - 2 = 38x

I'm stuck here, what should I try now.

#### HallsofIvy

##### Elite Member

Joined Jan 27, 2012
Messages 5,285

Where you have "-2(a - b) = 0" you should have (-2)(38)(a - b) = 0. You don't really need to do that. 77 = 7(11), so the only "0 divisors" are multiples of 7 and 11. 38 has neither as a divisor so has an inverse. You don't actually have to find it. Just knowing that an inverse exists is sufficient. From y - 2 = 38x, x is y - 2 times the inverse of 38. Again, just knowing that an inverse exists (since 38 has neither 7 nor 11 as a factor) is sufficient to tell you that x is unique.

#### Mavil

Wait so, going by your method, then for the 2nd f(x), since 33 has 11 as a "0 divisor", that means that it doesn't have an inverse, so f is not injective and therefore not bijective?

#### HallsofIvy

Yes, 77 is 7 times 11, so adding 7 to a solution of 33x = 2 gives another solution. If x = 0, y = 2. If x = 7, y = 233 = 3(77) + 2 = 2 (mod 77). If x = 14, y = 464 = 6(77) + 2 = 2 (mod 77). If x = 21, y = 695 = 9(77) + 2 = 2 (mod 77). etc.
#### Mavil

I see, but since I can show that 2) is not injective, I don't even have to prove the surjection, since f can't be bijective if it's not injective in the first place. What about the 3rd though, is there some kind of obvious hint I'm missing?

a^30 = b^30
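As a side note (not from the thread itself), the gcd criterion discussed above is easy to check numerically: f(x) = ax + b is a bijection on Z/77Z exactly when gcd(a, 77) = 1, and the third map can be tested by brute force.

```python
from math import gcd

n = 77

def is_bijective_affine(a, n):
    # ax + b is a bijection on Z/nZ iff a is invertible mod n
    return gcd(a, n) == 1

assert is_bijective_affine(38, n)      # 1) 38x + 2: bijective
assert not is_bijective_affine(33, n)  # 2) 33x + 2: gcd(33, 77) = 11

# Reciprocal of f(x) = 38x + 2: x = 38^(-1) * (y - 2) mod 77
inv38 = pow(38, -1, n)  # modular inverse (Python >= 3.8)
assert inv38 == 75      # i.e. -2 mod 77, as found in the thread

# 3) x -> x^30 mod 77: brute-force check shows it is far from injective
images = {pow(x, 30, n) for x in range(n)}
print(len(images))      # → 4, so only 4 distinct values: not injective
```

This matches the hint in the thread: since 77 = 7·11, every x coprime to 77 satisfies x^30 ≡ 1 (mod 77), so x^30 collapses all the units to a single value.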
http://mathoverflow.net/questions/15589/self-adjoint-extension-of-locally-defined-differential-operators/15618
# Self-adjoint extension of locally defined differential operators

The following is well known. Given a symmetric differential operator, like $\partial_x^2$, defined on smooth functions of compact support on $\mathbb{R}$, $C_0^\infty(\mathbb{R})$, one can count the number of independent $L^2$-normalizable solutions of $(\partial_x^2 \mp i)\psi = 0$ and use the von Neumann index theorem to classify the possible self-adjoint extensions of this operator on $L^2(\mathbb{R})$. This can be generalized to more complicated differential operators, to $\mathbb{R}^n$, as well as to bounded open subsets thereof.

On the other hand, suppose that I have a manifold $M$ that is covered by a set of open charts $U_i$ with differential operators $D_i$ defined in corresponding local coordinates. It is easy to check whether the $D_i$ are restrictions of a globally defined differential operator $D$ on $M$: the transition functions on intersections of charts $U_i\cap U_j$ must transform $D_i$ into $D_j$ and vice versa. Suppose that is the case and that I am interested in self-adjoint extensions of $D$ to $L^2(M)$ (supposing that an integration measure is given and that $D$ is symmetric with respect to it). Now, the question: Is there a way of classifying the self-adjoint extensions of $D$ on $L^2(M)$ in terms of its definition in local coordinates, i.e. the actions of the $D_i$ on $C_0^\infty(U_i)$?

A simple example would be the cover of $S^1$ by two overlapping charts. I know that a self-adjoint extension of $\partial_x^2$ on $[0,1]$ with periodic boundary conditions gives the naturally defined self-adjoint Laplacian on $S^1$. Then $(0,1)$ is interpreted as a chart on $S^1$ that excludes one point. However, I don't know how to define the self-adjoint Laplacian on $S^1$ if it's given on two overlapping charts.

- In Shubin's papers there is a counterpoint to what Mazzeo said below, namely to "If M has boundary or is otherwise noncompact or singular, symmetric elliptic operators are usually not essentially self-adjoint."
It is enough to take M with bounded geometry. Mohammed ([email protected]) –  user35664 Jun 25 '13 at 20:53 (a) Which papers of Shubin? (b) can you please re-write "that what he has said Mazzeo on if $M$...", it is really hard to read as written (c) this probably should be just a comment, instead of an answer –  Willie Wong Jun 26 '13 at 10:24 @Willie Wong, Mazzeo has said "If $M$ has boundary or is otherwise noncompact or singular, symmetric elliptic operators are usually not essentially self-adjoint." –  user35664 Jun 26 '13 at 20:35 Regarding Shubin's papers see, e.g., "Discreteness of spectrum for the Schrödinger operators on manifolds of bounded geometry", published in The Maz'ya Anniversary Collection, Operator Theory: Advances and Applications, Volume 110, 1999, pp 185-226. Mohammed [email protected] –  user35664 Jun 26 '13 at 20:39

Yes, if the manifold is compact without boundary, and the operator $D$ is elliptic (equivalently, if each of the $D_i$ is elliptic), and if $D$ is symmetric with respect to some smooth positive density, then $D$ is essentially self-adjoint, i.e. extends uniquely from the core domain $\mathcal C^\infty$ to an unbounded self-adjoint operator on $L^2$ of the manifold $M$. The way to prove this is as follows. If $D$ has order $m > 0$, then the maximal domain of $D$ is equal to $H^m(M)$; this is proved by (local!) elliptic regularity. On the other hand, every element in this maximal domain is approximable in the graph norm by smooth functions; again this can be reduced to a local calculation in each chart. Hence the minimal and maximal domains agree (for more on this, see Reed & Simon, Volume 2, I think). If $M$ has boundary or is otherwise noncompact or singular, symmetric elliptic operators are usually not essentially self-adjoint.

Rafe, thanks for the reply. I can certainly see how a locally defined, elliptic operator has a unique self-adjoint extension on a compact, closed manifold $M$.
However, I'm also interested in the case where $M$ is not compact or has a boundary. For instance, take each $U_i$ diffeomorphic to $(0,1)$ and $D_i=\partial_x^2$. The von Neumann index calculation is the same for each $U_i$. However, I could patch such charts together to make $S^1$, $\mathbb{R}$, or $[0,\infty)$ by throwing in $0$. I'm interested in figuring out the possible global self-adjoint extensions from the chart patching. –  Igor Khavkine Feb 18 '10 at 11:13
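For the one-dimensional examples raised at the end, the von Neumann count can be made explicit; the following is standard material, sketched here for orientation rather than taken from the thread:

```latex
% Deficiency indices of D = -\partial_x^2 on C_0^\infty(U) \subset L^2(U).
% Solve D^*\psi = \pm i\,\psi, i.e.
%   \psi''(x) = \mp i\,\psi(x), \qquad
%   \psi(x) = A\,e^{\kappa x} + B\,e^{-\kappa x}, \quad \kappa^2 = \mp i.
%
% U = (0,1):       both exponentials are L^2 on a bounded interval, so
%                  n_+ = n_- = 2 and the extensions form a U(2) family;
%                  periodic b.c. (the S^1 Laplacian) is one point of it.
% U = (0,\infty):  only the decaying exponential is L^2, so n_+ = n_- = 1,
%                  a U(1) family (Robin conditions at the endpoint).
% U = \mathbb{R}:  neither solution is L^2, so n_+ = n_- = 0:
%                  the operator is essentially self-adjoint.
```

This matches the patching intuition above: the same local operator acquires a 4-, 1-, or 0-parameter family of extensions depending on how the charts are glued.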
https://www.jstage.jst.go.jp/article/jsiaml/9/0/9_37/_article/-char/en
JSIAM Letters Online ISSN: 1883-0617 Print ISSN: 1883-0609 ISSN-L: 1883-0617

Instability of a free boundary in a Hele-Shaw cell with sink/source and its parameter dependence

Hisasi Tani, Shigetoshi Yazaki

2017 Volume 9 Pages 37-40

Abstract: We carry out a linear stability analysis for a free boundary between two fluids in a Hele-Shaw cell, which is driven by Darcy's law and an injection or suction at the center. In general, the stability of the interface is determined by the well-known Saffman-Taylor instability condition. The linear growth rate of the perturbation depends on two parameters: the rate of the injection/suction and the viscosity contrast of the two fluids. In this paper, we numerically find a parameter region in which the interface can be stable (resp. unstable) even though it has been considered to be unstable (resp. stable) due to the Saffman-Taylor instability. For the case of injection, it is suggested that this parameter region vanishes as time increases to infinity. However, the destabilization can be retarded for a sufficiently long time, as one tunes the viscosity and the injection rate of the injecting fluid.

© 2017, The Japan Society for Industrial and Applied Mathematics
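The Saffman-Taylor condition referred to in the abstract can be illustrated with the classical planar dispersion relation, a textbook sketch for a flat interface driven at constant speed, not the radial sink/source geometry the paper analyses; the function and parameter names here are mine:

```python
def growth_rate(k, V, mu1, mu2, gamma, b):
    """Classical planar Saffman-Taylor growth rate sigma(k) (textbook form):
    fluid 1 displaces fluid 2 at speed V in a Hele-Shaw cell of gap b,
    with interfacial tension gamma; gravity and density contrast neglected."""
    return (k * V * (mu2 - mu1) - gamma * b**2 * k**3 / 12.0) / (mu1 + mu2)

# Less viscous fluid pushing a more viscous one: long waves grow...
assert growth_rate(0.1, V=1.0, mu1=1.0, mu2=10.0, gamma=1.0, b=1.0) > 0
# ...while surface tension stabilises short waves.
assert growth_rate(50.0, V=1.0, mu1=1.0, mu2=10.0, gamma=1.0, b=1.0) < 0
# More viscous pushing less viscous is stable at every wavenumber here.
assert growth_rate(0.1, V=1.0, mu1=10.0, mu2=1.0, gamma=1.0, b=1.0) < 0
```

The sign of the viscosity-contrast term is the instability condition; the paper's point is that in the radial, time-dependent setting this simple criterion can be violated in parts of parameter space.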
https://www.computer.org/csdl/trans/tc/1976/07/01674676-abs.html
Issue No. 07 - July (1976 vol. 25) ISSN: 0018-9340 pp: 673-677

L. Yelowitz, Department of Computer Science, University of Pittsburgh

ABSTRACT: A concise matrix notation is introduced, leading to a very simple statement of the resolution principle of mechanical theorem proving in the propositional calculus. The refinements of general resolution can also be stated easily using this notation. In addition, the notation has led to the development of three new techniques of theorem proving, which are described and proved complete.

INDEX TERMS: AND/OR trees, completeness, covering problems, heuristics, mechanical theorem proving, resolution principle.

CITATION: L. Yelowitz, A. Kandel, "New Results and Techniques in Resolution Theory", IEEE Transactions on Computers, vol. 25, no. 7, pp. 673-677, July 1976, doi:10.1109/TC.1976.1674676
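As a reminder of the rule the abstract builds on, here is a plain set-based sketch of one propositional resolution step (my own minimal representation, not the matrix notation the paper introduces):

```python
def resolve(c1, c2):
    """One propositional resolution step on clauses represented as
    frozensets of integer literals (negation = sign flip).
    Returns the resolvent, or None if the clauses do not clash."""
    for lit in c1:
        if -lit in c2:
            return (c1 | c2) - {lit, -lit}
    return None

# {p, q} and {not-p, r} resolve to {q, r}
assert resolve(frozenset({1, 2}), frozenset({-1, 3})) == frozenset({2, 3})
# {p} and {not-p} resolve to the empty clause: a refutation.
assert resolve(frozenset({1}), frozenset({-1})) == frozenset()
# No complementary pair means no resolvent.
assert resolve(frozenset({1}), frozenset({2})) is None
```

Completeness results like those in the paper say that repeating this step (under a given refinement strategy) from an unsatisfiable clause set always derives the empty clause.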
http://mathoverflow.net/questions/28997/does-anyone-know-an-intuitive-proof-of-the-birkhoff-ergodic-theorem?sort=votes
# Does anyone know an intuitive proof of the Birkhoff ergodic theorem? For many standard, well-understood theorems the proofs have been streamlined to the point where you just need to understand the proof once and you remember the general idea forever. At this point I have learned three different proofs of the Birkhoff ergodic theorem on three separate occasions and yet I still could probably not explain any of them to a friend or even sit down and recover all of the details. The problem seems to be that they all depend crucially on some frustrating little combinatorial trick, each of which was apparently invented just to service this one result. Has anyone seen a more natural approach that I might actually be able to remember? Note that I'm not necessarily looking for a short proof (those are often the worst offenders) - I'm looking for an argument that will make me feel like I could have invented it if I were given enough time. - Great question - I wish I'd thought to ask it myself! –  Ian Morris Jun 24 '10 at 18:18 I don't know whether this helps or whether you've already seen this before, but this made a lot more intuitive sense to me than the combinatorial approach in Halmos's book. The key point in the proof is to prove the maximal ergodic theorem. This states that if $M_T$ is the maximal operator $M_T f= \sup_{n >0} \frac{1}{n+1} (f + Tf + \dots + T^n f)$, then $\int_{M_T f>0} f \geq 0$. Here $T$ is the associated map on functions coming from the measure-preserving transformation. This is a weak-type inequality, and the fact that one considers the maximal operator isn't terribly surprising given how they arise in a) the proof of the Lebesgue differentation theorem (namely, via the Hardy-Littlewood maximal operator $Mf(x) = \sup_{t >0} \frac{1}{2t} \int_{x-t}^{x+t} |f(r)| dr$. 
b) In the theory of singular integrals, one can define a maximal operator in the same way and prove that it is $L^p$-bounded for $1 < p < \infty$ and weak-$L^1$ bounded in suitably nice homogeneous cases (e.g. the Hilbert transform). One of the consequences of this is, for instance, that the Hilbert transform can be computed a.e. via the Cauchy principal value of the usual integral.

c) I'm pretty sure the boundedness of the maximal operator of the partial sums of Fourier series is used in the proof of the Carleson-Hunt theorem.

So using maximal operators (and, in particular, weak bounds on them) to establish convergence is fairly standard. Once the maximal inequality has been established, it isn't usually very hard to get the pointwise convergence result, and the ergodic theorem is no exception.

The maximal ergodic theorem actually generalizes to the case where $T$ is an operator of $L^1$-norm at most 1, and thinking of it in this more general sense might meet the criteria of your question. In particular, let $T$ be as just mentioned, and consider $M_T$ defined in the analogous way. Or rather, consider $M_T'f = \sup_{n \geq 0} \sum_{i=0}^n T^if$. Clearly $M_T'f >0$ iff $M_Tf >0$. Moreover, $M_T'$ has the crucial property that $T M_T' f + f = M_T' f$ whenever $M_Tf>0$. Therefore, $\int_{M_T'f>0} f = \int_{M_T'f>0} M_T'f - \int_{M_T'f>0} TM_T'f.$ The first part is in fact $||M_T'f||_1$ because the modified maximal operator is always nonnegative. The second part is at most $||T M_T'f||_1 \leq ||M_T'f||_1$ by the norm condition. Hence the difference is nonnegative.

Perhaps this will be useful: let $M$ be an operator (not necessarily linear) sending functions to nonnegative functions such that $(I-T)Mf = f$ wherever $Mf>0$, for $T$ an operator of $L^1$-norm at most 1. Then $\int_{Mf>0} f \geq 0$. The proof is the same.
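The maximal inequality $\int_{M_T'f>0} f \geq 0$ can be sanity-checked numerically on a finite system, e.g. the cyclic shift on $\mathbb{Z}/N$ with counting measure, using the standard truncated-supremum form of the theorem (a sketch of mine, not part of the answer above):

```python
import random

random.seed(0)
N = 60
f = [random.uniform(-1.0, 1.0) for _ in range(N)]  # a random observable

def max_partial_sum(j):
    """sup over 1 <= n <= N of f(j) + f(Tj) + ... + f(T^{n-1} j),
    where T is the measure-preserving cyclic shift j -> j+1 mod N."""
    best, s = float('-inf'), 0.0
    for step in range(N):
        s += f[(j + step) % N]
        best = max(best, s)
    return best

# Maximal ergodic inequality: the integral of f over {Mf > 0} is >= 0,
# even though f itself takes both signs on that set.
total = sum(f[j] for j in range(N) if max_partial_sum(j) > 0)
assert total >= -1e-12
```

Replacing `f` by any other sample never produces a negative `total`, which is exactly the content of the inequality.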
- This answer is extremely helpful - I was sort of hoping that one of the proofs that I had already encountered could be made to seem more natural in a broader context, and you accomplished exactly that. Thanks! –  Paul Siegel Jul 1 '10 at 9:59 If $f\equiv 1$ and $T=I$ we have $M_T'\equiv+\infty$, so the recurrence formula is $+\infty + f=+\infty$, which is useless. Am I missing something? –  Marcos Cossarini Sep 12 at 19:30

I know of six proofs of the Birkhoff ergodic theorem.

• using a maximal inequality (Birkhoff, Riesz, Wiener, Yosida, Kakutani, Garsia...)
• based on martingales and upcrossing inequalities (Bishop 1966)
• using non-standard analysis (Kamae 1982)
• based on variational inequalities (Bourgain 1988)
• using a filling scheme (Chacon)
• Katznelson-Weiss proof (1982) and derivatives (Keane, Petersen)

The last one is the easiest for me to remember. This may look like a combinatorial trick, but with some thought it appears quite natural, and I know several results that use a similar idea. So let $\epsilon>0$ and $x$ in $X$. We can find an $n(x)$ depending on $x$ such that $$\overline{\lim}\ {1\over n}\ \sum_{k=0}^{n-1}\ f(T^k(x)) \leq {1\over n(x)} \sum_{k=0}^{n(x)-1} f(T^k(x))\ +\ \varepsilon$$ Note that $n(x)$ is finite everywhere, hence bounded on a set $R$ whose complement has arbitrarily small measure. Then cut the Birkhoff sum according to the sequence $n_{i+1}(x)=n_i(x)+n(T^{n_i}(x))$ if $T^{n_i}(x)$ is in $R$, and $n_{i+1}(x)=n_i(x)+1$ otherwise. A picture should make clear what is going on. The rest of the proof is routine checking. Of course if you are in the business of non-standard analysis, Kamae's proof is both short and enlightening, but then you need some work to get the standard statement.

- Upvoted enthusiastically - I've had a lot of fun looking into these references. Thanks! –  Paul Siegel Jul 1 '10 at 10:00

I like the following interpretation of (a weaker form of) the maximal inequality.
Lemma: Suppose $\int f \mathrm{d}\mu > 0$; then $\mu(x: f(x) + \cdots + f(T^{n-1}x) > 0\text{ for all }n \ge 1) > 0$.

With this version of the maximal inequality Birkhoff's theorem is obvious in the ergodic case, as follows: We may suppose $\int f \mathrm{d}\mu = 0$. To simplify notation set $S_n(x) = \sum_{k = 0}^{n-1}f(T^kx)$. Applying the lemma to $f+\epsilon$ we obtain that there is a positive measure set on which $\liminf_n \frac{S_n}{n} \ge -\epsilon$. Letting $\epsilon$ go to zero and using ergodicity (i.e. the liminf is constant) you obtain $\liminf_n \frac{S_n}{n} \ge 0$ almost everywhere. Do the same with $-f$ and obtain $\lim_{n}\frac{S_n}{n} = 0$ almost everywhere.$\Box$

One thing I like about this lemma is that it extends to the subadditive case. In subadditive ergodic theory the maximal inequality gives upper bounds, but not lower. But in fact the above lemma can be obtained independently and gives lower bounds. As far as I can tell it was first introduced in this context by Anders Karlsson and Gregory Margulis in "A multiplicative ergodic theorem and nonpositively curved spaces" (Communications in Mathematical Physics, 1999). Their proof of the lemma is based on a nice observation about sequences of real numbers, due to Riesz (called the Leader Lemma).

- I like an answer that was shown to me by Mate Wierdl. You restate the maximal inequality this way: Let $$Mf(x) = \sup_N (1/N)\sum_{n < N}f(T^nx).$$ and first prove that $\|Mf\|_{1,\infty}\le\|f\|_1$. Here by $\|\cdot\|_{1,\infty}$ I mean the "weak $L^1$ norm" (which is not actually a norm at all) defined by $\|g\|_{1,\infty}=\sup_a a \cdot m( \{ x\colon |g(x)|\ge a \})$ (the area of the biggest rectangle that fits under the graph of $|g|$). This has a beautiful (and simple and intuitive) proof if you are working on the integers and $T$ is the map $T(n)=n+1$. There is then a technique called transference that allows you to take the proof in your favourite system and transfer it to any other system.
You then show that this implies that when you have a maximal inequality, the set of functions for which you get almost everywhere convergence is closed in $L^1$. (Suppose the ergodic theorem works for $f_1,f_2,\ldots$ and $\|f_n-f\|_1\to 0$. Use the maximal inequality to deduce that the ergodic theorem holds for $f$). At this stage you know that the set of $f$'s for which the ergodic theorem holds is a closed set, so you're done if you can find a dense set of $f$'s for which this works. Since you're looking for a dense set you're OK to work with $L^2$. The result is obvious for coboundaries and the orthogonal complement of the space of coboundaries is the set of invariant functions (for which the result is even more obvious). QED. -
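For a concrete ergodic example, the theorem can be watched in action for an irrational rotation of the circle, where the time average of any nice observable converges to its space average (a numerical sketch of mine, not part of any proof above):

```python
import math

alpha = math.sqrt(2) - 1                   # irrational: T(x) = x + alpha mod 1 is ergodic
f = lambda x: math.cos(2 * math.pi * x)    # space average over [0, 1) is 0

x, s = 0.123, 0.0
N = 100_000
for _ in range(N):
    s += f(x)
    x = (x + alpha) % 1.0

# Birkhoff: the time average S_N / N converges to the space average, here 0.
assert abs(s / N) < 1e-3
```

For this particular observable the convergence is in fact O(1/N) (the Birkhoff sum telescopes to a bounded geometric sum), far better than the theorem guarantees in general.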
https://physics.stackexchange.com/questions/451846/does-the-schrodinger-wave-function-associated-with-a-non-moving-free-particle-ch
# Does the Schrödinger wave function associated with a non-moving free particle change in time?

I'm a bit confused by an answer given on this question. In the answer with the animation of a moving free (chargeless) particle and a non-moving free particle (or a free particle with a non-zero momentum expectation value and one with a zero momentum expectation value), the wave function of the non-moving free particle stays stationary. Now, I always thought that the (symmetric) wavefunction of a non-moving free particle isn't moving to the left or right, but that it does spread out with the passage of time. The expectation value of the momentum stays zero, but the wave function spreads out in space, while its spread in momentum space gets smaller (as reflected in the uncertainty principle). While the wavefunction in configuration space spreads out, the wavefunction in momentum space gets "thinner" and higher (approaching a Dirac delta function). So my simple question is: what's going on?

• Maker of the GIF here: the first picture is not animated while the second one is. Figured it would be the clearest. Jun 14 '19 at 21:06

The question shouldn't be about a moving particle. Quantum mechanics has taught us that trajectories aren't the right way to think about quantum particles. Indeed, the notion of a trajectory isn't a well-defined concept in quantum mechanics. The time evolution of the wave function is completely governed by the Hamiltonian (in particular the potential $$V(x)$$).

Now, I always thought that the (symmetric) wavefunction of a non-moving object isn't moving to the left or right, but that it does spread out with the passage of time. The expectation value of the momentum stays zero, but the wave function spreads out in space, while its spread in momentum space gets smaller (as reflected in the uncertainty principle).
You seem to be generalising a very specific application of Schrödinger's equation to all wave functions. You also seem to be combining facts about the wave function from two different potentials, namely the notion of a wave function spreading out. This is true for a free particle, but in general it is not the case (e.g. the eigenstates of the infinite square well). Also, in the case of the free particle, the wave packet does have a group velocity and so we can regard it as "moving", whereas you say that the wave packet will have no group velocity yet still spread out over time. Note: this doesn't have anything to do with a "moving" particle, but rather a "free" particle. Who said that the average momentum of a free particle has to be zero? In general, the expectation of momentum of a free particle is $$\langle k\rangle= \int_{-\infty}^{+\infty} \mathrm dk\ k\ |\phi(k)|^2 \tag{1}$$ where $$\phi(k)$$ are the Fourier coefficients of the plane wave expansion $$\Psi=\frac{1}{\sqrt{2\pi}}\int \mathrm dk\ \phi(k)\;\mathrm e^{i(kx-\omega t)} \tag{2}$$ Indeed, the expectation of momentum of a free particle is in general nonzero.

Conclusion: It doesn't really make sense to talk about a "moving/nonmoving particle" wave function. The closest we can get is the group velocity of the wave packet. Moreover, the time evolution of the wave function is not governed by anything having to do with "motion", but rather by the Hamiltonian. Different potentials will determine different time evolutions of the wave function. Given how broad your question "What's going on" is, it's a bit difficult to answer you further. I can only point out the conceptual errors that I mentioned above.

• I forgot to write down that I was indeed talking about a free particle. Sorry for that! And indeed, nobody says that the average momentum of a free particle has to be zero, but I assumed this in relation to the answer on a question about moving and non-moving objects.
But anyhow, thanks for your answer. Jan 3 '19 at 8:18
• It makes sense to talk about a non-moving particle, in the sense of zero momentum expectation value. The increase of its position uncertainty with time should not be interpreted as motion. Jan 3 '19 at 22:34
• That's fine and I agree. But it seemed that OP was thinking that a moving particle will have a particular kind of wave function, when in reality we may ascribe "motion" to any particle with a nonzero expectation of momentum (e.g. the harmonic oscillator), but this doesn't mean that its positional uncertainty will decrease over time. Jan 3 '19 at 22:41
• I was indeed writing about a free particle that has a zero momentum expectation value (like a particle with a normalized Gaussian wavefunction associated with it). And the increase in the position uncertainty (the spreading of the Gaussian) of course doesn't imply the particle is in motion. It's only the Gaussian that's in (symmetric) motion. Jan 4 '19 at 22:41

Clearly you are talking about a free particle. The most general Schrödinger wave function then is a linear combination of plane waves $$\Psi(\vec r, t)= N \int d\vec k\ \psi(\vec k) e^{i(\vec k \cdot \vec r -\frac{\hbar k^2}{2m} t + \varphi(\vec k ))}$$ where the amplitude of each wave $$\psi(\vec k)$$ and its phase $$\varphi(\vec k)$$ are free to choose. $$N$$ is the normalisation factor. The wave function describes a particle at rest if the expectation value of $$\vec p = -i\hbar \vec \nabla$$ vanishes. Now $$\langle\vec p \rangle = N \int d\vec k\ |\psi(\vec k)|^2 \hbar \vec k$$. It is clear that under this restriction $$\Psi$$ in general still depends on time. The probability density $$|\Psi(\vec r, t)|^2$$ will evolve with time. If $$|\Psi|^2$$ is a Gaussian "wave packet" at $$t=0$$ then its standard deviation will increase indefinitely with time. For a detailed derivation of the spread see https://quantummechanics.ucsd.edu/ph130a/130_notes/node83.html .
I think you might be misunderstanding two points about the Uncertainty Principle, and the reason I think that lies in your following sentence: The expectation value of the momentum stays zero, but the wave function spreads out in space (while the spreading of the wave function in momentum space gets less (as reflected in the uncertainty principle). The two points are these: First, the Uncertainty Principle does not apply to the expectation values of the particle in configuration space and momentum space. It applies to the expected deviations in each space from the expectation values. The principle states that the product of the expected deviations can't be smaller than a certain amount. And that leads me to the second point: The Uncertainty Principle isn't an equality, it's an inequality. While the deviations of the configuration and the momentum cannot both be arbitrarily small, the Uncertainty Principle gives us no reason why they can't both be arbitrarily large. So to finally answer your question about what's going on, the expectation value of x remains some constant point over time, the expectation value of p remains zero over time, and the expected deviation of both from their expected values increases over time. And the Uncertainty Principle, being an inequality, likes this just fine. I'm interpreting your "non moving particle" as "zero momentum expectation value". Now, I always thought that the (symmetric) wavefunction of a non-moving particle isn't moving to the left or right, but that it does spread out in the passage of time. You're right. To be precise it's not necessarily so - the wave function could also be shrinking for some time, then begin to spread. But the initial w.f. must be cleverly arranged and corresponds to a state practically impossible to prepare. 
The expectation value of the momentum stays zero, but the wave function spreads out in space, while its spread in momentum space gets smaller (as reflected in the uncertainty principle).

The last statement is wrong. For a free particle momentum is a constant of the motion, so all average values like $$\langle p \rangle$$ and $$\langle p^2 \rangle$$ are constant. Remember that the uncertainty relation is an inequality. Usually it's far from being an equality. In the present case just that is happening: $$\Delta x$$ increases whereas $$\Delta p$$ stays constant. I could give further details, if you want them. But at a later time, perhaps tomorrow.

Edit $$\let\a=\alpha \let\b=\beta \let\dag=\dagger \def\ket#1{|#1\rangle} \def\bra#1{\langle#1|} \def\avg#1{\langle #1 \rangle} \def\D#1#2{{d#1 \over d#2}} \def\DD#1#2{{d^2#1 \over d#2^2}} \def\PD#1#2{{\partial#1 \over \partial#2}} \def\dx{\dot x} \def\ddx{\ddot x} \def\bt{\bar t}$$ It's necessary to set notations. Many choices are possible, mainly a matter of taste. I'll adopt the following:

• Operators are written as simple letters (lower-case or capital).
• Kets and bras are labelled either by a generic label with no attached meaning (I'll use greek letters here), e.g. $$\ket\a$$, or by a time variable: $$\ket{\a,t}$$.

Time evolution. We have $$\ket{\a,t} = T(t)\,\ket{\a,0} = e^{-i\,H\,t/\hbar}\,\ket{\a,0} \tag1$$ where for a free particle $$H = {p^2 \over 2m}.\tag2$$ We could compute the w.f. at a generic time $$t$$, in order to show how it spreads in time. The expression one finds is quite complicated, not a simple widening of its initial shape. But there is a much easier way to directly compute $$\avg x$$ and $$\avg{x^2}$$ as functions of time: we only have to switch to the Heisenberg picture. Consider a matrix element of some operator $$F$$ between time dependent states $$\ket{\a,t}$$, $$\ket{\b,t}$$.
We have $$\bra{\a,t}\,F\,\ket{\b,t} = \bra{\a,0}\,T^\dag(t)\,F\,T(t)\,\ket{\b,0} = \bra{\a}\,F_t\,\ket{\b}$$ where $$F_t = T^\dag(t)\,F\,T(t) \tag3$$ is the Heisenberg (time-dependent) operator, whereas $$\ket\a$$, $$\ket\b$$ are Heisenberg (time-independent) state vectors. I dropped the now-useless label "0". (Note that $$H_t=H$$, and the same holds, for a free particle, if $$F$$ is a function of $$p$$.) Differentiating (3) wrt $$t$$ and using (1), (2) we get $$i \hbar\>\D{}t\,F_t = [F_t,H] = {1 \over 2m}\,[F_t,p^2].$$ In particular $$i \hbar\>\D{}t\,x_t = {1 \over 2m}\,[x_t,p^2] = i\,{\hbar \over m}\,p.$$ $$\D{}t\,x_t = {p \over m}.\tag4$$ Taking the expectation value (EV) on $$\ket\a$$: $$\D{}t\,\avg{x_t}_\a = {1 \over m}\,\avg p_\a = 0$$ if $$\ket\a$$, as we assumed, is a "non moving" state of the particle. Then $$\avg{x_t}_\a = \avg x_\a = 0.$$ (There is no special restriction in taking 0 as the EV for $$t=0$$; it's only a matter of choosing the coordinates' origin.)

Let's compute $$\avg{x^2_t}_\a$$. The first step is $$i\,\hbar\>\D{}t\,x^2_t = {1 \over 2m}\,[x^2_t, p^2] = i\,{\hbar \over m}\,(x_t\,p + p\,x_t)$$ $$\D{}t\,x^2_t = {1 \over m}\,(x_t\,p + p\,x_t) = x_t\,\dx_t + \dx_t\,x_t.$$ A second differentiation gives $$\DD{}t\,x^2_t = x_t\,\ddx_t + \ddx_t\,x_t + 2\,(\dx_t)^2.$$ But eq. (4) implies $$\ddx_t = 0$$ ($$p$$ is a constant of the motion) so that $$\DD{}t\,x^2_t = 2\,(\dx_t)^2$$ and all further derivatives vanish. So we may write $$x^2_t = x^2 + (x\,\dx + \dx\,x)\,t + \dx^2\,t^2 = x^2 + {1 \over m}\,(x\,p + p\,x)\,t + {1 \over m^2}\,p^2\,t^2 \tag5$$ where $$x$$, $$\dx$$, $$p$$ are computed at $$t=0$$, i.e. coincide with the Schrödinger picture operators.

Now it's time to examine the choice of the initial w.f. We already know it has to satisfy $$\avg p_\a = 0.$$ But we are always free to require $$\avg x_\a = 0$$ too. This is because if a randomly chosen w.f. didn't satisfy that condition, we'd only have to shift the $$x$$-origin to have it satisfied.
A subtler issue comes into play from $$\avg{x\,p+p\,x}_\a$$. Might we safely assume it also vanishes? To answer, recall that this operator has a constant derivative, positive definite (it's $$2\,p^2/m$$). Then if its EV doesn't vanish for $$t=0$$ it will certainly vanish at some other time $$\bt$$. And we have only to take as initial w.f. the one at $$t=\bt$$ to ensure $$\avg{x\,p+p\,x}_\a=0$$. But there's another way out. In order to fulfill our condition a real w.f. is enough. To see that, let's work in the Schrödinger representation: $$\eqalign{\avg{x\,p+p\,x}_\a &= -i\hbar\!\int\!\!dx \left[\psi_\a^*\>x\,\PD{\psi_\a}x + \psi_\a^*\,\PD{}x (x\,\psi_\a)\right] \cr &= -i\hbar\!\int\!\!dx \left[\psi_\a^*\>x\,\PD{\psi_\a}x - \PD{\psi_\a^*}x\>x\,\psi_\a\right]\!.\cr}$$ The integral obviously vanishes if $$\psi_\a$$ is real. Let's summarize. Taking EV's of eq. (5), with the cross term dropping out since $$\avg{x\,p+p\,x}_\a=0$$: $$\avg{x^2_t}_\a = \avg{x^2}_\a + {1 \over m^2}\,\avg{p^2}_\a\,t^2. \tag6$$ Eq. (6) shows that $$\avg{x^2_t}_\a$$ indefinitely increases with time, whereas $$\avg{p^2}_\a$$ stays constant. I just have to prove my previous statement: the wave function could also be shrinking for some time, then begin to spread. To show this a short parenthesis is needed, i.e. a well-known theorem: if $$\psi(x,t)$$ is a solution of the TDSE for a free particle, so is $$\psi^*(x,-t)$$. The theorem holds for more general $$H$$ too, but this simple form is enough for our problem. To prove it you have only to take the complex conjugate of the TDSE and look at the result. Now look at eq. (6). It shows that for all $$t>0$$ $$\avg{x^2_t}_\a > \avg{x^2}_\a.$$ Then take a $$\bt>0$$ at your pleasure and define $$\psi_\b(x,0) = \psi_\a^*(x,\bt).$$ The above theorem ensures that the TDSE solved with $$\psi_\b(x,0)$$ as initial condition starts with $$\avg{x^2_0}_\b = \avg{x^2_\bt}_\a$$ and at time $$t=\bt$$ $$\avg{x^2_\bt}_\b = \avg{x^2_0}_\a < \avg{x^2_0}_\b.$$ Only for $$t>\bt$$ does $$\avg{x^2_t}_\b$$ begin to increase.
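The constancy of $$\avg{p^2}$$ and the quadratic growth of eq. (6) are also easy to check numerically. Here is a sketch (ours, not from the answer) with $$\hbar=m=1$$, a real Gaussian initial wave function on a grid, and exact free-particle evolution applied as a pure phase in momentum space via FFT:

```python
import numpy as np

# Free particle, hbar = m = 1; real Gaussian initial state => <p> = <xp+px> = 0
hbar = m = 1.0
sigma = 1.0
N, L = 2048, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
p = 2*np.pi*hbar*np.fft.fftfreq(N, d=dx)

psi0 = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2 / (4*sigma**2))

def evolve(psi, t):
    # exact free evolution: multiply by exp(-i p^2 t / (2 m hbar)) in momentum space
    return np.fft.ifft(np.exp(-1j * p**2 * t / (2*m*hbar)) * np.fft.fft(psi))

def moments(psi):
    # return <x^2> and <p^2> of a position-space wave function on the grid
    rho_p = np.abs(np.fft.fft(psi))**2
    rho_p /= rho_p.sum()                       # normalized momentum distribution
    x2 = (x**2 * np.abs(psi)**2).sum() * dx
    p2 = (p**2 * rho_p).sum()
    return x2, p2

x2_0, p2_0 = moments(psi0)
for t in (1.0, 2.0):
    x2_t, p2_t = moments(evolve(psi0, t))
    print(t, x2_t, x2_0 + p2_0 * t**2 / m**2, p2_t)
```

The second and third printed columns agree to grid accuracy, confirming eq. (6), while the fourth column ($$\avg{p^2}$$) does not change at all: the evolution only multiplies the momentum-space wave function by a phase.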
• But ain't it so that if the wavefunction (for a particle with zero momentum expectation value) becomes, in the course of time, flat (zero) everywhere in space (but still being normalized), the momentum operator $\vec p = -i\hbar \vec \nabla$ operating on this function is zero everywhere, which means that the uncertainty in momentum becomes zero (while the uncertainty in (x,y,z) becomes infinite)? In other words, the wavefunction in momentum space becomes a Dirac delta function (distribution) at the origin? Jan 4 '19 at 22:59
http://mathoverflow.net/questions/94850/is-there-a-parameterization-of-a-neighbourhood-of-x-in-mathbbrn-into-two-mu
# Is there a parameterization of a neighbourhood of $x\in\mathbb{R}^n$ into two mutually orthogonal sets of variables, with one set parameterizing a pre-defined (n-k)-dimensional submanifold containing $x$? Let $M \subset \mathbb{R}^n$ be a differentiable submanifold with co-dimension $k$. Is there a parameterization of a neighbourhood of $x\in M$ in $\mathbb{R}^n$, so that the variables parameterizing $M$ are orthogonal to the variables parameterizing the directions orthogonal to $M$, throughout the neighbourhood? In other words, I would like a parameterization of the form $\phi:U\subset \mathbb{R}^n \to \mathbb{R}^k\times \mathbb{R}^{n-k}$, such that $\phi^{-1}(0,y) \subset M$ and the variables $x\in \mathbb{R}^k$, $y\in\mathbb{R}^{n-k}$ are orthogonal in $U$, i.e. the metric tensor has the form $g_{ij} = g_{ji} = 0$ for $1\leq i \leq k$, $k+1\leq j \leq n$. In case it is relevant, the manifold I am working with is defined as the intersection of level sets $\{ y_i(x)=0\}$, $i=1\ldots k$, where each $y_i$ is smooth enough and the gradients $\{\nabla y_i\}$ are linearly independent on the level set of mutual intersection $M$. I can find a parameterization such that the orthogonality condition holds exactly at $M$, using the variables $y_i$ as the orthogonal directions, as each $\nabla y_i$ is orthogonal to $M$. However, I am wondering how to extend this to a neighbourhood of $M$? - You can extend the normal vectors of $M$ in small scale and by the inverse function theorem they do not intersect with each other in a small neighborhood of $M$, which is isomorphic to a neighborhood of the zero section of the normal bundle of $M$. –  Yuchen Liu Apr 22 '12 at 15:46 Yes, of course you can. This comes up in the differential-geometric proof of the tubular neighbourhood theorem, for example. You can write out the map fairly explicitly in terms of holonomy, or in your case using your gradient vectors.
–  Ryan Budney Apr 22 '12 at 17:57 @Ryan, tubular neighbourhood is not good here; in this case the equation holds only on $M$, but not in a neighborhood. –  Anton Petrunin Apr 22 '12 at 22:16 I was reminded of this question recently by a related problem, and I remembered a concept that I had not remembered when I originally saw the question, namely É. Cartan's notion of exterior square of second fundamental forms. Using that concept, I can give something of an answer to the question. It's not definitive, but it does show that, for $n{-}k$ and $k$ sufficiently large, there do exist submanifolds $N^{n-k}\subset\mathbb{R}^n$ that do not occur as a leaf of a (local) foliation whose orthogonal plane field is integrable. (This is another way to phrase the OP's problem.) Here is a description of an obstruction: Suppose that $\mathcal{F}$ and $\mathcal{G}$ are foliations of codimensions $k$ and $n{-}k$, respectively, of an open set $U\subset \mathbb{R}^n$, and suppose that they are orthogonal. Thus, if $F\subset TU$ is the rank $n{-}k$ bundle of vectors tangent to $\mathcal{F}$ and $G\subset TU$ is the rank $k$ bundle of vectors tangent to $\mathcal{G}$, then $TU=F\oplus G$ is an orthogonal direct sum. Let $Q^F$ be the section of $G\otimes\mathsf{S}^2(F)$ that gives the second fundamental forms of the leaves of $\mathcal{F}$ and let $Q^G$ be the section of $F\otimes\mathsf{S}^2(G)$ that gives the second fundamental forms of the leaves of $\mathcal{G}$. (Note that I am using the metrics on the two bundles to identify $F$ with $F^\ast$ and $G$ with $G^\ast$.)
Cartan defined an algebraic exterior square, i.e., a quadratic map $$\sigma^F: G\otimes\mathsf{S}^2(F)\longrightarrow \Lambda^2(G)\otimes\Lambda^2(F),$$ which is just the squaring map $G\otimes\mathsf{S}^2(F)\to \bigl(G\otimes\mathsf{S}^2(F)\bigr)\otimes\bigl(G\otimes\mathsf{S}^2(F)\bigr)$, followed by contracting a pair of $F$-indices (so that the result goes into $\bigl(G\otimes F\bigr)\otimes \bigl(G\otimes F\bigr)$), and then followed by skew-symmetrizing in both the $G$- and $F$-pairs independently. (Note that this map is, of course, zero if either $k=1$ or $n{-}k=1$; otherwise it is not zero.) On the other side, there is, of course, the corresponding mapping $$\sigma^G: F\otimes\mathsf{S}^2(G)\longrightarrow \Lambda^2(G)\otimes\Lambda^2(F).$$ (Technically, $\sigma^G$ should go into $\Lambda^2(F)\otimes\Lambda^2(G)$, I guess, but I'm identifying these two tensor products in the obvious way.) With all of this defined, a short calculation with the structure equations shows that one has the identity $$\sigma^F\bigl(Q^F\bigr) + \sigma^G\bigl(Q^G\bigr) = 0\tag{1}$$ for any pair of orthogonal foliations $\mathcal{F}$ and $\mathcal{G}$ on an open set $U\subset\mathbb{R}^n$. Now, it's a matter of linear algebra to check that, when $k$ and $n{-}k$ are sufficiently large, the maps $\sigma^F$ and $-\sigma^G$ do not have the same image. This should not be surprising, since, for $k$ and $n{-}k$ sufficiently large, the space $\Lambda^2(G)\otimes\Lambda^2(F)$ is much larger than either $G\otimes\mathsf{S}^2(F)$ or $F\otimes\mathsf{S}^2(G)$, so there is plenty of room for them to have different images. Their images will also, generally, have different dimensions. For a specific example, consider the case $k=2$ and $n{-}k=4$. Then the ranks of $G$ and $F$ are $2$ and $4$ respectively. 
It's easy to show that, in this case, $\sigma^F$ is surjective while the image of $\sigma^G$ consists only of elements of the form $a\otimes b$, where $a\in \Lambda^2(G)$ is nonzero and $b\in\Lambda^2(F)$ is a simple $2$-form (i.e., its half-rank is either $0$ or $1$). Thus, if you choose a submanifold $N^4\subset\mathbb{R}^6$ such that the exterior square of its second fundamental form is a nondegenerate $2$-form (and the generic $4$-manifold in $\mathbb{R}^6$ will have this property on a dense open set), then $N^4$ cannot be a leaf of a (local) foliation $\mathcal{F}$ that has an orthogonal foliation $\mathcal{G}$ because, as an equation for $Q^G$, the condition $(1)$ will have no solutions on a neighborhood of $N$. Notice that $(1)$ is only the first obstruction one would encounter in trying to solve this problem with a prescribed $N^{n-k}\subset\mathbb{R}^n$. I believe that the case $n{-}k=k=2$, for which $(1)$ is first a nontrivial condition, though not involutive, will become involutive after the first prolongation, though I haven't checked the details. So this case is probably OK, at least in the analytic case. (Note that, even though this case is formally determined, as Anton pointed out in his answer, the symbol of the PDE is degenerate, which is why $(1)$ is still nontrivial in this case, so the 'determined' property is not decisive here.) However, even in the 'overdetermined' case $n{-}k=3$ and $k=2$, the equation $(1)$ is always solvable in the sense that $\sigma^F$ and $\sigma^G$ are each surjective, and it could still be (again, I haven't checked) that the system goes into involution after the first prolongation with the proper initial conditions being described in dimension $3$ (instead of dimension $4$). Thus, it's still possible, as far as I know, that every (analytic?) $N^3\subset\mathbb{R}^5$ is a leaf of a foliation that has an orthogonal foliation by surfaces. - Set $m=n-k$. If $m=1$ or $k=1$ then the answer is YES and I hope you know it. 
In general, you have $m\cdot k$ equations and $n=m+k$ unknowns. I.e. if $m\ge 2$ and $k>2$ or if $m> 2$ and $k\ge 2$ then the system is overdetermined. I think it should be possible to show that there are no solutions in this case. I do not see what happens in the case $m=k=2$; i.e., for a 2-dimensional submanifold in $\mathbb R^4$; maybe there is a solution for any $M$. In this case, if $\phi$ is a solution for generic $M$ then for fixed $x$ the points $\phi(x,y)$ do not lie in the normal plane to $\phi(x,0)\in M$; i.e. the tubular-neighborhood construction is useless here. - I am surprised that you marked my answer as accepted; I did not give a complete solution even in the overdetermined case. –  Anton Petrunin Apr 23 '12 at 20:59 sorry, I guess I misunderstood the purpose of that tool, I can reverse that. Thanks for the ideas though. –  Miranda Holmes-Cerfon Apr 24 '12 at 0:18 Let me point out that the overdetermined nature of the problem when $mk > m{+}k$ is not much evidence that it can't be solved. Usually, one worries about a system being overdetermined when one is trying to solve an initial value problem on a hypersurface. However, in the overdetermined cases here, one needs to solve a system in which one only specifies initial values along a submanifold of codimension $k>1$. It could happen (and I don't know whether it does) that the appropriate codimension for initial values for this system is $k$ or less, in which case, it might be solvable anyway. –  Robert Bryant May 20 '12 at 18:09
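In the hypersurface case $k=1$, where the answer is YES, the tubular-neighbourhood construction mentioned in the comments does give orthogonality throughout a neighbourhood, not just on $M$. A symbolic sketch (illustrative, with $M$ taken to be the unit circle in $\mathbb{R}^2$):

```python
import sympy as sp

theta, r = sp.symbols('theta r', real=True)
# phi(theta, r): the point at signed distance r along the outward unit normal of
# M = {x^2 + y^2 = 1}; theta parameterizes M, r the orthogonal direction
phi = sp.Matrix([(1 + r) * sp.cos(theta), (1 + r) * sp.sin(theta)])
J = phi.jacobian(sp.Matrix([theta, r]))
g = sp.simplify(J.T * J)      # pullback metric in the (theta, r) coordinates
print(g)                      # diag((1+r)**2, 1)
```

The off-diagonal entries of $g$ vanish identically in $r$, not merely at $r=0$; this is exactly the Gauss-lemma phenomenon that fails to generalize when the normal bundle has rank greater than one.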
https://www.physicsforums.com/threads/an-impossible-solution.545058/
# An Impossible Solution

• #1 So, yesterday in class we were asked to try and solve the following problem: Given a, b $\in$ R with a < b, draw the graph of an example of a continuous function f such that f: [a,b] $\rightarrow$ R, f(a) = f(b), and there does not exist c $\in$ (a, b) such that f'(c) = 0. Now in class we arrived at the conclusion that there exists no such solution, however I beg to disagree. Let's suppose that a = -1 and b = 1 which satisfies the inequality. Now for f(a) to = f(b) one function comes to mind, the associated power series: $f(x) = 1 + 2x + 4x^2 + 8x^3 + \cdots + 2^n x^n + \cdots = \frac{1}{1-2x}$ Such that, $f^2(1) = 1$ and $f^2(-1) = 1$, therefore f(a) = f(b). $\frac{d}{dx}\left(\frac{1}{1-2x}\right)^2 = \frac{-4}{(2x-1)^3}$ Therefore, there does not exist f'(c) = 0. If we take the limit as x $\rightarrow$ $\infty$ then f'(x) = 0 however there is no particular element c of (a, b) s.t. f'(c) = 0. My question is am I right? If not where did I go horribly wrong. • #2 Going too fast ? Differentiate that power series and see that at x=0 f ' (x) = 0 • #3 Going too fast ? Differentiate that power series and see that at x=0 f ' (x) = 0 I accidentally posted before I finished lol. • #4 gb7nash Homework Helper This statement is indeed true. I'm not sure why you're taking a limit as x goes to infinity. In any case, if you think the statement is wrong, you need to show there exists a c in (a,b) such that f'(c) = 0 for every f(x). Unfortunately, showing one example doesn't prove anything. Given a, b $\in$ R with a < b, draw the graph of an example of a continuous function f such that f: [a,b] $\rightarrow$ R, f(a) = f(b), and there does not exist c $\in$ (a, b) such that f'(c) = 0. Consider the example f(x) = |x| and a = -1, b = 1. Does this satisfy the hypothesis? Is the conclusion satisfied? • #5 This statement is indeed true. I'm not sure why you're taking a limit as x goes to infinity.
In any case, if you think the statement is wrong, you need to show there exists a c in (a,b) such that f'(c) = 0 for every f(x). Unfortunately, showing one example doesn't prove anything. Consider the example f(x) = |x| and a = -1, b = 1. Does this satisfy the hypothesis? Is the conclusion satisfied? Which statement is true? That there is no solution? Or that there is a solution? • #6 gb7nash Homework Helper This statement: Given a, b $\in$ R with a < b, draw the graph of an example of a continuous function f such that f: [a,b] $\rightarrow$ R, f(a) = f(b), and there does not exist c $\in$ (a, b) such that f'(c) = 0. It is possible to construct an example of a continuous function f such that there does not exist a c $\in$ (a,b) such that f'(c) = 0. Consider f(x) = |x|. • #7 f(x) is discontinuous at 0. • #8 gb7nash Homework Helper f(x) = |x| is continuous at x=0. However, it isn't differentiable at x=0. • #9 Sorry, that's what I meant. • #10 AlephZero Homework Helper In your OP, f(x) = 1/(1-2x) is not continuous (in fact it is not even defined) when x = 1/2.
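AlephZero's point, and the failure of the endpoint hypothesis as well, can be confirmed symbolically. A sketch (ours, not part of the thread) using sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 1 / (1 - 2*x)

# Continuity fails inside [-1, 1]: f blows up at x = 1/2
print(sp.limit(f, x, sp.Rational(1, 2), '-'))   # oo

# The endpoint condition f(a) = f(b) also fails for f**2 on [-1, 1]
print((f**2).subs(x, -1), (f**2).subs(x, 1))    # 1/9 1

# The derivative of f**2 indeed never vanishes...
dfsq = sp.simplify(sp.diff(f**2, x))
print(sp.solve(sp.Eq(dfsq, 0), x))              # []
```

So the example fails two of Rolle's hypotheses at once (continuity on [a,b] and f(a) = f(b)), which is why the nonvanishing derivative does not contradict the theorem.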
https://math.stackexchange.com/questions/1156635/source-of-the-cosh-trick-for-laplacian-eigenfunctions-or-helmholtz-equation
# Source of the “$\cosh$ trick” for Laplacian eigenfunctions or Helmholtz equation solutions? Suppose a smooth function $f : \mathbb{R}^n \to \mathbb{R}$ satisfies the Helmholtz equation, the PDE $\Delta f + k^2 f = 0$. A while ago someone showed me a trick: Define a function $g:\mathbb{R}^{n+1} \to \mathbb{R}$ by: $$g(x_1, \ldots, x_n, x_{n+1}) = f(x_1, \ldots, x_n) \cosh (k x_{n+1})$$ It turns out that $\Delta g = 0$, so $g$ (thus, $f$) can be analyzed using tools from harmonic function theory. My question: Does this trick have a name? Is it well known? Can it be found in books? • "separation of variables", perhaps? – Omnomnomnom Feb 19 '15 at 21:15 • This is kind of the reverse, though. A new variable is introduced and the dimension is increased, and the analysis is simplified because we now deal with a harmonic function. In separation of variables, we decrease the dimension / the number of variables. – Yoni Rozenshein Feb 19 '15 at 21:19 Here is a perspective on this method. Suppose we treat $k$ itself as a coordinate, and perform a Fourier transform with respect to it. This converts the Helmholtz equation $(\Delta+k^2)f=0$ to D'Alembert's equation $(\Delta-\partial_0^2)g=0$. (I have taken $x_0$ to be the conjugate of $k$, and $g$ to be the corresponding Fourier transform of $f$.) Upon defining $x_{n+1}\equiv i x_0$, this becomes $(\Delta+\partial_{n+1}^2)g=0$ which is equivalent to the form identified above. Separating the variable $x_{n+1}$ by writing $g(\mathbf{x},x_{n+1})=f(\mathbf{x})\Phi(x_{n+1})$ then obtains $$\frac{\Delta f}{f}=-\frac{\Phi''}{\Phi}=-k^2$$ where the separation constant is chosen so as to regain the Helmholtz equation. This yields $\Phi(x_{n+1})=A\cosh(kx_{n+1})+B\sinh(kx_{n+1})$, generalizing the case of $\Phi(x_{n+1})=\cosh(kx_{n+1})$ given in the OP.
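The trick itself is easy to verify symbolically. A sketch (illustrative, with $n=2$, symbolic $k$, and $f(x,y)=\sin(kx)+\cos(ky)$ as a sample Helmholtz solution):

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', real=True)

# f satisfies the Helmholtz equation: Laplacian(f) + k^2 f = 0
f = sp.sin(k*x) + sp.cos(k*y)
assert sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2) + k**2 * f) == 0

# append the extra variable z = x_{n+1} via the cosh factor
g = f * sp.cosh(k*z)
lap_g = sp.diff(g, x, 2) + sp.diff(g, y, 2) + sp.diff(g, z, 2)
print(sp.simplify(lap_g))   # 0: g is harmonic in R^3
```

The mechanism is visible term by term: $\partial_z^2$ brings down a factor $+k^2$ from $\cosh(kz)$, which cancels the $-k^2$ produced by the Laplacian acting on $f$.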
http://scholarpedia.org/article/Maps_with_vanishing_denominators
# Maps with vanishing denominators

Laura Gardini et al. (2007), Scholarpedia, 2(9):3277. doi:10.4249/scholarpedia.3277 revision #137336. Curator: Christian Mira

Let $$T$$ be a two-dimensional map, denoted $$(x',y') = T(x,y) = (F(x,y),G(x,y))$$ (here $$(x',y')$$ denotes the image), where at least one of the components $$F$$ or $$G$$ has a denominator that vanishes on a one-dimensional subset of the phase plane. Consequently $$T$$ is not defined in the whole plane and there can arise singular sets called focal points (Mira [1981], Mira et al. [1996], p. 26-33) and prefocal sets. These singularities give rise to topological structures of the attractors and their basins (see the examples page) and to bifurcations that do not occur in continuous maps. At a focal point, one of the components has the form $$0/0\ .$$ Roughly speaking, a prefocal curve is a set of points for which there exists at least one inverse that maps (or "focalizes") the whole set into a single point, the focal point. Note that these singularities also play an important role for smooth maps which have an inverse with this property, i.e. having a denominator that vanishes on a subset of phase space (cf. examples 4 and 5 of polynomial maps on the examples page).
## Definitions and basic properties

In order to simplify the exposition, we assume that only one of the two functions defining the map $$T$$ has a denominator which can vanish, say $\tag{1} x'=F(x,y), \ \ \ \ \ y'=G(x,y)=N(x,y)/D(x,y)$ where $$x$$ and $$y$$ are real variables, and $$F(x,y)\ ,$$ $$N(x,y)$$ and $$D(x,y)$$ are continuously differentiable functions ($$N$$ and $$D$$ without common factors) defined in the whole plane $$\mathbb{R}^2\ .$$ The "set of nondefinition" of the map $$T$$ (given by the set of points where at least one denominator vanishes) reduces to $\tag{2} \delta_{s}=\{(x,y)\in \mathbb{R}^{2}\,\big|\,D(x,y)=0\}.$ Let us assume that $$\delta_{s}$$ is given by the union of smooth curves of the plane. The successive iterations of the two-dimensional map $$T$$ are well defined provided that the initial condition belongs to the set $$E$$ given by $E=\mathbb{R}^{2}\setminus\textstyle\bigcup_{k=0}^{\infty}T^{-k}(\delta_{s})$ where $$T^{-k}(\delta_{s})$$ denotes the set of the rank-$$k$$ preimages of $$\delta_{s}\ ,$$ i.e. the set of points which are mapped into $$\delta_{s}$$ after $$k$$ applications of $$T$$ ($$T^{0}(\delta_{s})=\delta_{s}$$). Indeed, in order to generate full forward orbits by the iteration of the map $$T\ ,$$ the points of $$\delta_{s}\ ,$$ as well as all their preimages of any rank, constitute a set of zero Lebesgue measure which must be excluded from the set of initial conditions, so that $$T:E\rightarrow E\ .$$ Figure 1: Map T given by (1): images of arcs crossing through a curve $$\delta_{s}$$ of non definition (denominator D(x,y)=0). Q is the focal point (N(x,y)/D(x,y)=0/0), $$\delta_{Q}$$ is the prefocal curve (x=F(Q)), $$T^{-1}$$ is the inverse map. Let us consider a bounded and smooth simple arc $$\gamma\ ,$$ parametrized as $$\gamma(\tau)\ ,$$ transverse to $$\delta_{s}\ ,$$ such that $$\gamma\cap\delta_{s}=(x_{0}, y_{0})$$ and $$\gamma(0)=(x_{0},y_{0})\ .$$ Note that $$\gamma$$ is nontangent to the set of nondefinition.
The tangent case is discussed in more detail below. We are interested in the image $$T(\gamma)$$ of $$\gamma\ .$$ As $$(x_{0}, y_{0})\in\delta_{s}$$ we have, according to the definition of $$\delta_{s}\ ,$$ $$D(x_{0},y_{0})=0\ .$$ If $$N(x_{0},y_{0})\neq0\ ,$$ then $$\textstyle\lim_{\tau\to 0^{\pm}}T(\gamma(\tau))=(F(x_{0}, y_{0}),\infty)$$ where $$\infty$$ means either $$+\infty$$ or $$-\infty\ .$$ This means that the image $$T(\gamma)$$ is made up of two disjoint unbounded arcs asymptotic to the line of equation $$x=F(x_{0}, y_{0})\ .$$ A different situation may occur if the point $$(x_{0}, y_{0})\in \delta_{s}$$ is such that not only the denominator but also the numerator vanishes in it, i.e. $$D(x_{0}, y_{0})=N(x_{0}, y_{0})=0\ .$$ In this case, the second component of $$T$$ assumes the form $$0/0\ .$$ This implies that the limit above may give rise to a finite value, so that the image $$T(\gamma)$$ is a bounded arc (see fig. 1a) crossing the line $$x=F(x_{0}, y_{0})$$ in the point $$(F(x_{0}, y_{0}),y)\ ,$$ where $y=\textstyle\lim_{\tau\to 0}G(x(\tau),y(\tau))$ It is clear that the limiting value $$y$$ must depend on the arc $$\gamma\ .$$ Furthermore it may have a finite value along some arcs and be infinite along other ones. This leads to the following definition of focal point and prefocal curve (Bischi et al. [1999]): Definition. Consider the map $$T(x,y)\rightarrow(F(x,y),N(x,y)/D(x,y))\ .$$ A point $$Q=(x_{0}, y_{0})$$ is a focal point of $$T$$ if $$D(x_{0}, y_{0})=N(x_{0}, y_{0})=0$$ and there exist smooth simple arcs $$\gamma(\tau)\ ,$$ with $$\gamma(0)=Q\ ,$$ such that $$\textstyle\lim_{\tau\to 0}T(\gamma(\tau))$$ is finite. The set of all such finite values, obtained by taking different arcs $$\gamma(\tau)$$ through $$Q\ ,$$ is the prefocal set $$\delta_{Q}\ ,$$ which belongs to the line $$x=F(Q)$$. Here we shall only consider simple focal points, i.e.
points which are simple roots of the algebraic system $$N(x,y)=0,D(x,y)=0\ .$$ Thus a focal point $$Q=(x_{0}, y_{0})$$ is simple if $$\overline{N}_{x}\overline{D}_{y}-\overline{N}_{y}\overline{D}_{x}\neq 0\ ,$$ where $$\overline{N}_{x}=((\partial N)/(\partial x))(x_{0}, y_{0})$$ and analogously for the other partial derivatives. In this case (of a simple focal point), there exists a one-to-one correspondence between the point $$(F(Q),y)\ ,$$ in which $$T(\gamma)$$ crosses $$\delta_{Q}\ ,$$ and the slope $$m$$ of $$\gamma$$ in $$Q$$ (as shown in Bischi et al. [1999]): $m\rightarrow(F(Q),y(m)), \ \ \ \ \ with \ \ \ \ \ y(m)=(\overline{N}_{x}+m \overline{N}_{y})/(\overline{D}_{x}+m\overline{D}_{y})$ and $(F(Q),y)\rightarrow m(y), \ \ \ \ \ with \ \ \ \ \ m(y)=(\overline{D}_{x}y- \overline{N}_{x})/(\overline{N}_{y}-\overline{D}_{y}y)$ From the definition of the prefocal curve, it follows that the Jacobian $$det(DT^{-1})$$ must necessarily vanish in the points of $$\delta_{Q}\ .$$ Indeed, if the map $$T^{-1}$$ is defined in $$\delta_{Q}\ ,$$ then all the points of the line $$\delta_{Q}$$ are mapped by $$T^{-1}$$ into the focal point $$Q\ .$$ This means that $$T^{-1}$$ is not locally invertible in the points of $$\delta_{Q}\ ,$$ since it is a many-to-one map, and this implies that its Jacobian cannot be nonzero in the points of $$\delta_{Q}\ .$$ From the relations given above it follows that different arcs $$\gamma_{j}\ ,$$ passing through a focal point $$Q$$ with different slopes $$m_{j}\ ,$$ are mapped by $$T$$ into bounded arcs $$T(\gamma_{j})$$ crossing $$\delta_{Q}$$ in different points $$(F(Q),y(m_{j}))$$ (fig. 1b). Interesting properties are obtained if the inverse of $$T$$ (or the inverses, if $$T$$ is a noninvertible map) is (are) applied to a curve that crosses a prefocal curve. Nonsimple focal points are considered in Bischi et al. [2005], where it is shown that they are generally associated with particular bifurcations (called of class two).
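The slope-to-crossing-point correspondence can be illustrated on a toy map of our own choosing, $$T(x,y)=(x,\,y/x)$$, for which $$N=y$$, $$D=x$$, so $$Q=(0,0)$$ is a simple focal point with $$\overline{N}_{x}=0$$, $$\overline{N}_{y}=1$$, $$\overline{D}_{x}=1$$, $$\overline{D}_{y}=0$$, hence $$y(m)=m$$ on the prefocal line $$x'=F(Q)=0$$. A sympy sketch:

```python
import sympy as sp

tau, m, c = sp.symbols('tau m c', real=True)

# toy map with a vanishing denominator: T(x, y) = (x, y/x); focal point Q = (0, 0)
F = lambda X, Y: X
G = lambda X, Y: Y / X

# arc through Q with slope m and curvature parameter c
gamma = (tau, m*tau + c*tau**2)
x_lim = sp.limit(F(*gamma), tau, 0)
y_lim = sp.limit(G(*gamma), tau, 0)
print(x_lim, y_lim)   # 0 m : the image arc crosses delta_Q at (F(Q), y(m)) = (0, m),
                      # a point depending on the slope m but not on the curvature c

# the inverse T^{-1}(u, v) = (u, u*v) "focalizes" the prefocal line u = 0 into Q,
# and its Jacobian determinant vanishes exactly there
u, v = sp.symbols('u v', real=True)
Tinv = sp.Matrix([u, u*v])
print(sp.det(Tinv.jacobian([u, v])))   # u
```

This matches both general facts above: the crossing point determines only the slope of the preimage arc at $$Q$$ (arcs with different curvature $$c$$ share the same tangent), and $$det(DT^{-1})$$ vanishes along $$\delta_{Q}$$.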
## Case of an invertible map Let $$T$$ be invertible, and $$\delta_{Q}$$ a prefocal curve whose corresponding focal point is $$Q$$ (and several prefocal curves may exist, each having a corresponding focal point). Then each point sufficiently close to $$\delta_{Q}$$ has its rank-1 preimage in a neighborhood of the focal point $$Q\ .$$ If the inverse $$T^{-1}$$ is continuous along $$\delta_{Q}$$ then all the points of $$\delta_{Q}$$ are mapped by $$T^{-1}$$ in the focal point $$Q\ .$$ Roughly speaking we can say that the prefocal curve $$\delta_{Q}$$ is "focalized" by $$T^{-1}$$ in the focal point $$Q\ ,$$ i.e. $$T^{-1}(\delta_{Q})=Q\ .$$ We note that the map $$T$$ is not defined in $$Q\ ,$$ thus $$T^{-1}$$ cannot be strictly considered as an inverse of $$T$$ in the points of $$\delta_{Q}\ ,$$ even if $$T^{-1}$$ is defined in $$\delta_{Q}\ .$$ The relation given above implies that the preimages of different arcs crossing the prefocal curve $$\delta_{Q}$$ in the same point $$(F(Q),y)$$ are given by arcs all crossing the singular set through $$Q\ ,$$ and all with the same slope $$m(y)$$ in $$Q\ .$$ Indeed, consider different arcs $$\omega_{n}\ ,$$ crossing $$\delta_{Q}$$ in the same point $$(F(Q),y)$$ with different slopes, then these arcs are mapped by the inverse $$T^{-1}$$ into different arcs $$T^{-1}(\omega_{n})$$ through $$Q\ ,$$ all with the same tangent, of slope $$m(y)\ ,$$ according to the formula given above (cf. fig. 1c). They must differ by the curvature at the point $$Q\ .$$ ## Case of a non invertible map ### General considerations In the case of continuous noninvertible maps $$T\ ,$$ several focal points may be associated with a given prefocal curve $$\delta_{Q}\ ,$$ each with its own one-to-one correspondence between slopes and points. 
The phase space of a noninvertible map is subdivided into open regions (or zones) $$Z_{k}\ ,$$ whose points have $$k$$ distinct rank-1 preimages, obtained by the application of $$k$$ distinct inverse maps $$T_{j}^{-1}$$ (i.e. such that $$T_{j}^{-1}(x,y)= (x_{j},y_{j})\ ,$$ $$j=1,...,k$$). A specific feature of noninvertible maps is the existence of the critical set $$LC$$ defined as the locus of points having at least two coincident rank-1 preimages, located on the set of merging preimages denoted by $$LC_{-1}\ .$$ In any neighborhood of a point of $$LC_{-1}$$ there are at least two distinct points mapped by $$T$$ into the same point, so that the map $$T$$ is not locally invertible in the points of $$LC_{-1}\ ,$$ which implies that for differentiable maps the set $$LC_{-1}$$ is included in the "set $$J_0$$" of points in which the Jacobian of $$T$$ vanishes: $J_{0}=\{(x,y)\in \mathbb{R}^{2}\,\big|\,det(DT)=0\}$ and $$LC_{-1}\subseteq J_0\ .$$ Segments of the critical curve $$LC=T(LC_{-1})$$ are boundaries that separate different regions $$Z_{k}\ ,$$ but the converse is not generally true, that is, boundaries of regions $$Z_{k}\ ,$$ which are not portions of $$LC\ ,$$ may exist (this happens, for example, in polynomial maps having an inverse function with a vanishing denominator, as shown in Bischi et al. [1999]).
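These sets are easy to compute for a polynomial toy example (ours, not from the article): for $$T(x,y)=(x^{2}+y,\,y)$$ the Jacobian determinant is $$2x$$, so $$J_{0}=LC_{-1}=\{x=0\}$$, $$LC=T(LC_{-1})$$ is the line $$u=v$$, and a point $$(u,v)$$ has two rank-1 preimages for $$u>v$$ (a $$Z_{2}$$ region) and none for $$u<v$$ (a $$Z_{0}$$ region). A sympy sketch:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)

T = sp.Matrix([x**2 + y, y])          # a Z0-Z2 noninvertible toy map
detJ = sp.det(T.jacobian([x, y]))
print(detJ)                           # 2*x  ->  J0 = LC_{-1} = {x = 0}

LC = T.subs(x, 0)                     # LC = {(y, y)}: the critical line u = v
print(LC.T)

# rank-1 preimages of (u, v): the two branches x = +/- sqrt(u - v), y = v
pre = sp.solve([sp.Eq(x**2 + y, u), sp.Eq(y, v)], [x, y], dict=True)
print(pre)                            # real for u > v (Z2), none for u < v (Z0)
```

Here $$LC_{-1}=J_{0}$$ with no strict inclusion; the strict inclusion discussed next arises for maps whose inverse has a vanishing denominator.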
This fact is related to the existence of a set which is mapped by $$T$$ into a single point; such a set belongs to $$J_0$$ but is not critical, so that we have the strict inclusion $LC_{-1}\subset J_0\ .$ Another distinguishing feature of many noninvertible maps is the existence of a set of points, which we shall denote by $$J_{C}\ ,$$ across which the sign of the Jacobian of $$T\ ,$$ $$\det(DT)\ ,$$ changes. From the geometric action of the foliation of the Riemann plane we can also say that the critical set $$LC_{-1}$$ must belong to $$J_{C}\ .$$ In fact, a plane region $$U$$ which intersects $$LC_{-1}$$ is "folded" along $$LC$$ into the side with more preimages, and the two folded images have opposite orientation; this implies that the Jacobian of the map has different signs in the two portions of $$U$$ separated by $$LC_{-1}\ .$$ So, $$LC_{-1}\subseteq J_C\ .$$ From the properties of maps with a vanishing denominator it follows that a focal point $$Q$$ generally belongs to the set $$\overline {LC}_{-1}\cap \delta_s\ ,$$ where $$\overline {LC}_{-1}$$ denotes the closure of $$LC_{-1}\ ,$$ but in particular bifurcation cases, in which $$\delta_s$$ belongs to $$J_{C}\ ,$$ a focal point $$Q$$ may not belong to $$LC_{-1}\ .$$ The geometric behavior and the plane's foliation are different in the two cases. This leads to two different situations, according to whether or not the focal points belong to the set $$\overline {LC}_{-1}\ .$$

### The focal points do not belong to the closure of the set of merging preimages $$\overline {LC}_{-1}$$

The following properties have been shown in Bischi et al. [1999].

• (a) For each prefocal curve $$\delta_{Q}$$ we have $$LC \cap\delta_{Q}=\varnothing$$.

• (b) If all the inverses are continuous along a prefocal curve $$\delta_{Q}\ ,$$ then the whole prefocal set $$\delta_{Q}$$ belongs to a unique region $$Z_{k}$$ in which $$k$$ inverse maps $$T_{j}^{-1}\ ,$$ $$j=1,...,k\ ,$$ are defined (cf. the link noninvertible maps).
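A minimal instance of the strict inclusion $$LC_{-1}\subset J_0$$ mentioned above can be sketched with the hypothetical polynomial map $$T(x,y)=(xy,y)$$ (our own illustration): its inverse $$T^{-1}(u,v)=(u/v,v)$$ has a vanishing denominator, $$\det(DT)=y$$ vanishes on the line $$y=0\ ,$$ and that whole line is mapped by $$T$$ into the single point $$(0,0)\ ,$$ although no preimages merge away from it.

```python
# Hypothetical polynomial map T(x, y) = (x*y, y), our own illustration:
# its inverse T^{-1}(u, v) = (u/v, v) has a vanishing denominator, and
# det(DT) = y, so J_0 is the line y = 0.  That whole line is mapped by T
# into the single point (0, 0), yet it carries no merging preimages, so
# here the set of merging preimages is strictly smaller than J_0.

def T(x, y):
    return (x * y, y)

def T_inv(u, v):
    return (u / v, v)   # defined only for v != 0

# The whole line y = 0 collapses into one point:
assert all(T(x, 0.0) == (0.0, 0.0) for x in (-2.0, -0.5, 1.0, 3.0))

# Away from y = 0 the map is one-to-one, so no preimages merge there:
assert T_inv(*T(1.5, 0.25)) == (1.5, 0.25)
assert T_inv(*T(-4.0, 2.0)) == (-4.0, 2.0)
```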
It is plain that for a prefocal curve $$\delta_{Q}$$ at least one inverse is defined that "focalizes" it into a focal point $$Q\ .$$ However, other inverses may exist that "focalize" it into distinct focal points, all associated with the same prefocal curve $$\delta_{Q}\ .$$ These focal points are denoted by $$Q_{j}=T_{j}^{-1}(\delta_{Q})\ ,$$ $$j=1,...,n\ ,$$ with $$n\le k\ .$$ For each focal point $$Q_{j}$$ the same results given above can be obtained with $$T^{-1}$$ replaced by $$T_{j}^{-1}\ ,$$ so that for each $$Q_{j}$$ a one-to-one correspondence $$m_{j}(y)$$ in the form given above is defined. With similar arguments it is easy to see that an arc $$\omega$$ crossing $$\delta_{Q}$$ at a point $$(F(Q),y)\ ,$$ where $$F(Q)=F(Q_{j})$$ for any $$j\ ,$$ is mapped by each $$T_{j}^{-1}$$ into an arc $$T_{j}^{-1}(\omega)$$ through the corresponding $$Q_{j}$$ with the slope $$m_{j}(y)\ .$$ If different arcs are considered, crossing $$\delta_{Q}$$ at the same point, then these are mapped by each inverse $$T_{j}^{-1}$$ into different arcs through $$Q_{j}\ ,$$ all with the same tangent. We note that property (a) given above implies that the critical curve $$LC$$ is generally asymptotic to the prefocal curves (see Figure 2b of First Subpage; several examples are also shown in Bischi et al. [1999]).

### The focal points belong to the closure of the set of merging preimages $$\overline {LC}_{-1}$$

When the focal points belong to $$\overline {LC}_{-1}$$ (the closure of $$LC_{-1}$$), the "geometrical" situations of the phase plane, and the bifurcation types, are more complex than in the previous case (see Bischi et al. [2003]). This is due to the fact that now $$LC$$ has contact points at finite distance with the prefocal curves. The property $$Q_{j}=T_{j}^{-1}(\delta_{Q})\ ,$$ $$j=1,...,n\ ,$$ with $$n\le k\ ,$$ does not hold.
Now in the generic case a given prefocal curve $$\delta_{Q}$$ is not associated with several focal points $$Q_{j}\ .$$ Only one of the inverses $$T_{j}^{-1}$$ maps a non-critical point of a given prefocal curve into its related focal point, so that we can write $$Q=T_{j}^{-1}(F(Q),y)$$ (or $$Q=T_{j}^{-1}(\delta_{Q})$$ for short), but the index $$j$$ depends on the non-critical point $$(F(Q),y)$$ considered on $$\delta_{Q}\ .$$ For this reason the previous situation of $$\delta_{Q}$$ (focal points not belonging to $$\overline {LC}_{-1}$$) appears as nongeneric (indeed it may result from the merging of two prefocal curves $$\delta_{Q}^{r}$$ and $$\delta_{Q}^{s}$$ without merging of the corresponding focal points, as shown in Bischi et al. [2003, 2005]).

Figure 2: Focalization in the case of section 3.3, for a $$Z_{0}-Z_{2}$$ noninvertible map $$T$$ (cf. the link noninvertible maps). $$LC$$ is the critical curve separating a $$Z_{0}$$ region (a point has no preimage) from a $$Z_{2}$$ one (a point has two rank-1 preimages). A point of $$LC$$ has two coincident rank-1 preimages located on $$LC_{-1}\ .$$ $$T_{1}^{-1}$$ and $$T_{2}^{-1}$$ are the two determinations of the inverse map. $$Q_{1}$$ and $$Q_{2}$$ are the two focal points. Each of the two determinations of the inverse map focalizes a different segment of the prefocal line $$\delta_{Q_{i}}$$ (defined by $$x=F(Q_{i})$$), $$i=1,2\ ,$$ that is $$T_{1}^{-1}(\delta^{'}_{Q_{i}})=Q_{i}\ ,$$ $$T_{2}^{-1}(\delta^{''}_{Q_{i}})=Q_{i}\ ,$$ with $$T_{2}^{-1}(\delta^{'}_{Q_{i}})\cup T_{1}^{-1}(\delta^{''}_{Q_{i}})=\pi_{i}\ .$$

A qualitative illustration is given in Fig. 2, where a situation with two prefocal curves is represented for a noninvertible map $$T\ ,$$ $$(x,y)\rightarrow (x',y')\ ,$$ of type $$Z_{0}-Z_{2}$$ (see noninvertible maps).
The inverse relation $$T^{-1}(x',y')$$ has two components in the region $$Z_{2}\ ,$$ denoted by $$T_{1}^{-1}$$ and $$T_{2}^{-1}\ ,$$ and no real components in the region $$Z_{0}\ .$$ The set of nondefinition $$\delta_{s}$$ is a simple straight line, and there are two prefocal lines, $$\delta_{Q_{i}}\ ,$$ of equation $$x=F(Q_{i})\ ,$$ associated with the focal points $$Q_{i}\ ,$$ $$i=1,2\ ,$$ respectively, and $$V_{i}=LC\cap \delta_{Q_{i}}$$ are the points of tangency between $$LC$$ and the two prefocal curves. Let $$\delta^{'}_{Q_{i}}$$ be the segment of $$\delta_{Q_{i}}$$ such that $$y<y(V_{i})$$ (solid line in Fig. 2), and $$\delta^{''}_{Q_{i}}$$ the segment of $$\delta_{Q_{i}}$$ such that $$y>y(V_{i})$$ (dashed line in Fig. 2). The "focalization" occurs in the following way: $T_{1}^{-1}(\delta^{'}_{Q_{i}})=Q_{i}, \ \ \ \ \ \ \ \ T_{2}^{-1}(\delta^{''}_{Q_{i}})=Q_{i}$ where $$T_{2}^{-1}(\delta^{'}_{Q_{i}})\cup T_{1}^{-1}(\delta^{''}_{Q_{i}})=\pi_{i}\ ,$$ $$i=1,2\ ,$$ the $$\pi_{i}$$ being the two lines passing through the focal points $$Q_{i}$$ and tangent to $$LC_{-1}$$ at these points. When $$\delta_{Q_{1}}\rightarrow \delta_{Q_{2}}\ ,$$ due to a parameter variation, without merging of the focal points, the points $$V_{i}$$ on the prefocal curves tend to infinity, i.e. $$\delta_{Q_{1}}=\delta_{Q_{2}}$$ becomes an asymptote of $$LC\ .$$ These situations are illustrated by Example 2 of First Subpage.

## Some dynamic properties of focal points

Important effects on the geometric and dynamical properties of the map $$T$$ can be observed, due to the existence of a vanishing denominator.
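Before turning to these dynamic effects, the half-line focalization pictured in Fig. 2 can be mirrored numerically. The sketch below uses a hypothetical toy map $$T(x,y)=(x,(x+y^{2})/y)$$ (our own illustration, not the article's example): $$\delta_{s}$$ is the line $$y=0\ ,$$ the focal point is $$Q=(0,0)\ ,$$ $$LC_{-1}$$ is the parabola $$x=y^{2}$$ (so $$Q\in\overline{LC}_{-1}$$), the prefocal line is $$x'=0\ ,$$ and $$LC$$ is the parabola $$y'^{2}=4x'\ ,$$ tangent to the prefocal line at $$V=(0,0)\ .$$ Each of the two inverse determinations focalizes only one half of the prefocal line into $$Q\ .$$

```python
import math

# Hypothetical Z_0 - Z_2 toy map with denominator, our own illustration
# (not the article's example): T(x, y) = (x, (x + y**2)/y).
# delta_s : y = 0; focal point Q = (0, 0); LC_{-1} is the parabola
# x = y**2 (so Q lies on its closure); prefocal line delta_Q : x' = 0;
# LC : y'**2 = 4 x' is tangent to delta_Q at V = (0, 0).

def T1_inv(xp, yp):
    d = math.sqrt(yp**2 - 4 * xp)   # real only in the Z_2 region
    return (xp, (yp - d) / 2)

def T2_inv(xp, yp):
    d = math.sqrt(yp**2 - 4 * xp)
    return (xp, (yp + d) / 2)

# On delta_Q each determination focalizes only one half-line into Q
# (the labelling of T1_inv / T2_inv is a convention of this sketch):
assert T1_inv(0.0,  3.0) == (0.0, 0.0)   # half y' > y(V): focalized by T1
assert T2_inv(0.0, -3.0) == (0.0, 0.0)   # half y' < y(V): focalized by T2
# The other determination sends each half onto the line x = 0 through Q,
# which is tangent to LC_{-1} at Q (the line "pi" of the text):
assert T1_inv(0.0, -3.0) == (0.0, -3.0)
assert T2_inv(0.0,  3.0) == (0.0,  3.0)
```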
Indeed, a contact between a curve segment $$\gamma$$ and the singular set $$\delta_{s}$$ causes noticeable qualitative changes in the shape of the image $$T(\gamma)\ .$$ Moreover, a contact of an arc $$\omega$$ with a prefocal curve $$\delta_{Q}$$ gives rise to important qualitative changes in the shape of the preimages $$T_{j}^{-1}(\omega)\ .$$ When the arcs $$\omega$$ are portions of phase curves of the map $$T\ ,$$ such as invariant closed curves, stable or unstable sets of saddles, or basin boundaries, such contacts between singularities of different nature generally induce important qualitative changes, which constitute new types of global bifurcations that change the structure of the attracting sets, or of their basins. In order to simplify the description of the geometric and dynamic properties of maps with a vanishing denominator, and of their particular global bifurcations, we assume that $$\delta_{s}$$ and $$\delta_{Q}$$ are made up of branches of simple curves of the plane. Let us describe what happens to the images of a small curve segment $$\gamma$$ when it has a tangential contact with $$\delta_{s}$$ and then crosses it in two points, and what happens to the preimages of a small curve segment $$\omega$$ when it has a contact with a prefocal curve $$\delta_{Q}$$ and then crosses it in two points.

### Action of the map

Figure 3: Action of the map $$T$$ on a bounded curve segment $$\gamma\ .$$ $$\delta_{s}$$ is the set of nondefinition.

Consider first a bounded curve segment $$\gamma$$ that lies entirely in a region in which no denominator of the map $$T$$ vanishes, so that the map is continuous at all the points of $$\gamma\ .$$ As the arc $$\gamma$$ is a compact subset of $$\mathbb{R}^{2}\ ,$$ its image $$T(\gamma)$$ is also compact (see the upper qualitative sketch in Fig. 3). Suppose now that $$\gamma$$ moves towards $$\delta_{s}\ ,$$ until it becomes tangent to it at a point $$A_{0}=(x_{0},y_{0})$$ which is not a focal point.
This implies that the image $$T(\gamma)$$ is given by the union of two disjoint and unbounded branches, both asymptotic to the line $$\sigma$$ of equation $$x=F(x_{0},y_{0})\ .$$ Indeed, $$T(\gamma)=T(\gamma_{a})\cup T(\gamma_{b})\ ,$$ where $$\gamma_{a}$$ and $$\gamma_{b}$$ are the two arcs of $$\gamma$$ separated by the point $$A_{0}=\gamma\cap\delta_{s}\ .$$ The map $$T$$ is not defined at $$A_{0}\ ,$$ and the limit of $$T(x,y)$$ assumes the form $$(F(x_{0},y_{0}),\infty)$$ as $$(x,y)\rightarrow A_{0}$$ (along $$\gamma_{a}\ ,$$ as well as along $$\gamma_{b}$$). In such a situation any image of $$\gamma$$ of rank $$k>1\ ,$$ given by $$T^{k}(\gamma)\ ,$$ includes two disjoint unbounded branches, asymptotic to the rank-$$(k-1)$$ image of the line $$\sigma\ ,$$ $$T^{k-1}(\sigma)\ .$$ When $$\gamma$$ crosses through $$\delta_{s}$$ at two points, say $$A_{1}=(x_{1},y_{1})$$ and $$A_{2}=(x_{2},y_{2})\ ,$$ both different from focal points, then the asymptote $$\sigma$$ splits into two disjoint asymptotes $$\sigma_{1}$$ and $$\sigma_{2}$$ of equations $$x=F(x_{1},y_{1})$$ and $$x=F(x_{2},y_{2})$$ respectively, and the image $$T(\gamma)$$ is given by the union of three disjoint unbounded branches (see the lower sketch in Fig. 3). When $$\gamma$$ is, for example, the local unstable manifold $$W^{u}$$ of a saddle point or saddle cycle, the qualitative change of $$T(\gamma)\ ,$$ due to a contact between $$\gamma$$ and $$\delta_{s}$$ as described above, may represent an important contact bifurcation of the map $$T\ .$$ Indeed the creation of a new unbounded branch of $$W^{u}\ ,$$ due to a contact with $$\delta_{s}\ ,$$ may cause the creation of homoclinic points, from new transverse intersections between the stable and unstable sets, $$W^{s}$$ and $$W^{u}\ ,$$ of the same saddle point (or cycle).
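The branch-splitting effect described above can be visualized numerically on a hypothetical toy map $$T(x,y)=(x,(x+y)/y)$$ (our own illustration, with $$\delta_{s}$$ the line $$y=0$$). A vertical segment $$\gamma: x=c\ ,$$ $$c\neq 0\ ,$$ crosses $$\delta_{s}$$ at the non-focal point $$A_{0}=(c,0)\ ,$$ and the two halves of $$\gamma$$ are sent to unbounded branches along the asymptote $$\sigma: x'=F(c,0)=c\ .$$

```python
# Hypothetical toy map T(x, y) = (x, (x + y)/y), our own illustration,
# with delta_s : y = 0.  The vertical segment gamma : x = c (c != 0)
# crosses delta_s at the non-focal point A_0 = (c, 0); its two halves
# gamma_a (y < 0) and gamma_b (y > 0) are sent to unbounded branches
# along the common asymptote sigma : x' = F(c, 0) = c.

def T(x, y):
    return (x, (x + y) / y)

c = 2.0
for y in (1e-3, 1e-6, -1e-3, -1e-6):   # points of gamma approaching A_0
    xp, yp = T(c, y)
    assert xp == c                      # images stay on the line x' = c ...
    assert abs(yp) > 1e3                # ... while |y'| blows up
print("both image branches are asymptotic to x' =", c)
```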
In such a case it is worth noting that the corresponding homoclinic bifurcation does not come from a tangential contact between $$W^{u}$$ and $$W^{s}\ .$$ For maps with a vanishing denominator, this implies that homoclinic points can be created without a homoclinic tangency between $$W^{u}$$ and $$W^{s}\ ,$$ from the sudden creation of unbounded branches of $$W^{u}$$ when it crosses through $$\delta_{s}$$ (see Bischi et al. [1999]). If before the bifurcation $$W^{u}$$ is associated with a chaotic attractor, the homoclinic bifurcation resulting from the contact between $$W^{u}$$ and $$\delta_{s}$$ may give rise to an unbounded chaotic attracting set made up of unbounded, but not diverging, chaotic trajectories (see Bischi et al. [2000]). If before the bifurcation $$W^{u}$$ is not associated with a chaotic attractor, the homoclinic bifurcation resulting from the contact between $$W^{u}$$ and $$\delta_{s}$$ may give rise to global bifurcations of the basin (cf. Example 3 of First Subpage). If the map is noninvertible, a direct consequence of the above arguments concerns the action of the curve of nondefinition $$\delta_{s}$$ on $$LC_{-1}\ .$$ If $$\overline {LC}_{-1}$$ has $$n$$ transverse intersections with the set $$\delta_{s}$$ at non-focal points $$P_{i}=(x_{i},y_{i})\ ,$$ $$i=1,..,n\ ,$$ then the critical set $$LC=T(LC_{-1})$$ includes $$(n+1)$$ disjoint unbounded branches, separated by the $$n$$ asymptotes $$\sigma_{i}$$ of equation $$x=F(x_{i},y_{i})\ ,$$ $$i=1,..,n\ .$$

### Action of the inverses

Figure 4: Action of the inverse map $$T^{-1}$$ when the map $$T$$ is invertible.
• (a) Let $$T$$ be an invertible map, $$T(x,y)=(F(x,y),N(x,y)/D(x,y))\ .$$ Consider a smooth curve segment $$\omega$$ that moves towards a prefocal curve $$\delta_{Q}$$ until it crosses through $$\delta_{Q}$$ (see Fig. 4), so that only one focal point $$Q=T^{-1}(\delta_{Q})$$ is associated with $$\delta_{Q}\ .$$ The prefocal set $$\delta_{Q}$$ belongs to the line of equation $$x=F(Q)\ ,$$ and the one-to-one correspondences between slopes and points hold, as given in Section 1. When $$\omega$$ moves toward $$\delta_{Q}\ ,$$ its preimage $$\omega_{-1}=T^{-1}(\omega)$$ moves towards $$Q\ .$$ If $$\omega$$ becomes tangent to $$\delta_{Q}$$ at a point $$C=(F(Q),y_{c})\ ,$$ then $$\omega_{-1}$$ has a cusp point at $$Q\ .$$ The slope of the common tangent to the two arcs that join at $$Q$$ is given by $$m(y_{c})\ .$$ If the curve segment $$\omega$$ moves further, so that it crosses $$\delta_{Q}$$ at two points $$(F(Q),y_{1})$$ and $$(F(Q),y_{2})\ ,$$ then $$\omega_{-1}$$ forms a loop with a double point at the focal point $$Q\ .$$ Indeed, the two portions of $$\omega$$ that intersect $$\delta_{Q}$$ are both mapped by $$T^{-1}$$ into arcs through $$Q\ ,$$ and the tangents to these two arcs of $$\omega_{-1}\ ,$$ issuing from the focal point, have different slopes, $$m(y_{1})$$ and $$m(y_{2})$$ respectively, according to the formulas given in Section 1.

Figure 5: Action of the inverse map $$T^{-1}$$ when $$T$$ is a $$Z_{0}-Z_{2}$$ noninvertible map (cf. the link noninvertible maps).
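The cusp-and-loop mechanism can be reproduced on a hypothetical invertible toy map with denominator, $$T(x,y)=(x,(x+y)/y)$$ (our own illustration, not the article's example): its inverse is $$T^{-1}(x',y')=(x',x'/(y'-1))\ ,$$ the prefocal line $$\delta_{Q}$$ is $$x'=0\ ,$$ and the focal point is $$Q=(0,0)\ .$$ The sketch checks the focalization $$T^{-1}(\delta_{Q})=Q$$ and shows that an arc crossing $$\delta_{Q}$$ at two points has a preimage passing through $$Q$$ at both crossings, i.e. a loop with a double point at the focal point.

```python
# Hypothetical toy map T(x, y) = (x, (x + y)/y); its inverse is
# T^{-1}(x', y') = (x', x'/(y' - 1)), defined for y' != 1.
# Prefocal line delta_Q : x' = 0, focal point Q = (0, 0).

def T_inv(xp, yp):
    return (xp, xp / (yp - 1.0))

# (i) Focalization: every point of delta_Q (with y' != 1) is sent to Q.
assert all(T_inv(0.0, y) == (0.0, 0.0) for y in (-3.0, 0.5, 2.0, 7.0))

# (ii) The arc omega(t) = (t**2 - 0.25, t) crosses delta_Q at t = +/- 0.5;
# its preimage passes through Q at BOTH crossings, giving a loop with a
# double point at the focal point.
def omega(t):
    return (t**2 - 0.25, t)

p_plus = T_inv(*omega(0.5))
p_minus = T_inv(*omega(-0.5))
assert p_plus == p_minus == (0.0, 0.0)
```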
• (b) Now let $$T$$ be a noninvertible map with focal points not located on $$LC_{-1}\ .$$ In this case, $$k\ge 1$$ distinct focal points $$Q_{j}\ ,$$ $$j=1,...,k\ ,$$ may be associated with a prefocal curve $$\delta_{Q}\ .$$ Then each inverse $$T_{j}^{-1}\ ,$$ $$j=1,...,k\ ,$$ gives a distinct preimage $$\omega_{-1}^{j}=T_{j}^{-1}(\omega)$$ which has a cusp point at $$Q_{j}\ ,$$ $$j=1,...,k\ ,$$ when the arc $$\omega$$ is tangent to $$\delta_{Q}\ .$$ Each preimage $$\omega_{-1}^{j}$$ gives rise to a loop at $$Q_{j}$$ when the arc $$\omega$$ intersects $$\delta_{Q}$$ at two points (see Fig. 5, concerning the case $$k=2$$). When $$\omega$$ is an arc belonging to a basin boundary $$\mathcal{F}\ ,$$ the qualitative modifications of the preimages $$T_{j}^{-1}(\omega)\ ,$$ due to a tangential contact of $$\omega$$ with the prefocal curve, can be particularly important for the global dynamical properties of the map $$T\ .$$ As a frontier $$\mathcal{F}$$ is generally backward invariant, i.e. $$T^{-1}(\mathcal{F})=\mathcal{F}\ ,$$ if $$\omega$$ is an arc belonging to $$\mathcal{F}\ ,$$ then all its preimages of any rank must belong to $$\mathcal{F}\ .$$ This implies that if a portion $$\omega$$ of $$\mathcal{F}$$ has a tangential contact with a prefocal curve $$\delta_{Q}\ ,$$ then necessarily at least $$k$$ cusp points, located at the focal points $$Q_{j}\ ,$$ are included in the boundary $$\mathcal{F}\ .$$ Moreover, if the focal points $$Q_{j}$$ have preimages, then these preimages also belong to $$\mathcal{F}\ ,$$ so that further cusps exist on $$\mathcal{F}\ ,$$ with tips at each such preimage. It follows that if the basin boundary $$\mathcal{F}$$ was smooth before the contact with the prefocal curve $$\delta_{Q}\ ,$$ such a contact gives rise to points of nonsmoothness, which may be infinitely many if some focal point $$Q_{j}$$ has preimages of any rank, with the possibility of fractalization of $$\mathcal{F}$$ when it becomes nowhere smooth.
When $$\mathcal{F}$$ crosses through $$\delta_{Q}$$ at two points, after the contact $$\mathcal{F}$$ must contain at least $$k$$ loops with double points at the $$Q_{j}\ .$$ Also in this case, if some focal point $$Q_{j}$$ has preimages, other loops appear (even infinitely many, with the possibility of fractalization) with double points at the preimages of any rank of $$Q_{j}\ ,$$ $$j=1,...,n\ .$$

• (c) Whatever the map $$T$$ (invertible or not, with focal points on $$\overline {LC}_{-1}$$ or not), a contact of a basin boundary with a prefocal curve gives rise to a new type of basin bifurcation that causes the creation of cusp points and, after the crossing, of loops called lobes (this usage of "lobe" is distinct from that used in transport theory) along the basin boundary. This may give rise to a very particular fractalization of the basin boundary (see Example 1 of First Subpage).

• (d) Let $$T$$ be a noninvertible map with focal points not located on $$\overline {LC}_{-1}\ .$$ In this case, the contact of two lobes on $$LC_{-1}$$ (related to a contact of $$LC$$ with the basin boundary) gives rise to a crescent (Bischi et al. [1999]) bounded by the two focal points from which the lobes issued. The creation of "crescents", resulting from the contact of lobes, is specific to noninvertible maps with denominator, when the focal points are not located on $$\overline {LC}_{-1}\ .$$ It requires the intersection of the boundary with a prefocal curve (located in a region with more than one inverse), at which the lobes are created, followed by a contact with a critical curve, causing the contact and merging of the lobes. At the contact the lobes are not tangent to $$LC_{-1}\ ;$$ after the contact, they merge, creating the crescent (see Example 1 of First Subpage).
• (e) If $$T$$ is a noninvertible map with focal points located on $$\overline {LC}_{-1}\ ,$$ then in the generic case we have a behavior similar to that of the invertible case, in which only one focal point is associated with $$\delta_{Q}\ ,$$ but in a more complex situation with respect to the role of the components of the inverse map on $$\delta_{Q}\ ,$$ and the presence of the arcs denoted $$\delta '_{Q_{i}}$$ and $$\delta ''_{Q_{i}}$$ in Fig. 2. Details on this situation are given in Bischi et al. [2003]. Now a crescent does not result from the contact of two lobes, but from the contact of a lobe (issuing from a focal point) with another focal point. This situation is specific to noninvertible maps with denominator, when the focal points are located on $$\overline {LC}_{-1}\ .$$ It requires the intersection of a basin boundary with a prefocal curve, followed by the contact of the resulting lobe with a focal point.

## Further remarks

The theory of focal points and prefocal curves is also useful in understanding some properties of maps defined in the whole plane $$\mathbb{R}^{2}\ ,$$ having at least one inverse map with a vanishing denominator. Such maps may have the property that, among the points at which the Jacobian vanishes, there exists a curve which is mapped into a single point (see Bischi et al. [1999]). Another noticeable property of these maps is that a curve at which the denominator of some inverse vanishes may separate regions of the phase plane characterized by a different number of preimages, even if it is not a critical curve of rank 1 (a critical curve of rank 1 is defined as a set of points having at least two merging rank-1 preimages). Such a case is shown in First Subpage with Example 5. At least one inverse is not defined on these non-critical boundary curves, due to the vanishing of some denominator.
In a two-dimensional map, the role of such a curve is the analogue of a horizontal asymptote in a one-dimensional map, separating the range into intervals with different numbers of rank-1 preimages (Bischi et al. [1999]). The existence of focal points of an inverse map can also cause the creation of particular attracting sets. Indeed a focal point, generated by the inverse map, may behave like a knot, where infinitely many invariant curves of an attracting set shrink into a set of isolated points. Example 4 of First Subpage shows this situation. Concerning the relations between the concepts of focal point and prefocal curve, proposed in the framework of the iteration of two-dimensional real maps and expressed in the style and terminology of the theory of dynamical systems, and the concepts of exceptional locus and blow-up, used in the study of (a single application of) rational maps in the literature on algebraic geometry, it would be very interesting to create a link between these two streams of literature. Indeed it seems that the first sign of such a possible link is given in Harris [1992], where the loop situation of Figure 4 is described. Nevertheless, note that the topic of this article is not limited to rational maps (cf. the hypotheses of Section 1), so the numerator and the denominator of the map can be transcendental functions. With additional precautions it would even be possible to widen the hypotheses of Section 1, for example to piecewise smooth (e.g. piecewise linear) functions, for which the concepts of focal point and prefocal set remain the same.

## Examples

They are given in First Subpage.

## References

The papers quoted in this main page concern only the theory; they outline the tools for a study of the dynamic effects due to a vanishing denominator. The concept of focal point was first given in Mira [1981].
In the algebraic geometry framework, an equivalent formulation, related to the creation of a loop, is given in Harris [1992]. If previous references from other authors about this theory are found, they will be added. The references given in First Subpage are related to applications to particular maps, using or not the concepts of focal point and prefocal set.

• Bischi, G.I., L. Gardini and C. Mira [1999]. "Maps with denominator. Part 1: some generic properties", International Journal of Bifurcation and Chaos, 9(1), 119-153.

• Bischi, G.I., L. Gardini and C. Mira [2000]. "Unbounded sets of attraction", International Journal of Bifurcation and Chaos, 10(9), 1437-1470.

• Bischi, G.I., L. Gardini and C. Mira [2003]. "Plane maps with denominator. Part II: noninvertible maps with simple focal points", International Journal of Bifurcation and Chaos, 13(8), 2253-2277.

• Bischi, G.I., L. Gardini and C. Mira [2005]. "Plane maps with denominator. Part III: non simple focal points and related bifurcations", International Journal of Bifurcation and Chaos, 15(2), 451-496.

• Harris, J. [1992]. Algebraic Geometry: A First Course. Springer-Verlag, New York.

• Mira, C. [1981]. "Singularités non classiques d'une récurrence et d'une équation différentielle". Comptes Rendus Acad. Sci. Paris, Série I, 292, 146-151.

• Mira, C., L. Gardini, A. Barugola and J.C. Cathala [1996]. Chaotic Dynamics in Two-Dimensional Noninvertible Maps. World Scientific, Singapore, Series on Nonlinear Science, Series A, vol. 20.

Internal references
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423017501831055, "perplexity": 290.1432399441529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141201836.36/warc/CC-MAIN-20201129153900-20201129183900-00614.warc.gz"}
# Proofs Regarding Nested Intervals

Recall from The Nested Intervals Theorem page the definition of a sequence of nested intervals $I_n = [a_n, b_n]$, $n \in \mathbb{N}$, such that $I_1 \supseteq I_2 \supseteq I_3 \supseteq ... \supseteq I_n \supseteq I_{n+1} \supseteq ...$. Also recall that if $I_n = [a_n, b_n]$ is a sequence of closed bounded nested intervals, $A = \{ a_n : n \in \mathbb{N} \}$, $B = \{ b_n : n \in \mathbb{N} \}$, $\xi = \sup A$ and $\eta = \inf B$, then $[\xi, \eta] = \bigcap_{n=1}^{\infty} I_n$. We will now look at some proofs involving nested intervals.

## Example 1

Using the Archimedean Property, show that if $I_n = [0, \frac{1}{n}]$ then $\bigcap_{n=1}^{\infty} I_n = \{ 0 \}$.

It should be clear that $0 \in I_n = [0, \frac{1}{n}]$ for all $n \in \mathbb{N}$, since $0 ≤ 0 ≤ \frac{1}{n}$ for all $n \in \mathbb{N}$. It should also be clear that any number less than $0$ cannot be in the intersection of these intervals, since if $x < 0$ then $x \not\in I_1 = [0, 1]$. Suppose that $x > 0$ is such that $x \in I_n$ for all $n \in \mathbb{N}$. By one of the Archimedean Property corollaries, since $x > 0$ there exists an $n_x \in \mathbb{N}$ such that $0 < \frac{1}{n_x} < x$, so $x \not\in [0, \frac{1}{n_x}]$, which contradicts our assumption that $x \in I_n$ for all $n \in \mathbb{N}$. Therefore the only number in all of these intervals is $0$.

## Example 2

Prove that if $I_n = [n, \infty)$ then $\bigcap_{n=1}^{\infty} I_n = \emptyset$.

We first note that $I_1 = [1, \infty)$, $I_2 = [2, \infty)$, … is a sequence of nested intervals. Assume that $x \in \bigcap_{n=1}^{\infty} I_n$, in other words, that this set is nonempty. Then for all $n \in \mathbb{N}$ we have that $n ≤ x < \infty$, which implies that $x$ is an upper bound of the set of natural numbers.
We know by one of the Archimedean properties that for every $x \in \mathbb{R}$ there exists a natural number $n_x \in \mathbb{N}$ such that $n_x - 1 ≤ x < n_x$. This implies that $x \not\in I_{n_x} = [n_x, \infty)$, which contradicts the assumption that $x \in \bigcap_{n=1}^{\infty} I_n$. Therefore $\bigcap_{n=1}^{\infty} I_n = \emptyset$.

## Example 3

Prove that if $I_n = [2, 3 + \frac{1}{n}]$ then $\bigcap_{n=1}^{\infty} I_n = [2, 3]$.

We note that $I_1 = [2, 4]$, $I_2 = [2, 3.5]$, … is a sequence of closed bounded nested intervals. Let $A := \{ 2 \}$, let $B := \{ 3 + \frac{1}{n} : n \in \mathbb{N} \}$, and let $\xi = \sup A$ and $\eta = \inf B$. By the Nested Intervals Theorem we have that $[\xi, \eta] = \bigcap_{n=1}^{\infty} I_n$. To prove that $[2, 3] = \bigcap_{n=1}^{\infty} I_n$ we need to show that $\xi = 2$ and that $\eta = 3$.

First we show that $\sup A = \xi = 2$. The set $A = \{ 2 \}$ contains only one element, and so $\sup A = \xi = 2$.

We now show that $\inf B = \eta = 3$. We must first show that $3$ is a lower bound of $B$. We note that $0 < \frac{1}{n}$ for all $n \in \mathbb{N}$. Adding $3$ to both sides of this inequality, we get that $3 < 3 + \frac{1}{n}$ for all $n \in \mathbb{N}$. Therefore $3$ is a lower bound of the set $B$. We now want to show that $3$ is the greatest lower bound, i.e., that for any $w$ with $3 < w$ there exists a $b \in B$ such that $b < w$. Our first case is when $4 < w$: then choose $b = 3 + \frac{1}{n}$ for any $n \in \mathbb{N}$, so that $b ≤ 4 < w$. Our second case is when $3 < w ≤ 4$. We want to show that there exists $b = 3 + \frac{1}{n}$ for some $n \in \mathbb{N}$ such that $b < w$; that is, we want to show that $3 < b = 3 + \frac{1}{n} < w ≤ 4$, which is equivalent to showing that $0 < \frac{1}{n} < w - 3 ≤ 1$.
Since $w - 3 > 0$ (because $3 < w ≤ 4$), such a natural number $n \in \mathbb{N}$ exists by one of the Archimedean properties, and so such a $b \in B$ exists with $b < w$. Therefore $\inf B = \eta = 3$. By the Nested Intervals Theorem, $[2, 3] = \bigcap_{n=1}^{\infty} I_n$.

## Example 4

Prove that if $I_n = \left(1, 1 + \frac{1}{n} \right)$ then $\bigcap_{n=1}^{\infty} I_n = \emptyset$.

We note that this sequence of intervals is nested, since $I_1 = (1, 2)$, $I_2 = \left(1, \frac{3}{2} \right)$, … Now assume that $x \in \bigcap_{n=1}^{\infty} I_n$, in other words, that the intersection is nonempty. Then it follows that $1 < x < 1 + \frac{1}{n}$ for all $n \in \mathbb{N}$, which is equivalent to $0 < x - 1 < \frac{1}{n}$ for all $n \in \mathbb{N}$. Since $x - 1 > 0$, by one of the Archimedean properties there exists a natural number $n_x \in \mathbb{N}$ such that $0 < \frac{1}{n_x} < x - 1$, which implies that $1 < 1 + \frac{1}{n_x} < x$. This is a contradiction, since then $x \not\in \left(1, 1 + \frac{1}{n_x} \right)$. So our assumption that $x \in \bigcap_{n=1}^{\infty} I_n$ was wrong, and therefore $\bigcap_{n=1}^{\infty} I_n = \emptyset$.
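The Archimedean arguments of Examples 1 and 4 can be mirrored numerically: given $x > 0$, one concrete witness is $n_x = \lfloor 1/x \rfloor + 1$, which satisfies $\frac{1}{n_x} < x$ and therefore pushes $x$ out of the corresponding interval. The sketch below is only an illustration of the proofs, not a substitute for them.

```python
import math

# Numeric companion to Examples 1 and 4: for any x > 0 the Archimedean
# property yields n_x with 1/n_x < x, so x eventually falls out of the
# nested intervals.  floor(1/x) + 1 is one concrete choice of n_x.

def witness(x):
    """Return n_x with 0 < 1/n_x < x, for x > 0."""
    n = math.floor(1 / x) + 1
    assert 1 / n < x
    return n

# x is excluded from I_{n_x} = [0, 1/n_x] in Example 1 ...
for x in (0.3, 0.01, 2.5):
    n = witness(x)
    assert not (0 <= x <= 1 / n)

# ... and 1 + x is excluded from I_{n_x} = (1, 1 + 1/n_x) in Example 4.
x = 0.3
n = witness(x)
assert not (1 < 1 + x < 1 + 1 / n)
```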
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970636367797852, "perplexity": 32.63727694376435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00107.warc.gz"}
# The following 6 questions pertain to this figure and associated concepts

## Question

The following 6 questions pertain to this figure and associated concepts. It shows a quantum particle in a box (the box is defined by the red boundary). This figure shows the situation at t = 0, when there is a probability of exactly 1.0 (100%) that the particle is in the left half of the box. The box has a potential energy "barrier" running vertically down its middle (represented by the white vertical line). Using Schrödinger's equation, the time-evolution of this situation can be calculated and examined.
##### Homework: Sec 8.1 — Score: 0 of 1 pt — 3 of 6 (1 complete)

8.1.11 Evaluate the integral below using any appropriate algebraic method or trigonometric identity (type an exact answer):

∫ from −(3 + √3)/6 to (√3 − 3)/6 of 8/(1 + (2x + 1)²) dx

##### [0/1 Points] DETAILS — PREVIOUS ANSWERS

Convert the complex number from polar to rectangular form (round your answer to two decimal places): cos(24°) + i sin(24°). Previous answer: z = 6.37 − 2.87i. Recall that a complex number in polar form is represented as r(cos(θ) + i sin(θ)). What are the values of r and θ?

[-/1 Points] DETAILS — Convert the complex number from polar to rectangular form: cos(240°) + i sin(240°)

[-/1 Points] DETAILS — Find z₁z₂ in polar form: z₁ = … (cos(…) + i sin(…)); z₂ = 2(cos(…) + i sin(…))

[-/1 Points] DETAILS — Find … in polar form.

##### Maximize the profit P = 18x + 45y subject to x + 2y ≤ 16, 5x + 3y ≤ 45, x ≥ 0, y ≥ 0. The maximum profit P = ?

##### During a firework display, a shell is shot into the air with an initial speed of 70.0 m/s at an angle of 75.0 degrees above the horizontal. (a) Calculate the height at which the shell explodes. (b) How much time passed between the launch of the shell and the explosion? (c) What is the horizontal displacement of the shell when it explodes?
##### Because the P-value is less than the significance level 0.05, there is sufficient evidence to support the claim that there is a linear correlation between lemon imports and crash fatality rates for a significance level of α = 0.05.

Do the results suggest that imported lemons cause car fatalities?

- The results suggest that an increase in imported lemons causes an increase in car fatality rates.
- The results suggest that an increase in imported lemons causes car fatality rates to remain the same.
- The results do not sugge…

##### If X, Y, Z are random variables having means 5, −2, and 4, variances 1, 2, and 3, and cov(X, Y) = −2, cov(Y, Z) = 1, cov(Z, X) = −1, then for W = 2X − 3Y + 4Z and U = 3X − 2Y + 5Z, find cov(W, U).

##### The formula $c=\frac{4 t}{t^{2}+1}$ gives the concentration $c$ (in milligrams per liter) of a certain dosage of medication in a patient's bloodstream $t$ hours after the medication is administered. Suppose the patient received the medication at noon.
Find the concentration of medication in his blood at the following times later that afternoon. (PICTURE NOT COPY)

##### Problem 2 (5 points). Solve the initial value problem by the method of Laplace transform. No credit if a different method is used. If you are using convolutions, ALL integrals should be computed explicitly.

y′ + 4y = 1, y(0) = 1

##### In Fig. $22-61,$ an electron is shot at an initial speed of $v_{0}=2.00 \times 10^{6} \mathrm{m} / \mathrm{s},$ at angle $\theta_{0}=40.0^{\circ}$ from an $x$ axis. It moves through a uniform electric field $\vec{E}=(5.00 \mathrm{N} / \mathrm{C}) \hat{\mathrm{j}}$. A screen for detecting electrons is positioned parallel to the $y$ axis, at distance $x=3.00 \mathrm{m}.$ In unit-vector notation, …

##### The waiting time X in minutes for an order at McDonald's has an exponential distribution with parameter λ = 0.25. Answer (a) AND (b). (a) Find the probability that the waiting time for an order exceeds 7 minutes. (b) Find the probability that the waiting time for an order is between 2 and 8 minutes, inclusive.
##### Find the indefinite integral: ∫ 16 e^{8x} dx

##### Suppose IQs are normally distributed with a mean of 100 and a standard deviation of 16. What is the probability that a randomly chosen individual has an IQ that is between 90 and 120?

##### Math 112 — Written HW #5. Due in CANVAS: Tuesday, 21 May, @ 11.59pm

#5A) Solving Trig Equations: sin(θ) = y, arcsin(y) = θ

a) Let the angle "A" on the unit circle have measure θ rads. What is the measure of angle D in terms of θ? What is the measure of angle B in terms of θ? What is the measure of angle C in terms of θ?

b) What do we know about the relationship between cos(B) and cos(C)? Is there another pair of angles with the same relationship? If so, what angles are they?

##### An Ethernet cable is 3.85 m long and has a mass of 0.210 kg. A transverse pulse is produced by plucking one end of the taut cable.
The pulse makes four trips up and down along the length of the cable in 0.715 s. What is the tension in the cable?

##### Find the area of the surface: the part of the hyperbolic paraboloid z = y² − x² that lies between the cylinders x² + y² = 9 and x² + y² = 16.
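The exponential waiting-time question above has a closed-form answer via the survival function and the CDF. A minimal sketch of the computation (this is my own illustration, not a posted solution from the site):

```python
import math

lam = 0.25  # rate parameter of the exponential distribution (per minute)

# (a) P(X > 7): survival function S(x) = exp(-lam * x)
p_a = math.exp(-lam * 7)

# (b) P(2 <= X <= 8) = F(8) - F(2), with CDF F(x) = 1 - exp(-lam * x)
p_b = math.exp(-lam * 2) - math.exp(-lam * 8)

print(f"P(X > 7)       = {p_a:.4f}")  # ~0.1738
print(f"P(2 <= X <= 8) = {p_b:.4f}")  # ~0.4712
```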
https://www.mathalino.com/reviewer/mechanics-and-strength-of-materials/solution-to-problem-406-shear-and-moment-diagrams
# Solution to Problem 406 | Shear and Moment Diagrams

Osama Dar
Can someone please explain why, in computing the reactions, we multiply 9*60*18 and 3*60*18, when we could have divided the UDL into two sections of 12 ft and 6 ft respectively and solved for the moments accordingly?

Alexander
There is no need to divide the UDL into two sections, because it is continuous; taking it as a single resultant is simpler. You can also do what you suggest and arrive at the same result, but from my own perspective it is unnecessary. This page belongs to Strength of Materials. If you need to improve your skill in finding the reactions of a simple beam with uniform and concentrated loads, I suggest you browse the Engineering Mechanics section. That will bring you back to the basics of finding simple reactions.
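Alexander's point, that a continuous UDL can be taken as one resultant or split into segments with identical total moment, is easy to verify numerically. This sketch assumes the numbers implied by the comment (a 60 lb/ft UDL over 18 ft, split into 12 ft and 6 ft segments); the actual geometry of Problem 406 may differ:

```python
w = 60.0   # assumed UDL intensity, lb/ft
L = 18.0   # assumed loaded length, ft

# Whole UDL as one resultant: total load w*L acting at its centroid,
# L/2 from the start of the loaded region
m_whole = (w * L) * (L / 2)

# UDL split into 12 ft and 6 ft segments, each replaced by its own
# resultant at its own centroid (6 ft and 12 + 3 = 15 ft from the start)
m_split = (w * 12) * 6 + (w * 6) * 15

print(m_whole, m_split)  # 9720.0 9720.0 -- identical, as expected
```

Splitting only changes the bookkeeping; the sum of (segment load × segment centroid arm) always equals (total load × overall centroid arm).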
https://www.physicsforums.com/threads/energy-balance-in-fusion-and-fission.518702/
# Energy balance in fusion and fission

1. Aug 2, 2011

### Dan Tibbets

I have argued with others on the Talk-Polywell forum about the nuclear binding energy per nucleon versus the total nuclear binding energy per nucleus graphs (the latter being the NBE/nucleon times the atomic mass number). I have argued that the first graph, binding energy per nucleon, which peaks at 62Ni, is the appropriate tool for predicting the exothermic or endothermic nature of a reaction, citing referenced statements that, while the total binding energy increases continuously with larger nuclei, it is the binding energy per nucleon that determines the energy balance. I have given multiple references like this one: http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html, along with arguments about stellar evolution and the need for some mechanism by which both fusion and fission can be exothermic under the appropriate conditions. Using the total binding energy per nucleus graph as the predictor, I believe this is impossible; you need the minimal-potential-energy, most stable nucleus represented by 62Ni as a tipping point. Despite this I am ignored and/or belittled.

Specifically, the discussion started when I claimed that the reaction 62Ni + proton ---> 63Cu is endothermic, and thus the Rossi cold-fusion claims are impossible based on this basic reality alone. Presuming I am correct, I doubt further arguments will help (though if anyone knows of a layman's-level presentation that spells this out clearly, it might help). The semi-empirical mass formula and the opposing natures of the strong and electromagnetic forces have not helped. What may help is if someone with good physics credentials can briefly state their opinion on the issue, and specifically on the above reaction. It would be more difficult to ignore the conclusion if stated by an authority.

Thanks, Dan Tibbets

2. Aug 2, 2011

### PAllen

It does appear to me that this reaction would be slightly exothermic. Practical realization is a separate issue.
For your example, the mass of copper-63 is 62.9295975 u, the mass of nickel-62 is 61.9283451 u, and the mass of a proton is 1.007276466812 u. Thus the reaction releases a small amount of energy: the energy equivalent of about 0.006 amu. The obstacle to this happening in a cold environment is not that it is endothermic, but that the proton can't get close enough unless it is moving fast, i.e. at high temperature. Deuteron-deuteron fusion is certainly exothermic, but that doesn't mean it happens via cold fusion.

3. Aug 2, 2011

### Dan Tibbets

Shoot, I just lost a long reply that explained things completely. Oh well, I will try again with a briefer response.

The total binding energy is derived from the mass deficit. There is no consideration of the nature of the energy that makes up this missing mass. There are two primary contributions: the attractive strong-force binding energy and the repulsive electromagnetic binding energy. They both make up the missing mass. As the bound nucleus grows in nucleon count, the attractive strong force saturates (its per-nucleon increment approaches zero); it always increases, but in smaller increments. At the same time the electromagnetic repulsion also grows, but it does not saturate as fast because it has a greater range than the strong force. Because of this there is a point where the opposing forces balance out. At this point the maximum differential in attractive force has been reached; any further growth will increase the repulsion more than the attraction. This point is where the nucleus is the most compact, densest, most stable, and has the lowest potential energy. This point is 62Ni. Any nuclei larger or smaller than this are less stable and have more potential energy. Remember that minimal potential energy implies that the maximum of kinetic energy has been extracted. Going towards this point yields energy, while going away requires energy input.
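PAllen's mass-deficit arithmetic can be checked directly. A sketch using the masses quoted above (note it mixes atomic masses with a bare-proton mass, as the post does, so the electron bookkeeping is only approximate):

```python
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

m_ni62 = 61.9283451        # u, nickel-62 (as quoted above)
m_p    = 1.007276466812    # u, proton (as quoted above)
m_cu63 = 62.9295975        # u, copper-63 (as quoted above)

# Q-value of Ni-62 + p -> Cu-63: mass in minus mass out
dm = (m_ni62 + m_p) - m_cu63
q  = dm * U_TO_MEV

print(f"mass deficit = {dm:.4f} u -> Q = {q:.2f} MeV (positive: exothermic)")
```

The deficit comes out near 0.0060 u, i.e. a bit over 5 MeV released, matching the ".006 amu" figure in the post.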
This is why some graphs of the nuclear binding energy per nucleon (like in the provided link) are labeled with arrows that indicate the direction of exothermic energy flow. If you use the total binding energy per nucleus (the total binding energy, or missing mass, perhaps better called the total bound energy), you are ignoring the fact that this bound energy is actually made up of two opposing forces, and the net effect is the balance between them. The only time the total binding energy would apply is when you tear all of the nucleons off: all or nothing. When you are considering pulling off (or adding) one or a few nucleons, the balance is determined by the binding energy of those nucleons. Using the total binding energy assumes the binding energy per nucleon is a constant, so that the energy balance is due only to the number of nucleons. This is not the case.

It is like traveling from the seashore to a destination at a higher elevation, with a straight, evenly sloped road in between. But, as illustrated by the binding-energy-per-nucleon graph, there is a mountain in the way. You spend energy as you climb the mountain and get it back as you descend: opposing fusion and fission energy yields (or costs, if you reverse directions) that are determined by the height and location of the mountain peak, not by the starting and ending elevations alone. Yes, you can still gain considerable energy with an end product of a heavier-than-62Ni nucleus in comparison to a collection of unbound nuclei like hydrogen, but some of the steps yield energy while others cost energy. This is obvious in the binding-energy-per-nucleon graph. How could you explain exothermic fission if you assume fusion is always exothermic, as implied by using the ever-increasing mass deficit / total binding energy? Why are nuclei limited in size?
If you always get energy out by combining nucleons (lowering the potential energy of the product), why are there not stable nuclei with 5,000 or 78,000,000 nucleons (neutron stars are a different issue)? The universe as we know it, stellar evolution, and our existence would not be possible without a reasonable point where this lowest-potential, most stable nucleus condition exists as a tipping point. This has been measured as occurring at 62Ni.

PS: This is shorter than the original version :)

Dan Tibbets

4. Aug 2, 2011

### Dan Tibbets

PAllen, concerning probability issues, they are circumvented by Rossi's claim of a super-secret catalyst that eliminates the Coulomb barrier. As for the question of cold fusion despite the conventional Coulomb barrier, whether real or not, at least when discussing deuterium-deuterium fusion by atypical pathways there is a reasonable possibility of exothermic reactions. But my contention is that Rossi's claim of exothermic fusion starting from 62Ni is impossible from an energy-balance point of view. If he was talking about some other isotope of nickel, there might be some wiggle room, but not with 62Ni.

Dan Tibbets

5. Aug 2, 2011

### PAllen

Sorry, but you are not correct. In some cases, with minimal (or even no) energy release on absorbing a particle, a nucleus would fission (releasing more energy), and this would be treated as fission rather than fusion. However, in this case Cu-63 is stable. If you shoot a beam of protons of the right energy at a Ni-62 target, you will get Cu-63 with release of energy. Fission is exothermic when the mass of the products is less than the mass of the starting nucleus. In this case, there is nothing for Cu-63 to fission into that is exothermic. For something higher up in atomic weight, there are many possible splits that are exothermic; but not for Cu-63.

6.
Aug 2, 2011

### PAllen

Note that nucleosynthesis stops not really at iron-56 but at nickel-56, which is what is produced in the chain of helium-4 captures from silicon. The next element in the chain would be zinc-60, but this reaction is endothermic. If the nickel-56 had time to decay into iron-56, then some additional fusion products would be possible. However, collapse + supernova happens way before the nickel-56 can decay. The nickel-56 blown off then decays into iron-56. Heavier elements are formed in the supernova by neutron capture. If all of this didn't happen so fast, you could get e.g. nickel-60 by exothermic alpha capture from iron-56.

7. Aug 2, 2011

### Dan Tibbets

There is a flaw in your reasoning. When you say 63Cu is stable, do you mean it is more stable than 62Ni, despite multiple statements to the contrary? Whether 63Cu is relatively stable, with a made-up half-life of 100 billion years, does that mean it is more stable than 62Ni with a half-life of 500 billion years? (These are made-up numbers to illustrate my perspective.) A 63Cu nucleus may be relatively stable; the key word here is "relatively", and the linking of stability to the lowest possible potential-energy state. Do you accept that 62Ni is reported as the most stable, most densely packed, and lowest-energy nucleus possible? Again, this means this nucleus has the lowest potential energy possible. Just because 63Cu has a very long half-life does not mean that 63Cu has a lower potential energy than 62Ni. The Gibbs free energy (or its nuclear-physics equivalent) can apply to the likelihood of a decay. That a nucleus is very stable, despite not yet having decayed into the absolute lowest potential-energy state possible, does not imply that this nucleus is the lowest potential-energy state.

Consider this: a mole of 63Cu does have more total binding energy than a mole of 62Ni (ignoring the opposing natures of the makeup of this binding energy).
But 63 grams of 62Ni has more total binding energy than 63 grams of 63Cu. This is a back-door approach to saying that the binding energy per nucleon for a specific isotope is the important measure, despite the confusion of a greater mass deficit in the heavier isotope. This is consistent (I think) with the equivalent units: the binding-energy-per-nucleon graph is proportional to the binding energy per unit of mass. On a molar basis, 62 grams of 62Ni at 62 g/mole has a higher binding energy per gram than 63 grams of 63Cu at 63 g/mole; the grams cancel out. The proton brought in does not have any binding energy, so on the 62Ni side all of the binding energy per unit of weight resides in the 62Ni. Do this comparison with other isotopes on both sides of the 62Ni peak: on a weight basis, 62Ni has the most total binding energy per unit of mass. This is another perspective, but I think it is equivalent to the binding-energy-per-nucleon graph, where 62Ni is the peak. It again illustrates that 62Ni is the most densely packed form of nucleus that can exist before entering the degenerate-matter realm.

This quote sums it up; it is repeated in various sources. Note that 56Fe is used. There has been some evolution in the thinking about which is the most stable, densest isotope; it depends on some definitions. 56Fe is more abundant than 56Ni as an endpoint for stellar exothermic fusion reactions, but this is because of the pathways involved in stellar nucleosynthesis and possibly other issues.

http://en.wikipedia.org/wiki/Nuclear_binding_energy

"Once iron is reached—a nucleus with 26 protons—this process no longer gains energy. In even heavier nuclei, we find energy is lost, not gained by adding protons. Overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than what is released by the nuclear attraction (effective mainly between close neighbors).
Energy could actually be gained, however, by breaking apart nuclei heavier than iron."

If you wish to see a number of references about the nuclear-binding-energy-per-nucleon graphs, look here. Click on the graphs, then close the graph image to link to the articles themselves. Even this has not been convincing, thus my excursion to a different forum to hopefully obtain a different perspective. Unfortunately, thus far the entrenched positions have not been breached.

PS: As I fumblingly tried to describe above, your comparison is between 62 grams of 62Ni and 63 grams of 63Cu. The differences in the binding energy per nucleon of these neighbors in this region are not enough to overcome this mass advantage, but when you make molar comparisons, which is an alternate label for the binding-energy-per-nucleon y-axis, it becomes more apparent.

Dan Tibbets

Last edited: Aug 2, 2011

8. Aug 2, 2011

### PAllen

Copper-63 is stable, period. There is nothing it can decay to with release of energy. I know all about binding energy per nucleon, mass per nucleon, etc. All of that is irrelevant to the energy balance of a particular reaction in the abstract (e.g. where you can posit a beam of protons of the right energy hitting a target). Binding energy per nucleon says a lot about what processes will occur at any measurable rate in a plasma or stellar interior, but this is separate from whether a particular reaction is exothermic or endothermic. Your aim was to argue that proton + nickel-62 -> copper-63 requires net input of energy. This is simply false. Copper-63 is a lower-energy state than nickel-62 *plus a proton*. If a proton has enough energy to reach the nickel nucleus, the reaction will proceed with release of energy (gamma plus KE of the copper-63 will be greater than the total KE of the proton and nickel-62).

9. Aug 3, 2011

### JeffKoch

You might get more responses if you clarify, very briefly, your question (if you have a question) or the point you wish to discuss.
Not many people are going to wade through all the prose, and I confess you lost me several posts ago.

10. Aug 3, 2011

### PAllen

One way to see that the statement "Ni-62 + proton -> Cu-63 is exothermic" is not inconsistent with the statement that you would essentially never see this as a fusion reaction (while you certainly could if you shot a proton beam at a nickel target) is the following observation: in any environment with lots of Ni-62 and protons at high temperature and density, the reaction chain of 4 protons to He-4 is energetically favored by a factor of about 4 and does not have a huge Coulomb barrier. Thus in any fusion-favoring environment, the nickel reaction would be insignificant, probably undetectable.

11. Aug 3, 2011

### Dan Tibbets

PAllen, the claim that 63Cu has higher binding energy per nucleus is valid, but that is not the question. The question is whether this is a proper measure of energy flow. My contention is that it is not; the relevant measure is the binding energy per nucleon. In my previous post I included two links. The latter states that the binding energy per nucleon is proportional to the binding energy per mole: just multiply by Avogadro's number. Unless you contest this contention, it is obvious that on a molar basis the binding energy per mole is greater for 62Ni.

As an exercise, using your analysis, compare any heavier nucleus you like to 62Ni. There will always be excess total binding energy compared to 62Ni, which you interpret as exothermic. Try U-235: it has a lot more total binding energy per nucleus. Does building up to it release energy overall and/or in the intermediate steps? If there are intermediate steps where things reverse, where is it (hint: it is 62Ni)? If you get exothermic energy while building all the way to uranium, how can any of these reactions reversed (fission) release energy?
The total binding energy is the sum of the absolute values of the opposing energies, but the stability/potential energy of the nucleus is the difference between these two opposing forces.

PS: Are you claiming the half-life for 63Cu decay is infinity? The only nucleus that might fit this is hydrogen (a proton). Even that has been challenged by some theories, but experiments have failed to show evidence of proton decay. In any case, "most stable" in this context refers to the BOUND nucleus with the least potential energy.

Dan Tibbets

Last edited: Aug 3, 2011

12. Aug 3, 2011

### Dan Tibbets

JeffKoch, the basic question is whether the binding energy per nucleon, which peaks at 62Ni (or 56Fe in some considerations), is the appropriate way to predict the direction of energy flow (endothermic or exothermic), and the quantity of energy absorbed or emitted, when nucleons are added to or removed from near or far neighbors on the chart. A list of representations of this is provided by this link. This link provides the data in tabular form: http://www.nndc.bnl.gov/amdc/masstables/Ame2003/mass.mas03 [Broken]

Dan Tibbets

Last edited by a moderator: May 5, 2017

13. Aug 3, 2011

### PAllen

You keep ignoring the fact that to ask whether the reaction is exothermic you need to compare the complete starting and ending states. You need to compare whether Ni-62 plus a proton is a higher or lower energy state than Cu-63. It is trivially true that Cu-63 is energetically favored over Ni-62 plus a proton.

This is simply false, as unfortunately many of your statements are. Consider one typical U-235 fission reaction: n + U-235 -> Kr-92 + Ba-141 + 3n. The mass on the left is 236.052595 u; the mass on the right is 235.86656 u. Thus the reaction proceeds exothermically. The same method answers all such cases.

To decay, there must be a decay path. There is none for Cu-63 that can occur without input of energy [not just an energy barrier: there is no combination of products with lower total energy than Cu-63].
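The fission example can be checked the same way as the nickel reaction earlier; a sketch using the two mass totals PAllen quotes:

```python
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

# n + U-235 -> Kr-92 + Ba-141 + 3n, totals as quoted above
mass_in  = 236.052595   # u: U-235 plus one neutron
mass_out = 235.86656    # u: Kr-92 + Ba-141 + three neutrons

dm = mass_in - mass_out
q  = dm * U_TO_MEV

print(f"dm = {dm:.6f} u -> Q = {q:.1f} MeV (positive: exothermic)")
```

The mass difference of about 0.186 u corresponds to roughly 170 MeV released, the familiar order of magnitude for a single fission event.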
One does not normally bring hypothetical proton decay into discussions of the half-lives of nuclei.

Last edited: Aug 3, 2011

14. Aug 3, 2011

### PAllen

Let me add a caveat to this. An isolated Cu-63 atom is strictly stable. However, a reasonable mass of Cu-63 does have a lower-energy configuration, so over essentially infinite time it could rearrange itself, if only by quantum fluctuations. If you will, the reaction would be: 62 Cu-63 -> 63 Ni-62 + 34 e+ + 34 neutrinos (34 protons converting to neutrons). This would release energy. However, this has no bearing on anything else I've said.

15. Aug 3, 2011

### Drakkith

Staff Emeritus

Isn't the key here that it would take MORE energy to cause the hydrogen to fuse with the nickel than the reaction itself actually releases? Trying to shove a proton towards 28 other protons is going to take a good amount of energy. I think it is obvious that all reactions where a proton fuses with a nucleus release energy if you only look at the mass differences, but the process of getting it to fuse is where all that energy goes. So while a proton + nickel fusion would release energy, it would not release more energy than it takes to cause the fusion in the first place. Or perhaps I am mistaken?

16. Aug 3, 2011

### PAllen

All fusion involves overcoming Coulomb repulsion; that's why high temperature and density are required. It's a matter of degree: for nickel you have an enormously higher barrier than for light elements. However, if you have a proton with enough energy to get close to the nickel nucleus, it can react to produce copper, and the reaction products will return all of the proton's kinetic energy plus more. That's all it means to say the reaction is exothermic. Now, to ask if it is a plausible source of energy, you need many other conditions, which will defeat this reaction for practical use. First, the temperature needed would be absurd, even compared to other fusion reactions.
Second, as I mentioned in an earlier post, this reaction would compete with other reactions that are so favored, energetically and by barrier height, that for practical purposes this reaction wouldn't occur (but not because it is endothermic; that is simply false). Note that if you continue hitting a target with protons of appropriate energy, you will reach a point where, after absorbing a proton, fission will occur. But this is way beyond copper.

17. Aug 3, 2011

### Drakkith

Staff Emeritus

Are you sure that the energy produced would actually be more than the energy required to cause the fusion? I'm asking because a proton in a nucleus is always less massive than a proton not in a nucleus, so the whole missing-mass thing doesn't really apply. For example, adding 2 protons to U-238 turns it into Pu-240, but the mass difference between the two is only 2.0030309 u, whereas 2 protons are 2.014552933 u.

18. Aug 3, 2011

### PAllen

I never said that the energy released by the absorption itself would be greater than the proton KE needed for approach, only that if the proton had enough KE to get close, the reaction would occur with release of energy. Spontaneous fission or other rapid decay may then occur (for your example, not for Cu-63). However, I was mistaken about one key point. The momentum of the resulting nucleus will initially be equal to the momentum of the incoming proton (in the nucleus rest frame); then a gamma ray will be released, carrying away the energy from the mass deficit. However, it is not necessarily true that all of the kinetic energy of the proton will be returned as nucleus KE plus gamma ray. This cannot be determined just from mass and binding-energy comparisons (or binding energy per nucleon). To determine whether the reaction is endothermic in this systemic sense, you need to know the energy required to overcome the Coulomb barrier and compare it to the released energy (from absorption). I don't know a figure for this offhand.
For one thing, it depends on the distance at which nuclear attraction kicks in (given that distance, the barrier is straightforward to calculate), but I don't know what this distance is. Note that if the crank fusion process the OP is trying to refute (good!) has a magical way to overcome the Coulomb barrier (which they claim), then the above issue is irrelevant (and the reaction would be guaranteed to be a net energy gain).

19. Aug 3, 2011

### Drakkith
Staff Emeritus

Isn't that the whole point of this thread? Determining whether or not the reaction would release net energy? That would have to include the KE of the proton.

20. Aug 3, 2011

### PAllen

Yes, of course. The thread got sidetracked by two things:

1) The OP's focus on binding energy per nucleon, which is not directly relevant to the energy balance of a particular reaction.

2) My focusing on whether the reaction would be energetically favored once the proton was close to the nucleus, e.g. whether a proton beam would produce this reaction. I forgot to consider how inelastic this process is, and that very little of the initial proton KE might be returned.

So, yes, the real question comes back to the Coulomb barrier and related issues:

1) The Coulomb barrier should make the crank process impossible on its face.

2) If you had a plasma of Ni-62 and protons at sufficient temperature for protons to be absorbed, then:

a) Independent of the energy balance, the reaction would be very rare, because proton -> ... -> He-4 fusion would be enormously favored, energetically and by cross section (largely due to its enormously lower Coulomb barrier).

b) If a few such reactions occurred, they might actually cool the plasma (this is what I mistakenly thought wouldn't happen). This is due entirely to the difference between the energy needed to overcome the Coulomb barrier and the energy released by absorption.
3) If the crank mechanism could magically bypass the Coulomb barrier (which they claim), making the reaction occur at low temperatures, then it also follows that the reaction would necessarily be exothermic. The absorption still releases the same energy; if you haven't expended energy to overcome the Coulomb barrier, you get all this energy for free. Of course, this is nonsense.

Last edited: Aug 3, 2011
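The energy balance discussed above can be sketched with a back-of-envelope calculation (not part of the thread). The atomic masses below are approximate values from standard mass tables, and the barrier uses the common touching-radius estimate r ~ r0 * (A1^(1/3) + A2^(1/3)) with r0 ~ 1.2 fm; all of these figures are assumptions for illustration only.

```python
# Back-of-envelope energy balance for p + Ni-62 -> Cu-63 (illustration only).
# Atomic masses in u are approximate values from standard mass tables.
M_NI62 = 61.928345    # Ni-62 (approx.)
M_H1 = 1.007825       # H-1 (approx.)
M_CU63 = 62.929598    # Cu-63 (approx.)
U_TO_MEV = 931.494    # MeV per atomic mass unit

# Q-value of the absorption: positive means energy is released once
# the proton has actually been absorbed.
q_value = (M_NI62 + M_H1 - M_CU63) * U_TO_MEV

# Crude Coulomb barrier: V = Z1*Z2 * 1.44 MeV*fm / r, with
# r ~ r0 * (A1^(1/3) + A2^(1/3)) and r0 ~ 1.2 fm (an assumed touching radius).
r_fm = 1.2 * (62 ** (1 / 3) + 1 ** (1 / 3))
barrier_mev = 1.44 * 28 * 1 / r_fm

print(f"Q-value ~ {q_value:.2f} MeV, Coulomb barrier ~ {barrier_mev:.2f} MeV")
```

With these rough numbers the absorption releases on the order of 6 MeV while the barrier is of the same order, which is exactly why the systemic energy balance PAllen describes cannot be settled from mass comparisons alone.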
https://brilliant.org/practice/logarithmic-inequalities-same-base/
Algebra

# Logarithmic Inequalities - Same Base

Solve the logarithmic inequality $\log_3(x-9)+\log_3(x-7)<1.$

How many integer solutions does the following inequality have: $\log_{3}x^{9}+\left( \log_{3}x \right)^2 \le 10?$

If the solution to the inequality $(\log_{5} x)^2 < 5 - \log_{5} x^{4}$ is $a < x < b$, what is the value of $\frac{1}{ab}$?

Solve the logarithmic inequality $\log_2 (15x-17) \ge \log_2 (x+16).$

How many positive integers $x$ satisfy the inequality $\log_{0.1} (x-15) - \log_{0.1} (301-x) > 0$?
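A brute-force numeric check of the first inequality above (a sketch only; the grid range and step size are arbitrary choices). Working in the domain $x > 9$, the algebra reduces the inequality to $(x-9)(x-7) < 3$, i.e. $9 < x < 10$:

```python
import math

def holds(x):
    """True when log_3(x-9) + log_3(x-7) < 1 (requires x > 9 for the domain)."""
    if x <= 9:
        return False
    return math.log(x - 9, 3) + math.log(x - 7, 3) < 1

# Scan a grid around the candidate interval and collect the points that satisfy it.
samples = [i / 100 for i in range(800, 1200)]   # 8.00 .. 11.99
solutions = [x for x in samples if holds(x)]

print(min(solutions), max(solutions))   # solutions cluster strictly between 9 and 10
```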
https://physics.stackexchange.com/questions/509705/why-metric-tensor-can-be-not-covariantly-constant
# Why can the metric tensor fail to be covariantly constant?

I am learning GR now, and there is a strange thing that I discovered. It is well known that the condition $$\nabla_{\mu}g_{\alpha \beta}=0$$ is imposed when we choose the specific metric-compatible Levi-Civita connection on our manifold. On the other hand, there is a proof that does not use properties of the Levi-Civita connection at all, but only the facts that the covariant derivative obeys the Leibniz rule and respects index-raising rules: $$\nabla_{\alpha}A_{\mu}=\nabla_{\alpha}(g_{\mu\nu}A^{\nu})=g_{\mu\nu}\nabla_{\alpha}A^{\nu}+A^{\nu}\nabla_{\alpha}g_{\mu \nu}$$ Now we say that for any vector $$A$$ the object $$\nabla_{\alpha}A_{\mu}$$ must be a tensor, namely $$\nabla_{\alpha}A_{\mu} \equiv D_{\alpha \mu}$$, and it must respect index-raising rules, so by definition: $$g^{\mu \nu}\nabla_{\alpha}A_{\mu}=g^{\mu\nu}D_{\alpha\nu}=D_{\alpha}^{\mu}=\nabla_{\alpha}A^{\mu}\implies\nabla_{\alpha}A_{\mu}=g_{\mu\nu}\nabla_{\alpha}A^{\nu}$$ Now we immediately conclude that in the first equation $$A^{\nu}\nabla_{\alpha}g_{\mu \nu}=0$$, and hence $$\nabla_{\alpha}g_{\mu \nu}=0$$. There must be some mistake, but I can't see it.

• From the definition of $D$ alone, and without assuming that the connection is that of Levi-Civita, it does not follow that $D^\mu_\alpha =\nabla_\alpha A^\mu$, as you instead assume. This is exactly what you want to prove. Oct 23, 2019 at 9:13

You made a mistake when you wanted to raise one of the indices of $$D_{\alpha \nu}$$, i.e., $$g^{\mu \nu} D_{\alpha \nu} \neq D^\mu_\alpha \,,$$ since the connection is not Levi-Civita and you are not allowed to "drag" the metric through the covariant derivative. When you write $$g^{\mu \nu} D_{\alpha \nu} = D^\mu_\alpha$$, you implicitly assume that the connection is Levi-Civita.
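The answer's point — index raising does not commute with the covariant derivative unless the connection is metric-compatible — can be seen in a deliberately crude one-dimensional toy. Take all connection coefficients to be zero, so that $\nabla_x$ is just $d/dx$: this is a perfectly good connection, but not the Levi-Civita connection of a non-constant metric. The specific metric and vector below are arbitrary choices:

```python
import math

def d(f, x, h=1e-6):
    """Central finite-difference derivative; stands in for nabla_x, since
    all connection coefficients are chosen to be zero in this toy."""
    return (f(x + h) - f(x - h)) / (2 * h)

g = lambda x: math.exp(2 * x)        # metric component g_11 (arbitrary, non-constant)
A_up = lambda x: math.sin(x)         # vector component A^1 (arbitrary)
A_down = lambda x: g(x) * A_up(x)    # lowered index: A_1 = g_11 * A^1

x0 = 0.7
lhs = d(A_down, x0)                  # nabla_x A_1
rhs = g(x0) * d(A_up, x0)            # g_11 * nabla_x A^1
leibniz = A_up(x0) * d(g, x0)        # A^1 * nabla_x g_11  (the non-metricity term)

print(lhs - rhs, leibniz)            # the mismatch equals the Leibniz term exactly
```

Here lhs differs from rhs by exactly $A^1 \nabla_x g_{11}$, which is nonzero, so the step $g^{\mu \nu} D_{\alpha \nu} = D^\mu_\alpha$ really does smuggle in metric compatibility.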
https://waseda.pure.elsevier.com/en/publications/dichotomy-in-a-scaling-limit-underwiener-measure-with-density
# Dichotomy in a scaling limit under Wiener measure with density

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

## Abstract

In general, if the large deviation principle holds for a sequence of probability measures and its rate functional admits a unique minimizer, then the measures asymptotically concentrate in its neighborhood, so that the law of large numbers follows. This paper discusses the situation in which the rate functional has two distinct minimizers, for a simple model described by pinned Wiener measures with certain densities involving a scaling. We study their asymptotic behavior and determine to which minimizers they converge, based on a more precise investigation than the large deviation level.

Original language: English
Pages: 173-183
Number of pages: 11
Journal: Electronic Communications in Probability
Volume: 12
DOI: https://doi.org/10.1214/ECP.v12-1271
Publication status: Published - 2007 Jan 1
Externally published: Yes

## Keywords

• Concentration
• Large deviation principle
• Minimizers
• Pinned Wiener measure
• Scaling limit

## ASJC Scopus subject areas

• Statistics and Probability
• Statistics, Probability and Uncertainty
https://pressbooks.bccampus.ca/collegephysics/chapter/rotational-kinetic-energy-work-and-energy-revisited/
Chapter 10 Rotational Motion and Angular Momentum # 70 10.4 Rotational Kinetic Energy: Work and Energy Revisited ### Summary • Derive the equation for rotational work. • Calculate rotational kinetic energy. • Demonstrate the Law of Conservation of Energy. In this module, we will learn about work and energy associated with rotational motion. Figure 1 shows a worker using an electric grindstone propelled by a motor. Sparks are flying, and noise and vibration are created as layers of steel are pared from the pole. The stone continues to turn even after the motor is turned off, but it is eventually brought to a stop by friction. Clearly, the motor had to work to get the stone spinning. This work went into heat, light, sound, vibration, and considerable rotational kinetic energy. Work must be done to rotate objects such as grindstones or merry-go-rounds. Work was defined in Chapter 6 Uniform Circular Motion and Gravitation for translational motion, and we can build on that knowledge when considering work done in rotational motion. The simplest rotational situation is one in which the net force is exerted perpendicular to the radius of a disk (as shown in Figure 2) and remains perpendicular as the disk starts to rotate. The force is parallel to the displacement, and so the net work done is the product of the force times the arc length traveled: $\boldsymbol{\textbf{net }W=(\textbf{net }F)\Delta{s}.}$ To get torque and other rotational quantities into the equation, we multiply and divide the right-hand side of the equation by$\boldsymbol{r},$and gather terms: $\boldsymbol{\textbf{net }W=(r\textbf{ net }F)}$$\boldsymbol{\frac{\Delta{s}}{r}}.$ We recognize that$\boldsymbol{r\textbf{ net }F=\textbf{net }\tau}$and$\boldsymbol{\Delta{s}/r=\theta},$so that $\boldsymbol{\textbf{net }W=(\textbf{net }\tau)\theta.}$ This equation is the expression for rotational work. It is very similar to the familiar definition of translational work as force multiplied by distance. 
Here, torque is analogous to force, and angle is analogous to distance. The equation$\boldsymbol{\textbf{net }W=(\textbf{net }\tau)\theta}$is valid in general, even though it was derived for a special case. To get an expression for rotational kinetic energy, we must again perform some algebraic manipulations. The first step is to note that$\boldsymbol{\textbf{net }\tau=I\alpha},$so that $\boldsymbol{\textbf{net }W=I\alpha\theta.}$ ### MAKING CONNECTIONS Work and energy in rotational motion are completely analogous to work and energy in translational motion, first presented in Chapter 6 Uniform Circular Motion and Gravitation. Now, we solve one of the rotational kinematics equations for$\boldsymbol{\alpha\theta}.$We start with the equation $\boldsymbol{\omega^2=\omega_0^2+2\alpha\theta}.$ Next, we solve for$\boldsymbol{\alpha\theta}:$ $\boldsymbol{\alpha\theta\:=}$$\boldsymbol{\frac{\omega^2-\omega_0^2}{2}}.$ Substituting this into the equation for net$\boldsymbol{W}$and gathering terms yields $\boldsymbol{\textbf{net }W\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2\:-}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega_0^2}.$ This equation is the work-energy theorem for rotational motion only. As you may recall, net work changes the kinetic energy of a system. Through an analogy with translational motion, we define the term$\boldsymbol{(\frac{1}{2})I\omega^2}$to be rotational kinetic energy$\textbf{KE}_{\textbf{rot}}$for an object with a moment of inertia$\boldsymbol{I}$and an angular velocity$\boldsymbol{\omega}:$ $\textbf{KE}_{\textbf{rot}}\boldsymbol{=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$ The expression for rotational kinetic energy is exactly analogous to translational kinetic energy, with$\boldsymbol{I}$being analogous to$\boldsymbol{m}$and$\boldsymbol{\omega}$to$\boldsymbol{v}.$Rotational kinetic energy has important effects. 
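The result $\boldsymbol{\textbf{net }W=\frac{1}{2}I\omega^2-\frac{1}{2}I\omega_0^2}$ is an exact algebraic consequence of the kinematic relation used above, which a short numeric sketch makes concrete (the torque, moment of inertia, and angle are illustrative values only; they happen to match the grindstone example that follows):

```python
# Numeric check of the rotational work-energy theorem (illustrative values only).
tau = 64.0      # net torque, N*m (assumed)
I = 4.352       # moment of inertia, kg*m^2 (assumed)
theta = 1.00    # rotation angle, rad
omega0 = 0.0    # initial angular velocity, rad/s

net_work = tau * theta                               # net W = (net tau) * theta
alpha = tau / I                                      # net tau = I * alpha
omega = (omega0 ** 2 + 2 * alpha * theta) ** 0.5     # omega^2 = omega0^2 + 2*alpha*theta
delta_ke = 0.5 * I * omega ** 2 - 0.5 * I * omega0 ** 2

print(net_work, delta_ke)   # the two quantities agree, as the derivation requires
```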
Flywheels, for example, can be used to store large amounts of rotational kinetic energy in a vehicle, as seen in Figure 3.

### Example 1: Calculating the Work and Energy for Spinning a Grindstone

Consider a person who spins a large grindstone by placing her hand on its edge and exerting a force through part of a revolution as shown in Figure 4. In this example, we verify that the work done by the torque she exerts equals the change in rotational energy. (a) How much work is done if she exerts a force of 200 N through a rotation of $\boldsymbol{1.00\textbf{ rad }(57.3^\circ)}?$ The force is kept perpendicular to the grindstone's 0.320-m radius at the point of application, and the effects of friction are negligible. (b) What is the final angular velocity if the grindstone has a mass of 85.0 kg? (c) What is the final rotational kinetic energy? (It should equal the work.)

Strategy

To find the work, we can use the equation $\boldsymbol{\textbf{net }W=(\textbf{net }\tau)\theta}.$ We have enough information to calculate the torque and are given the rotation angle. In the second part, we can find the final angular velocity using one of the kinematic relationships. In the last part, we can calculate the rotational kinetic energy from its expression in $\boldsymbol{\textbf{KE}_{\textbf{rot}}=\frac{1}{2}I\omega^2}.$

Solution for (a)

The net work is expressed in the equation

$\boldsymbol{\textbf{net }W=(\textbf{net }\tau)\theta},$

where net $\boldsymbol{\tau}$ is the applied force multiplied by the radius $\boldsymbol{(rF)}$ because there is no retarding friction, and the force is perpendicular to $\boldsymbol{r}.$ The angle $\boldsymbol{\theta}$ is given.
Substituting the given values in the equation above yields $\begin{array}{lcl} \boldsymbol{\textbf{net }W} & \boldsymbol{=} & \boldsymbol{rF\theta=(0.320\textbf{ m})(200\textbf{ N})(1.00\textbf{ rad})} \\ {} & \boldsymbol{=} & \boldsymbol{64.0\textbf{ N}\cdotp\textbf{m.}} \end{array}$ Noting that$\boldsymbol{1\textbf{ N}\cdotp\textbf{m}=1\textbf{ J}},$ $\boldsymbol{\textbf{net }W=64.0\textbf{ J.}}$ Solution for (b) To find$\boldsymbol{\omega}$from the given information requires more than one step. We start with the kinematic relationship in the equation $\boldsymbol{\omega^2=\omega_0^2+2\alpha\theta}.$ Note that$\boldsymbol{\omega_0=0}$because we start from rest. Taking the square root of the resulting equation gives $\boldsymbol{\omega=(2\alpha\theta)^{1/2}}.$ Now we need to find$\boldsymbol{\alpha}.$One possibility is $\boldsymbol{\alpha\:=}$$\boldsymbol{\frac{\textbf{net }\tau}{I}},$ where the torque is $\boldsymbol{\textbf{net }\tau=rF=(0.320\textbf{ m})(200\textbf{ N})=64.0\textbf{ N}\cdotp\textbf{m}}.$ The formula for the moment of inertia for a disk is found in Making Connections: $\boldsymbol{I\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{MR^2=0.5(85.0\textbf{ kg})(0.320\textbf{ m})^2=4.352\textbf{ kg}\cdotp\textbf{m}^2.}$ Substituting the values of torque and moment of inertia into the expression for$\boldsymbol{\alpha},$we obtain $\boldsymbol{\alpha\:=}$$\boldsymbol{\frac{64.0\textbf{ N}\cdotp\textbf{m}}{4.352\textbf{ kg}\cdotp\textbf{m}^2}}$$\boldsymbol{=14.7}$$\boldsymbol{\frac{rad}{s^2}}.$ Now, substitute this value and the given value for$\boldsymbol{\theta}$into the above expression for$\boldsymbol{\omega}:$ $\boldsymbol{\omega=(2\alpha\theta)^{1/2}=[2(14.7\frac{rad}{s^2})(1.00\textbf{ rad})]^{1/2}=5.42\frac{rad}{s}}.$ Solution for (c) The final rotational kinetic energy is $\boldsymbol{\textbf{KE}_{\textbf{rot}}\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$ Both$\boldsymbol{I}$and$\boldsymbol{\omega}$were found above. 
Thus, $\boldsymbol{\textbf{KE}_{\textbf{rot}}=(0.5)(4.352\textbf{ kg}\cdotp\textbf{m}^2)(5.42\textbf{ rad/s})^2=64.0\textbf{ J}}.$ Discussion The final rotational kinetic energy equals the work done by the torque, which confirms that the work done went into rotational kinetic energy. We could, in fact, have used an expression for energy instead of a kinematic relation to solve part (b). We will do this in later examples. Helicopter pilots are quite familiar with rotational kinetic energy. They know, for example, that a point of no return will be reached if they allow their blades to slow below a critical angular velocity during flight. The blades lose lift, and it is impossible to immediately get the blades spinning fast enough to regain it. Rotational kinetic energy must be supplied to the blades to get them to rotate faster, and enough energy cannot be supplied in time to avoid a crash. Because of weight limitations, helicopter engines are too small to supply both the energy needed for lift and to replenish the rotational kinetic energy of the blades once they have slowed down. The rotational kinetic energy is put into them before takeoff and must not be allowed to drop below this crucial level. One possible way to avoid a crash is to use the gravitational potential energy of the helicopter to replenish the rotational kinetic energy of the blades by losing altitude and aligning the blades so that the helicopter is spun up in the descent. Of course, if the helicopter’s altitude is too low, then there is insufficient time for the blade to regain lift before reaching the ground. ### PROBLEM-SOLVING STRATEGY FOR ROTATIONAL ENERGY 1. Determine that energy or work is involved in the rotation. 2. Determine the system of interest. A sketch usually helps. 3. Analyze the situation to determine the types of work and energy involved. 4. For closed systems, mechanical energy is conserved. 
That is,$\boldsymbol{\textbf{KE}_{\textbf{i}}+\textbf{PE}_{\textbf{i}}=\textbf{KE}_{\textbf{f}}+\textbf{PE}_{\textbf{f}}}.$Note that$\textbf{KE}_{\textbf{i}}$and$\textbf{KE}_{\textbf{f}}$may each include translational and rotational contributions. 5. For open systems, mechanical energy may not be conserved, and other forms of energy (referred to previously as$\boldsymbol{OE}$), such as heat transfer, may enter or leave the system. Determine what they are, and calculate them as necessary. 6. Eliminate terms wherever possible to simplify the algebra. 7. Check the answer to see if it is reasonable. ### Example 2: Calculating Helicopter Energies A typical small rescue helicopter, similar to the one in Figure 5, has four blades, each is 4.00 m long and has a mass of 50.0 kg. The blades can be approximated as thin rods that rotate about one end of an axis perpendicular to their length. The helicopter has a total loaded mass of 1000 kg. (a) Calculate the rotational kinetic energy in the blades when they rotate at 300 rpm. (b) Calculate the translational kinetic energy of the helicopter when it flies at 20.0 m/s, and compare it with the rotational energy in the blades. (c) To what height could the helicopter be raised if all of the rotational kinetic energy could be used to lift it? Strategy Rotational and translational kinetic energies can be calculated from their definitions. The last part of the problem relates to the idea that energy can change form, in this case from rotational kinetic energy to gravitational potential energy. 
Solution for (a) The rotational kinetic energy is $\boldsymbol{\textbf{KE}_{\textbf{rot}}\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$ We must convert the angular velocity to radians per second and calculate the moment of inertia before we can find$\textbf{KE}_{\textbf{rot}}.$The angular velocity$\boldsymbol{\omega}$is $\boldsymbol{\omega\:=}$$\boldsymbol{\frac{300\textbf{ rev}}{1.00\textbf{ min}}}$$\boldsymbol{\cdotp}$$\boldsymbol{\frac{2\pi\textbf{ rad}}{1\textbf{ rev}}}$$\boldsymbol{\cdotp}$$\boldsymbol{\frac{1.00\textbf{ min}}{60.0\textbf{ s}}}$$\boldsymbol{=31.4}$$\frac{\textbf{rad}}{\textbf{s}}.$ The moment of inertia of one blade will be that of a thin rod rotated about its end, found in Making Connections. The total$\boldsymbol{I}$is four times this moment of inertia, because there are four blades. Thus, $\boldsymbol{I=4}$$\boldsymbol{\frac{M\ell^2}{3}}$$\boldsymbol{=4\times}$$\boldsymbol{\frac{(50.0\textbf{ kg})(4.00\textbf{ m})^2}{3}}$$\boldsymbol{=1067\textbf{ kg}\cdotp\textbf{m}^2}.$ Entering$\boldsymbol{\omega}$and$\boldsymbol{I}$into the expression for rotational kinetic energy gives $\begin{array}{lcl} \textbf{KE}_{\textbf{rot}} & \boldsymbol{=} & \boldsymbol{0.5(1067\textbf{ kg}\cdotp\textbf{m}^2)(31.4\textbf{ rad/s})^2} \\ {} & \boldsymbol{=} & \boldsymbol{5.26\times10^5\textbf{ J}} \end{array}$ Solution for (b) Translational kinetic energy was defined in Chapter 6 Uniform Circular Motion and Gravitation. Entering the given values of mass and velocity, we obtain $\boldsymbol{\textbf{KE}_{\textbf{trans}}\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{mv^2=0.5(1000\textbf{ kg})(20.0\textbf{ m/s})^2=2.00\times10^5\textbf{ J.}}$ To compare kinetic energies, we take the ratio of translational kinetic energy to rotational kinetic energy. 
This ratio is

$\boldsymbol{\frac{2.00\times10^5\textbf{ J}}{5.26\times10^5\textbf{ J}}}$$\boldsymbol{=\:0.380.}$

Solution for (c)

At the maximum height, all rotational kinetic energy will have been converted to gravitational energy. To find this height, we equate those two energies:

$\boldsymbol{\textbf{KE}_{\textbf{rot}}=\textbf{PE}_{\textbf{grav}}}$

or

$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2=mgh.}$

We now solve for $\boldsymbol{h}$ and substitute known values into the resulting equation

$\boldsymbol{h\:=}$$\boldsymbol{\frac{\frac{1}{2}I\omega^2}{mg}}$$\boldsymbol{=}$$\boldsymbol{\frac{5.26\times10^5\textbf{ J}}{(1000\textbf{ kg})(9.80\textbf{ m/s}^2)}}$$\boldsymbol{=\:53.7\textbf{ m.}}$

Discussion

The ratio of translational energy to rotational kinetic energy is only 0.380. This ratio tells us that most of the kinetic energy of the helicopter is in its spinning blades—something you probably would not suspect. The 53.7 m height to which the helicopter could be raised with the rotational kinetic energy is also impressive, again emphasizing the amount of rotational kinetic energy in the blades.

### MAKING CONNECTIONS

Conservation of energy includes rotational motion, because rotational kinetic energy is another form of $\textbf{KE}.$ Chapter 6 Uniform Circular Motion and Gravitation has a detailed treatment of conservation of energy.

# How Thick Is the Soup? Or Why Don't All Objects Roll Downhill at the Same Rate?

One of the quality controls in a tomato soup factory consists of rolling filled cans down a ramp. If they roll too fast, the soup is too thin. Why should cans of identical size and mass roll down an incline at different rates? And why should the thickest soup roll the slowest? The easiest way to answer these questions is to consider energy. Suppose each can starts down the ramp from rest.
Each can starting from rest means each starts with the same gravitational potential energy$\textbf{PE}_{\textbf{grav}},$ which is converted entirely to$\textbf{KE},$provided each rolls without slipping.$\textbf{KE},$however, can take the form of$\textbf{KE}_{\textbf{trans}}$or$\textbf{KE}_{\textbf{rot}},$and total$\textbf{KE}$is the sum of the two. If a can rolls down a ramp, it puts part of its energy into rotation, leaving less for translation. Thus, the can goes slower than it would if it slid down. Furthermore, the thin soup does not rotate, whereas the thick soup does, because it sticks to the can. The thick soup thus puts more of the can’s original gravitational potential energy into rotation than the thin soup, and the can rolls more slowly, as seen in Figure 6. Assuming no losses due to friction, there is only one force doing work—gravity. Therefore the total work done is the change in kinetic energy. As the cans start moving, the potential energy is changing into kinetic energy. Conservation of energy gives $\boldsymbol{\textbf{PE}_{\textbf{i}}=\textbf{KE}_{\textbf{f}}}.$ More specifically, $\boldsymbol{\textbf{PE}_{\textbf{grav}}=\textbf{KE}_{\textbf{trans}}+\textbf{KE}_{\textbf{rot}}}$ or $\boldsymbol{mgh\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{mv^2\:+}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$ So, the initial$\boldsymbol{mgh}$is divided between translational kinetic energy and rotational kinetic energy; and the greater$\boldsymbol{I}$is, the less energy goes into translation. If the can slides down without friction, then$\boldsymbol{\omega=0}$and all the energy goes into translation; thus, the can goes faster. ### TAKE-HOME EXPERIMENT Locate several cans each containing different types of food. First, predict which can will win the race down an inclined plane and explain why. See if your prediction is correct. 
You could also do this experiment by collecting several empty cylindrical containers of the same size and filling them with different materials such as wet or dry sand.

### Example 3: Calculating the Speed of a Cylinder Rolling Down an Incline

Calculate the final speed of a solid cylinder that rolls down a 2.00-m-high incline. The cylinder starts from rest, has a mass of 0.750 kg, and has a radius of 4.00 cm.

Strategy

We can solve for the final velocity using conservation of energy, but we must first express rotational quantities in terms of translational quantities to end up with $\boldsymbol{v}$ as the only unknown.

Solution

Conservation of energy for this situation is written as described above:

$\boldsymbol{mgh\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{mv^2\:+}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$

Before we can solve for $\boldsymbol{v},$ we must get an expression for $\boldsymbol{I}$ from Chapter 10.3 Making Connections. Because $\boldsymbol{v}$ and $\boldsymbol{\omega}$ are related (note here that the cylinder is rolling without slipping), we must also substitute the relationship $\boldsymbol{\omega=v/R}$ into the expression.
These substitutions yield $\boldsymbol{mgh\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{mv^2\:+}$$\boldsymbol{\frac{1}{2}(\frac{1}{2}}$$\boldsymbol{mR^2}$$\boldsymbol{)(\frac{v^2}{R^2})}.$ Interestingly, the cylinder’s radius$\boldsymbol{R}$and mass$\boldsymbol{m}$cancel, yielding $\boldsymbol{gh\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{v^2\:+}$$\boldsymbol{\frac{1}{4}}$$\boldsymbol{v^2\:=}$$\boldsymbol{\frac{3}{4}}$$\boldsymbol{v^2}.$ Solving algebraically, the equation for the final velocity$\boldsymbol{v}$gives $\boldsymbol{v\:=}$$\boldsymbol{(\frac{4gh}{3})^{1/2}}.$ Substituting known values into the resulting expression yields $\boldsymbol{v\:=}$$[$$\boldsymbol{\frac{4(9.80\textbf{ m/s}^2)(2.00\textbf{ m})}{3}}$$]$$\boldsymbol{^{1/2}=5.11\textbf{ m/s}.}$ Discussion Because$\boldsymbol{m}$and$\boldsymbol{R}$cancel, the result$\boldsymbol{v=(\frac{4}{3}gh)^{1/2}}$is valid for any solid cylinder, implying that all solid cylinders will roll down an incline at the same rate independent of their masses and sizes. (Rolling cylinders down inclines is what Galileo actually did to show that objects fall at the same rate independent of mass.) Note that if the cylinder slid without friction down the incline without rolling, then the entire gravitational potential energy would go into translational kinetic energy. Thus,$\boldsymbol{\frac{1}{2}mv^2=mgh}$and$\boldsymbol{v=(2gh)^{1/2}},$which is 22% greater than$\boldsymbol{(4gh/3)^{1/2}}.$That is, the cylinder would go faster at the bottom. ### Check Your Understanding Analogy of Rotational and Translational Kinetic Energy 1: Is rotational kinetic energy completely analogous to translational kinetic energy? What, if any, are their differences? Give an example of each type of kinetic energy. ### PHET EXPLORATIONS: MY SOLAR SYSTEM Build your own system of heavenly bodies and watch the gravitational ballet. 
With this orbit simulator, you can set initial positions, velocities, and masses of 2, 3, or 4 bodies, and then see them orbit each other. # Section Summary • The rotational kinetic energy$\textbf{KE}_{\textbf{rot}}$for an object with a moment of inertia$\boldsymbol{I}$and an angular velocity$\boldsymbol{\omega}$is given by $\boldsymbol{\textbf{KE}_{\textbf{rot}}\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2}.$ • Helicopters store large amounts of rotational kinetic energy in their blades. This energy must be put into the blades before takeoff and maintained until the end of the flight. The engines do not have enough power to simultaneously provide lift and put significant rotational energy into the blades. • Work and energy in rotational motion are completely analogous to work and energy in translational motion. • The equation for the work-energy theorem for rotational motion is, $\boldsymbol{\textbf{ net }W\:=}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega^2\:-}$$\boldsymbol{\frac{1}{2}}$$\boldsymbol{I\omega_0^2}.$ ### Conceptual Questions 1: Describe the energy transformations involved when a yo-yo is thrown downward and then climbs back up its string to be caught in the user’s hand. 2: What energy transformations are involved when a dragster engine is revved, its clutch let out rapidly, its tires spun, and it starts to accelerate forward? Describe the source and transformation of energy at each step. 3: The Earth has more rotational kinetic energy now than did the cloud of gas and dust from which it formed. Where did this energy come from? ### Problems & Exercises 1: This problem considers energy and work aspects of Chapter 10.3 Example 1—use data from that example as needed. (a) Calculate the rotational kinetic energy in the merry-go-round plus child when they have an angular velocity of 20.0 rpm. (b) Using energy considerations, find the number of revolutions the father will have to push to achieve this angular velocity starting from rest. 
(c) Again, using energy considerations, calculate the force the father must exert to stop the merry-go-round in two revolutions.

2: What is the final velocity of a hoop that rolls without slipping down a 5.00-m-high hill, starting from rest?

3: (a) Calculate the rotational kinetic energy of Earth on its axis. (b) What is the rotational kinetic energy of Earth in its orbit around the Sun?

4: Calculate the rotational kinetic energy in the motorcycle wheel (Figure 6) if its angular velocity is 120 rad/s. Assume M = 12.0 kg, R1 = 0.280 m, and R2 = 0.330 m.

5: A baseball pitcher throws the ball in a motion where there is rotation of the forearm about the elbow joint as well as other movements. If the linear velocity of the ball relative to the elbow joint is 20.0 m/s at a distance of 0.480 m from the joint and the moment of inertia of the forearm is $\boldsymbol{0.500\textbf{ kg}\cdotp\textbf{m}^2},$ what is the rotational kinetic energy of the forearm?

6: While punting a football, a kicker rotates his leg about the hip joint. The moment of inertia of the leg is $\boldsymbol{3.75\textbf{ kg}\cdotp\textbf{m}^2}$ and its rotational kinetic energy is 175 J. (a) What is the angular velocity of the leg? (b) What is the velocity of the tip of the punter's shoe if it is 1.05 m from the hip joint? (c) Explain how the football can be given a velocity greater than the tip of the shoe (necessary for a decent kick distance).

7: A bus contains a 1500 kg flywheel (a disk that has a 0.600 m radius) and has a total mass of 10,000 kg. (a) Calculate the angular velocity the flywheel must have to contain enough energy to take the bus from rest to a speed of 20.0 m/s, assuming 90.0% of the rotational kinetic energy can be transformed into translational energy. (b) How high a hill can the bus climb with this stored energy and still have a speed of 3.00 m/s at the top of the hill? Explicitly show how you follow the steps in the Problem-Solving Strategy for Rotational Energy.
8: A ball with an initial velocity of 8.00 m/s rolls up a hill without slipping. (a) Treating the ball as a spherical shell, calculate the vertical height it reaches. (b) Repeat the calculation for the same ball if it slides up the hill without rolling.

9: While exercising in a fitness center, a man lies face down on a bench and lifts a weight with one lower leg by contracting the muscles in the back of the upper leg. (a) Find the angular acceleration produced given the mass lifted is 10.0 kg at a distance of 28.0 cm from the knee joint, the moment of inertia of the lower leg is $0.900\ \mathrm{kg\cdot m^2},$ the muscle force is 1500 N, and its effective perpendicular lever arm is 3.00 cm. (b) How much work is done if the leg rotates through an angle of $20.0^\circ$ with a constant force exerted by the muscle?

10: To develop muscle tone, a woman lifts a 2.00-kg weight held in her hand. She uses her biceps muscle to flex the lower arm through an angle of $60.0^\circ.$ (a) What is the angular acceleration if the weight is 24.0 cm from the elbow joint, her forearm has a moment of inertia of $0.250\ \mathrm{kg\cdot m^2},$ and the net force she exerts is 750 N at an effective perpendicular lever arm of 2.00 cm? (b) How much work does she do?

11: Consider two cylinders that start down identical inclines from rest except that one is frictionless. Thus one cylinder rolls without slipping, while the other slides frictionlessly without rolling. They both travel a short distance at the bottom and then start up another incline. (a) Show that they both reach the same height on the other incline, and that this height is equal to their original height. (b) Find the ratio of the time the rolling cylinder takes to reach the height on the second incline to the time the sliding cylinder takes to reach the height on the second incline. (c) Explain why the time for the rolling motion is greater than that for the sliding motion.
12: What is the moment of inertia of an object that rolls without slipping down a 2.00-m-high incline starting from rest, and has a final velocity of 6.00 m/s? Express the moment of inertia as a multiple of $MR^2,$ where $M$ is the mass of the object and $R$ is its radius.

13: Suppose a 200-kg motorcycle has two wheels like the one described in Problem 10.15 and is heading toward a hill at a speed of 30.0 m/s. (a) How high can it coast up the hill, if you neglect friction? (b) How much energy is lost to friction if the motorcycle only gains an altitude of 35.0 m before coming to rest?

14: In softball, the pitcher throws with the arm fully extended (straight at the elbow). In a fast pitch the ball leaves the hand with a speed of 139 km/h. (a) Find the rotational kinetic energy of the pitcher’s arm given its moment of inertia is $0.720\ \mathrm{kg\cdot m^2}$ and the ball leaves the hand at a distance of 0.600 m from the pivot at the shoulder. (b) What force did the muscles exert to cause the arm to rotate if their effective perpendicular lever arm is 4.00 cm and the ball is 0.156 kg?

15: Construct Your Own Problem Consider the work done by a spinning skater pulling her arms in to increase her rate of spin. Construct a problem in which you calculate the work done with a “force multiplied by distance” calculation and compare it to the skater’s increase in kinetic energy.

## Glossary

work-energy theorem
if one or more external forces act upon a rigid object, causing its kinetic energy to change from $\mathrm{KE}_1$ to $\mathrm{KE}_2,$ then the work $W$ done by the net force is equal to the change in kinetic energy

rotational kinetic energy
the kinetic energy due to the rotation of an object. This is part of its total kinetic energy

### Solutions

1: Yes, rotational and translational kinetic energy are exact analogs.
They both are the energy of motion involved with the coordinated (non-random) movement of mass relative to some reference frame. The only difference between rotational and translational kinetic energy is that translational is straight line motion while rotational is not. An example of both rotational and translational kinetic energy is found in a bike tire while being ridden down a bike path. The rotational motion of the tire means it has rotational kinetic energy while the movement of the bike along the path means the tire also has translational kinetic energy. If you were to lift the front wheel of the bike and spin it while the bike is stationary, then the wheel would have only rotational kinetic energy relative to the Earth.

Problems & Exercises

1: (a) 185 J (b) 0.0785 rev (c) $W = 9.81\ \mathrm{N}$

3: (a) $2.57\times10^{29}\ \mathrm{J}$ (b) $\mathrm{KE}_{\mathrm{rot}} = 2.65\times10^{33}\ \mathrm{J}$

5: $\mathrm{KE}_{\mathrm{rot}} = 434\ \mathrm{J}$

7: (a) $128\ \mathrm{rad/s}$ (b) $19.9\ \mathrm{m}$

9: (a) $10.4\ \mathrm{rad/s}^2$ (b) $\text{net } W = 6.11\ \mathrm{J}$

14: (a) 1.49 kJ (b) $2.52\times10^4\ \mathrm{N}$
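Several of the rolling-body problems above use the same energy-conservation template, $mgh = \frac{1}{2}mv^2\left(1 + I/MR^2\right)$. The short numerical sketch below (assuming $g = 9.80\ \mathrm{m/s^2}$; the variable names and the computed numbers are our own check, not part of the text's answer key) applies it to Problems 2 and 12:

```python
import math

g = 9.80  # m/s^2

# Problem 2: a hoop (I = M R^2) rolling from rest down a 5.00 m hill.
# Energy conservation: m g h = (1/2) m v^2 (1 + I/(M R^2)).
h = 5.00
v_hoop = math.sqrt(2 * g * h / (1 + 1))    # reduces to sqrt(g h) for a hoop
print(f"Problem 2: v = {v_hoop:.2f} m/s")  # 7.00 m/s

# Problem 12: invert the same relation for I/(M R^2), given h and final v.
h12, v12 = 2.00, 6.00
k = 2 * g * h12 / v12**2 - 1               # I = k * M R^2
print(f"Problem 12: I = {k:.4f} M R^2")
```

For the hoop, $I = MR^2$, so the mass and radius cancel and $v = \sqrt{gh}$; Problem 12 simply inverts the same relation.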
https://sufficientlywise.org/2017/04/26/can-hamiltonian-mechanics-describe-dissipative-systems/
# Can Hamiltonian mechanics describe dissipative systems?

TL;DR – Time dependent Hamiltonian mechanics is for conservative forces in non-inertial frames, not for non-conservative forces.

In a previous post we saw how Hamiltonian and Newtonian mechanics are different, as the first cannot describe dissipative systems. Yet, if we allow the Hamiltonian to be time dependent, the system would appear to change its energy. Shouldn’t that allow us to describe dissipative systems? In fact, in the literature one can find time dependent Hamiltonians for damped harmonic oscillators. What is going on? Can Hamiltonian mechanics describe dissipative systems or not? If not, what does it mean when the Hamiltonian changes in time?

As we’ll see, with time dependent systems it is easy to get confused. Changes of the time variable may introduce effects that are not usually discussed and are therefore unexpected. We’ll work on three different systems: a particle under air friction (i.e. linear drag), a system whose mass increases in time and a free particle trajectory under a non-linear time transformation. We’ll see that all three systems have the same trajectories, so they look the same, but only the last two admit a Hamiltonian formulation that matches the predictions for momentum and energy. In both those cases momentum is conserved but the mass changes, or appears to change, in time, making the Hamiltonian time dependent.

1. Quick refresh

Let’s go over some key definitions first. We’ll use Newton’s second law:

\label{Fm_0a} F=m_0a

as it applies to a constant mass $m_0$ and a force $F(x,v)$. We reserve the symbol $m$ for the case of variable mass. We’ll use Hamilton’s equations:

\begin{aligned} \frac{dx}{dt} &= \frac{\partial H}{\partial p} \\ \frac{dp}{dt} &= - \frac{\partial H}{\partial x} \end{aligned} \label{Hamilton}

where $H(x,p)$ is the Hamiltonian for the system.

2.
A Hamiltonian for a particle under linear drag

Suppose we have a massive particle subject to air friction, modeled by linear drag. In this case $F=-bv$ and we have:

\label{dragEq} m_0a+bv=0

This equation admits solutions of the type:

\begin{aligned} x &= x_0+v_0\frac{m_0}{b}(1-e^{-\frac{b}{m_0}t}) \\ v &= v_0 e^{-\frac{b}{m_0}t} \\ a &= -\frac{b}{m_0} v_0 e^{-\frac{b}{m_0}t} \end{aligned} \label{dragSol}

Consider now the Hamiltonian:

\label{dragHam} H = \frac{1}{2} \frac{p^2}{m_0} e^{-\frac{b}{m_0}t}

If we apply \eqref{Hamilton} we have:

\begin{aligned} \frac{dx}{dt} &= \frac{\partial H}{\partial p} = \frac{p}{m_0} e^{-\frac{b}{m_0}t} \\ \frac{dp}{dt} &= - \frac{\partial H}{\partial x} = 0 \end{aligned}

Note how the expression for the velocity in \eqref{dragSol} matches the equation above. The solution therefore is

\begin{aligned} p &= p_0 \equiv m_0 v_0 \\ x &= x_0+\frac{p_0}{m_0}\frac{m_0}{b}(1-e^{-\frac{b}{m_0}t}) \end{aligned} \label{dragHamSol}

which looks equivalent to the solution for a massive particle under linear drag. But there is something not quite right. First of all, the momentum is constant. As the mass is constant and the velocity decreases, we would have expected the momentum to be:

\label{dragExpMom} p = m_0 v = m_0 v_0 e^{-\frac{b}{m_0}t}

For the energy, we would have expected:

\begin{equation*} E = \frac{1}{2} m_0 v^2 = \frac{1}{2} m_0 v_0^2 e^{-2\frac{b}{m_0}t} \end{equation*}

Since $p = p_0 = m_0 v_0$ we have:

\begin{equation*} E = \frac{1}{2} \frac{p^2}{m_0} e^{-2\frac{b}{m_0}t} \end{equation*}

which does not match \eqref{dragHam}. It seems that we have the right kinematics, the right trajectory in space, but it looks like the dynamics, the momentum and the energy, is off. Are we describing the correct system?

3.
Variable mass system

If we stare at \eqref{dragExpMom} we may get an idea: if the mass were not a constant, but increased in time at the same exact rate the velocity decreased, the momentum would be constant. Maybe the Hamiltonian in \eqref{dragHam} is really describing such a system?

Suppose we have a massive body. Suppose all around it little tiny specks precipitate on the body, making it more massive. Suppose they come from all directions equally, such that the total contribution to the momentum is zero. What would be the equation of motion for such a system? As time passes, the body increases in mass but the momentum is unchanged. We can write:

\label{accFma} \frac{d}{dt}(m v) = 0 = \frac{dm}{dt}v + m \frac{dv}{dt}

Note already how similar it is to \eqref{dragEq}. As the mass changes, we can write it as:

m = m_0 \hat{m}(t)

so that $m_0$ is the initial mass and $\hat{m}(t)$ is the increase factor (i.e. equal to $1$ at $t_0$). We can rewrite the equation as:

\begin{equation*} \frac{1}{\hat{m}} (\frac{dm}{dt}v + m \frac{dv}{dt}) = \frac{m_0}{m}\frac{dm}{dt}v + m_0 a \end{equation*}

Suppose that the rate of accretion, as a fraction of the total mass, is constant. We can call this constant $\lambda$ and have:

\label{accMass} \frac{dm}{dt} \frac{1}{m} = \lambda \equiv \frac{b}{m_0}

We now have the same equation as \eqref{dragEq}. While the equation is the same, the physical system we are describing is totally different. Both will describe the same trajectory, they are kinematically equivalent, but the first is losing momentum as it slows down while the second is not. So let’s see how the dynamical variables compare. We can solve \eqref{accMass} to find:

\label{accMassSol} m = m_0 e^{\frac{b}{m_0} t}

We can combine this with the velocity from \eqref{dragSol} and have:

p = m v = m_0 e^{\frac{b}{m_0} t} v_0 e^{-\frac{b}{m_0}t} = m_0 v_0 = p_0

The momentum is now constant and equal to the initial momentum. This is consistent with \eqref{dragHamSol}.
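As a sanity check, here is a small numerical sketch (with illustrative values $m_0 = 1$ kg, $b = 0.5$ kg/s, $v_0 = 2$ m/s, not taken from the post) that integrates $m\,dv/dt = -(dm/dt)\,v$ directly and confirms that the momentum stays at $p_0$ while the velocity decays exactly like the drag solution:

```python
import math

m0, b, v0 = 1.0, 0.5, 2.0    # illustrative values (kg, kg/s, m/s)
lam = b / m0                 # constant fractional accretion rate

# Integrate m dv/dt = -(dm/dt) v with a small explicit Euler step.
dt, t, v = 1e-4, 0.0, v0
while t < 2.0:
    m = m0 * math.exp(lam * t)       # accreting mass m(t)
    v += dt * (-(lam * m) * v / m)   # dv/dt = -(dm/dt) v / m = -lam v
    t += dt

m = m0 * math.exp(lam * t)
print(f"p = m v = {m * v:.4f}   (p0 = {m0 * v0:.4f})")
print(f"v = {v:.4f}   analytic v0 e^(-lam t) = {v0 * math.exp(-lam * t):.4f}")
```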
How about the energy? We have:

H = \frac{1}{2} \frac{p^2}{m} = \frac{1}{2} \frac{p^2}{m_0 e^{\frac{b}{m_0} t}} = \frac{1}{2} \frac{p^2}{m_0} e^{-\frac{b}{m_0} t}

which is consistent with \eqref{dragHam}. The predictions, therefore, seem to be more in line with the variable mass system than with the massive particle under drag. We should be inclined to conclude that the Hamiltonian describes the former and not the latter.

4. Non-linear time transformations

There is another interesting case: one where the system is actually closed (i.e. no exchange of matter and energy). Suppose we have a massive particle with no forces acting on it, and let $\hat{t}$ be the time as measured by an inertial observer:

\begin{equation*} F=m_0\frac{d^2x}{d\hat{t}^2}=0 \end{equation*}

Suppose we change the time variable without changing position:

t=t(\hat{t})

The new variable only depends on the old time variable and the transformation is locally invertible. What happens to the equation of motion? We have:

\begin{equation*} m_0\frac{d^2x}{d\hat{t}^2}= m_0 \frac{d}{d\hat{t}} \frac{dx}{d\hat{t}} =0 = m_0 \frac{dt}{d\hat{t}}\frac{d}{dt} (\frac{dt}{d\hat{t}}\frac{dx}{dt}) \end{equation*}

which can be rewritten as:

\begin{aligned} m &= m_0 \frac{dt}{d\hat{t}} \equiv m_0 \hat{m}(t) \\ \frac{d}{dt} (m\hat{v}) &= 0 \end{aligned} \label{transMass}

Note that this is formally identical to \eqref{accFma}. What is going on? As the time transformation is non-linear, the body’s velocity and acceleration will appear to change. In fact, it may appear to be harder or easier to accelerate: the effective mass changes in time according to the time transformation. This is just an effect of the new time variable: the mass is not actually increasing. What time transformation will give us the motion in \eqref{dragSol}?
In the inertial frame, the solution will be

x=x_0+v_0\hat{t}

which means

\hat{t} = \frac{m_0}{b}(1-e^{-\frac{b}{m_0}t})

Note that the new time variable cannot describe any moment in time after $\hat{t} = \frac{m_0}{b}$. The body may appear to stop, but it’s really the time variable that is converging to a particular instant of time. We can calculate the effective mass by combining this with \eqref{transMass}. We have:

\begin{align*} \hat{t} &= \frac{m_0}{b}(1-e^{-\frac{b}{m_0}t}) \\ \hat{t} - \frac{m_0}{b} &= -\frac{m_0}{b}e^{-\frac{b}{m_0}t} \\ 1 - \frac{b}{m_0}\hat{t} &= e^{-\frac{b}{m_0}t} \\ \log(1 - \frac{b}{m_0}\hat{t}) &= -\frac{b}{m_0}t \\ t &= -\frac{m_0}{b}\log(1- \frac{b}{m_0}\hat{t}) \\ \frac{dt}{d\hat{t}} &= -\frac{m_0}{b} \frac{1}{1- \frac{b}{m_0}\hat{t}} (-\frac{b}{m_0}) = \frac{1}{1- \frac{b}{m_0}\hat{t}} \\ &= \frac{1}{1- \frac{b}{m_0}\frac{m_0}{b}(1-e^{-\frac{b}{m_0}t})} = \frac{1}{1-(1-e^{-\frac{b}{m_0}t})} = e^{\frac{b}{m_0}t} \\ m &= m_0\frac{dt}{d\hat{t}} = m_0e^{\frac{b}{m_0}t} \end{align*}

which is the same expression as \eqref{accMassSol}. In other words, the two cases of non-linear time transformation and variable mass are indistinguishable: they share the same kinematics and dynamics. Well, we could distinguish between them by looking at the motion of other objects around us: that would tell us whether we are in an inertial frame or not.

5. Conclusion

So, can Hamiltonian mechanics describe dissipative systems? What does the Hamiltonian \eqref{dragHam} describe? While the kinematics, the position and velocity, agrees with a particle under linear drag, the dynamics, the momentum and energy, does not. I would be inclined to rule it out. The two other systems match in both kinematics and dynamics. The key is that they still conserve momentum, even if its definition changes in time (i.e. the effective mass is changing). While the energy is changing in time, there is still something conserved by the system. Note that if we added gravity (i.e.
general relativity), we might be able to notice that at some point the exponentially increasing mass turns into a black hole. The third case, though, would remain the same. Therefore, I would be inclined to consider the last one, the non-linear time transformation, to hold the most physical relevance.

The physical insight is the following: a time dependent Hamiltonian system does not allow us to describe an arbitrary system that is changing energy in time. Only Newtonian mechanics can do that. Instead, it provides the correct coordinate-independent generalization for conservative forces. Energy is not invariant under time transformations. If you have studied special relativity, you know that energy is the time component of a covector. Therefore, under a time transformation, it changes like:

H=\frac{d\hat{t}}{dt}\hat{H}

Note that this is exactly what the Hamiltonian \eqref{dragHam} is: the free particle Hamiltonian multiplied by the time derivative. There is a way to rewrite Hamilton’s equations in a different but equivalent form that is manifestly relativistic and introduces an invariant Hamiltonian. But that is a topic for another time.
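For readers who want to see the distinction concretely, a numerical sketch (with illustrative values $m_0 = 1$, $b = 0.5$, $x_0 = 0$, $v_0 = 2$, not taken from the post) integrates Hamilton's equations for \eqref{dragHam}: the position reproduces the closed-form drag trajectory, while the canonical momentum stays constant even as the Newtonian momentum $m_0 v$ decays.

```python
import math

m0, b, x0, v0 = 1.0, 0.5, 0.0, 2.0   # illustrative values
lam = b / m0

# Hamilton's equations for H = p^2 exp(-lam t) / (2 m0):
#   dx/dt = (p/m0) exp(-lam t),   dp/dt = 0
dt, t = 1e-4, 0.0
x, p = x0, m0 * v0
while t < 2.0:
    x += dt * (p / m0) * math.exp(-lam * t)
    t += dt

# Closed-form trajectory of the particle under linear drag:
x_drag = x0 + v0 * (m0 / b) * (1 - math.exp(-lam * t))
print(f"x from Hamilton's equations: {x:.4f}")
print(f"x from the drag solution:    {x_drag:.4f}")
print(f"canonical p: {p:.3f}   Newtonian m0*v: {m0 * v0 * math.exp(-lam * t):.3f}")
```

The two positions agree to the accuracy of the integrator, which is exactly the "same kinematics, different dynamics" point made above.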
http://mathhelpforum.com/latex-help/50222-making-pdf.html
1. ## Making a PDF How would I go about making a pdf using latex? I have mathtype if that helps. 2. Originally Posted by RedBarchetta How would I go about making a pdf using latex? I have mathtype if that helps. I use it, and its great. This is a sample .pdf document created by TeXnic Center [its a part of my linear algebra assignment I have yet to complete ] However, this was the LaTeX code I used to generate the pdf: Code: \documentclass[8pt]{article} \usepackage{amsmath, amssymb, amsthm} \pdfpagewidth 8.5in \pdfpageheight 11in \setlength\topmargin{-1.5in} \setlength\textheight{9.25in} \setlength\textwidth{7in} \setlength\oddsidemargin{-.35in} \setlength\evensidemargin{-.75in} \begin{document} \begin{center} \small \textbf{MATH 243 : Linear Algebra I} \\ \small Chapter 2.1 \#1-7 odd, 11, 15 \end{center} \begin{flushleft} \begin{enumerate} \item[1. ] Find a row echelon form of each of the given matrices. Record the row operations you perform, using the notation for elementary row operations.\vskip 0.25pc \begin{enumerate} \item $A=\left[\begin{array}{ccc}-1&2&-5\\2&-1&6\\2&-2&7\end{array}\right]$\vskip 0.25pc $\begin{array}{c} \scriptsize_{2R_1+R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}-1&2&-5\\0&3&-4\\2&-2&7\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{2R_1+R_3\rightarrow R_2}\\ \left[\begin{array}{ccc}-1&2&-5\\0&3&-4\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&-2&5\\0&3&-4\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{\frac{1}{3}R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}1&-2&5\\0&1&-\frac{4}{3}\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-2R_2+R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}1&-2&5\\0&1&-\frac{4}{3}\\0&0&-\frac{1}{3}\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-3R_3\rightarrow R_3}\\ 
\left[\begin{array}{ccc}1&-2&5\\0&1&-\frac{4}{3}\\0&0&1\end{array}\right] \end{array}\blacktriangleleft$\vskip 1.5pc \item $A=\left[\begin{array}{ccc}1&1&-1\\3&4&-1\\5&6&-3\\-2&-2&2\end{array}\right]$\vskip 0.25pc $\begin{array}{c} \scriptsize_{2R_1+R_4\rightarrow R_4}\\ \left[\begin{array}{ccc}1&1&-1\\3&4&-1\\5&6&-3\\0&0&0\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-3R_1+R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}1&1&-1\\0&1&2\\5&6&-3\\0&0&0\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-5R_1+R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}1&1&-1\\0&1&2\\0&1&2\\0&0&0\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-R_2+R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}1&1&-1\\0&1&2\\0&0&0\\0&0&0\end{array}\right] \end{array}\blacktriangleleft$ \end{enumerate}\vskip 1.5pc \item[3. ] Each of the given matrices is in row echelon form. Determine its reduced row echelon form. Record the row operations you perform, using the notation for elementary row operations.\vskip 0.5pc \begin{enumerate} \item $A=\left[\begin{array}{ccc}1&2&4\\0&1&-2\\0&0&1\end{array}\right]$\vskip 0.5pc $\begin{array}{c} \scriptsize_{-2R_2+R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&0&8\\0&1&-2\\0&0&1\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{2R_3+R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}1&0&8\\0&1&0\\0&0&1\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-8R_3+R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right] \end{array}\blacktriangleleft$\vskip 1.5pc \item $A=\left[\begin{array}{cccc}1&4&3&5\\0&0&1&-4\\0&0&0&1\\0&0&0&0\end{array}\right]$\vskip 0.5pc $\begin{array}{c} \scriptsize_{-3R_2+R_1\rightarrow R_1}\\ \left[\begin{array}{cccc}1&4&0&17\\0&0&1&-4\\0&0&0&1\\0&0&0&0\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{4R_3+R_2\rightarrow R_2}\\ \left[\begin{array}{cccc}1&4&0&17\\0&0&1&0\\0&0&0&1\\0&0&0&0\end{array}\right] 
\end{array}\implies \begin{array}{c} \scriptsize_{-17R_3+R_1\rightarrow R_1}\\ \left[\begin{array}{cccc}1&4&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0\end{array}\right] \end{array}\blacktriangleleft$\vskip 4.5pc \end{enumerate} \item[5. ] Find the reduced row echelon form of each of the given matrices. Record the row operations you perform, using the notation for elementary row operations.\vskip 0.5pc \begin{enumerate} \item $A=\left[\begin{array}{ccc}-1&2&-5\\2&-1&6\\2&-2&7\end{array}\right]$\vskip 0.5pc $\begin{array}{c} \scriptsize_{2R_1+R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}-1&2&-5\\0&3&-4\\2&-2&7\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{2R_1+R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}-1&2&-5\\0&3&-4\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&-2&5\\0&3&-4\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{\frac{1}{3}R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}1&-2&5\\0&1&-\frac{4}{3}\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{2R_2+R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&0&\frac{7}{3}\\0&1&-\frac{4}{3}\\0&2&-3\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-2R_2+R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}1&0&\frac{7}{3}\\0&1&-\frac{4}{3}\\0&0&-\frac{1}{3}\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-3R_3\rightarrow R_3}\\ \left[\begin{array}{ccc}1&0&\frac{7}{3}\\0&1&-\frac{4}{3}\\0&0&1\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{\frac{4}{3}R_3+R_2\rightarrow R_2}\\ \left[\begin{array}{ccc}1&0&\frac{7}{3}\\0&1&0\\0&0&1\end{array}\right] \end{array}\implies \begin{array}{c} \scriptsize_{-\frac{7}{3}R_3+R_1\rightarrow R_1}\\ \left[\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right] \end{array}\blacktriangleleft$\vskip 1.5pc \end{enumerate} \end{enumerate} \end{flushleft} \end{document} --Chris Attached Files 3. 
I downloaded both of those above. I've inserted the LaTeX into a document. Where can you see a preview of what you've put in the document? I can't even begin the setup with this thing.....

"Enter the full path of the directory, where the executables (latex, tex, etc.) of your TeX-distribution are located:"

Where and what is this?

4. Originally Posted by RedBarchetta

I downloaded both of those above. I've inserted the LaTeX into a document. Where can you see a preview of what you've put in the document? I can't even begin the setup with this thing.....

"Enter the full path of the directory, where the executables (latex, tex, etc.) of your TeX-distribution are located:"

Where and what is this?

Depending on which MiKTeX you downloaded, the path of the directory where the executables are located is:

C:\Program Files\MiKTex *.*\miktex\bin

where *.* represents the version number. In my case, I have MiKTeX 2.7, so my path was

C:\Program Files\MiKTex 2.7\miktex\bin

Then click next, and don't worry about the rest; just continue on until you're done with setup. When it finishes setup, go to the drop-down that says LaTeX => DVI and change it to LaTeX => PDF. Once you start a new document, three buttons in the row that contains the drop-down box become usable. From left to right, they are:

- build document
- view document
- build and view document

I use the last one, so it can compile and output the document in one action. However, if you keep using this button, you need to make sure that the preview (.pdf file) is closed. Otherwise, an error will come up and say it can't overwrite the file. Try to mess around with this a little bit. I'm still new to this, but it doesn't take much time until you catch on!

--Chris
https://www.math.uic.edu/seminars/view_seminar?id=3175
## Departmental Colloquium

Ben Weinkove (Northwestern)

Geometric flows on complex surfaces

Abstract: The Ricci flow has been a powerful tool in the study of three-dimensional manifolds. I will discuss the behavior of this flow on an important class of four-dimensional manifolds: the Kahler surfaces. In addition, I will discuss a flow which generalizes the Kahler-Ricci flow called the Chern-Ricci flow. This is a geometric flow on complex manifolds, recently introduced by M. Gill. I will describe some recent results and conjectures in the case of complex surfaces.

Friday January 31, 2014 at 3:00 PM in SEO 636
https://arxiv.org/abs/1305.2442
astro-ph.HE

# Title: Ondas gravitacionales y objetos compactos (Gravitational waves and compact objects)

Abstract: A brief review on gravitational waves (GWs) is presented. It is shown how the wave equation is obtained from Einstein's equations, how many polarization modes these waves have, and what form they take. The reasons why GW sources should be of astrophysical or cosmological origin are discussed. It is then discussed which sources are the most likely to be detected by the GW detectors currently in operation and those that should be operational in the future, emphasizing in particular the sources involving compact objects. Compact objects such as neutron stars, black holes and binary systems involving compact stars can be important sources of GWs. Last but not least, the GW astrophysics that is already possible to do, in particular involving compact objects, is discussed.

Comments: Invited talk presented at "55a Reunion Anual de la Asociacion Argentina de Astronomia - AAA2012" (Mar del Plata - Argentina - September 2012)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:1305.2442 [astro-ph.HE] (or arXiv:1305.2442v1 [astro-ph.HE] for this version)

## Submission history

From: Jose Carlos N. de Araujo
[v1] Fri, 10 May 2013 21:17:17 GMT (243kb)
https://www.physicsforums.com/threads/help-newtons-3rd-law.245198/
# Help newton's 3rd law

1. Jul 15, 2008

### honeycoh

1.) A 50 kg block is being pulled at a constant speed along a horizontal surface by a horizontal force of 120 N.
Q: What is the coefficient of kinetic friction?

2.) A 100 kg wooden crate is being pushed across a wooden floor with a horizontal force of 350 N.
Q: What is the net force acting on the crate?

3.) A cabinet weighing 400 N is pulled along a horizontal floor at constant speed by a rope which makes an angle of 30 degrees with the floor.
Q: What is the coefficient of sliding friction if the force on the rope is 220 N?

2. Jul 15, 2008

### aniketp

Could you show what solving strategy you have applied here?
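For later readers, here is a sketch of the standard constant-speed setup for problems 1 and 3: at constant speed the net force is zero, so friction balances the horizontal pull. It assumes g = 9.8 m/s² (problem 2 additionally needs a friction coefficient from a table, so it is omitted):

```python
import math

g = 9.8  # m/s^2

# Problem 1: pulled at constant speed -> net force zero,
# so friction f = F and the normal force N = m g.
m, F = 50.0, 120.0
mu_k = F / (m * g)
print(f"1) mu_k = {mu_k:.3f}")   # about 0.245

# Problem 3: the rope's vertical component lifts part of the weight,
# so N = W - T sin(theta); friction balances T cos(theta).
W, T, theta = 400.0, 220.0, math.radians(30)
N = W - T * math.sin(theta)
f = T * math.cos(theta)
print(f"3) mu = {f / N:.3f}")    # about 0.657
```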
https://stats.stackexchange.com/questions/156109/overall-p-value-and-pairwise-p-values
Overall p-value and pairwise p-values?

I have fitted a general linear model $$y=\beta_0+\beta_1x_1+\beta_2x_2+\beta_3x_3,$$ whose log likelihood is $L_u$. Now I wish to test if the coefficients are the same.

• First, the overall test: the log likelihood of the reduced model $y=\beta_0+\beta_1\cdot(x_1+x_2+x_3)$ is $L_r$. By the likelihood ratio test, the full model is significantly better than the reduced one with $p=0.02$.

• Next, $\beta_1=\beta_2$? The reduced model is $y=\beta_0+\beta_1\cdot(x_1+x_2)+\beta_2x_3$. The result is, $\beta_1$ is NOT different from $\beta_2$ with $p=0.15$.

• Similarly, $\beta_1=\beta_3$? They are different with $p=0.007$.

• Finally, $\beta_2=\beta_3$? They are NOT different with $p=0.12$.

This is quite confusing to me, because I expect the overall $p$ to be smaller than $0.007$, since obviously $\beta_1=\beta_2=\beta_3$ is a much stricter criterion than $\beta_1=\beta_3$ (which generates $p=0.007$). That is, since I am already "$0.007$ confident" that $\beta_1=\beta_3$ does not hold, I should be "more confident" that $\beta_1=\beta_2=\beta_3$ does not hold. So my $p$ should go down. Am I testing them wrongly? Otherwise, where am I wrong in the reasoning above?

• I assume x1, x2 and x3 are different levels of a similar factor, dummy coded. Then, I think, such surprising results could arise from differing numbers of independent replicates (= experimental units) in each level. – Rodolphe Jun 16 '15 at 11:05

That is, since I am already "0.007 confident" that $\beta_1=\beta_3$ does not hold, I should be "more confident" that $\beta_1=\beta_2=\beta_3$ does not hold. So my p should go down

Short answer: your likelihood should go down. But here, the p-values do not measure the likelihood, but whether the release of some constraints provides a significant improvement in the likelihood.
That's why it's not necessarily easier to reject $\beta_1=\beta_2=\beta_3$ than to reject $\beta_1=\beta_3$: you need to show a much bigger likelihood improvement over the most constrained model to prove that releasing 2 degrees of freedom to reach the full model was "worth it".

Elaboration: let's draw a graph of likelihood improvements. The only constraint needed to avoid a contradiction is that the likelihood improvement along the direct path must be equal to the sum of the likelihood improvements along the indirect path. That's how I found the p-value for step 1 of the indirect path: $$\frac{L_3}{L_1}=\frac{L_3}{L_2}\times\frac{L_2}{L_1}$$ By likelihood improvements, I mean the log likelihood ratios represented by the $\Delta$chi-squared; that's why they are summed in the graph.

With this schema, one can discard the apparent contradiction because much of the likelihood improvement of the direct path comes from the release of only one degree of freedom ($\beta_1=\beta_3$). I would suggest two factors that can contribute to this pattern.

• $\beta_2$ has a large confidence interval in the full model

• $\beta_2$ is around the mean of $\beta_3$ and $\beta_1$ in the full model

Under these conditions, there is not a big likelihood improvement in releasing one degree of freedom from the $\beta_3=\beta_1=\beta_2$ model to the $\beta_3=\beta_1$ model, because in the latter model the estimate of $\beta_2$ can be close to the two other coefficients. From this analysis and the two other p-values you gave, one could suggest that maybe $\frac{\beta_3+\beta_1}{2}=\beta_2$ would provide a good fit.
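The nested likelihood-ratio tests discussed above can be reproduced on simulated data. The sketch below is illustrative only: it invents data in which $\beta_2$ sits midway between $\beta_1$ and $\beta_3$ (one of the suggested scenarios), fits each restricted model by OLS, and converts the log-likelihood gaps into LRT p-values with closed-form chi-square tail formulas for df = 1 and df = 2.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
# simulated data where beta2 lies midway between beta1 and beta3
y = 1.0 + 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + rng.normal(size=n)

def loglik(X):
    """Maximized Gaussian log-likelihood of an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

ones = np.ones(n)
L_full  = loglik(np.column_stack([ones, x1, x2, x3]))    # free b1, b2, b3
L_12    = loglik(np.column_stack([ones, x1 + x2, x3]))   # constraint b1 = b2
L_equal = loglik(np.column_stack([ones, x1 + x2 + x3]))  # constraint b1 = b2 = b3

# LRT statistic is 2*(L_unrestricted - L_restricted), ~ chi2(df released).
# chi2 survival functions: df=1 -> erfc(sqrt(x/2)); df=2 -> exp(-x/2).
p_12      = math.erfc(math.sqrt(L_full - L_12))   # df = 1
p_overall = math.exp(-(L_full - L_equal))         # df = 2
print(p_overall, p_12)
```

Because the models are nested, the log-likelihoods are automatically ordered ($L_{full} \ge L_{12} \ge L_{equal}$), yet the two p-values need not be ordered the same way — exactly the point of the answer above.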
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335215091705322, "perplexity": 440.2983325195102}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129517.82/warc/CC-MAIN-20200712015556-20200712045556-00362.warc.gz"}
https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Kinetics/Modeling_Reaction_Kinetics/Reaction_Profiles/RK3._Activation_Barriers
# RK3. Activation Barriers

Why do reactions take place at different rates? Why do some happen quickly, and others proceed very slowly? Why might the same reaction proceed at different rates under different conditions? There are a number of factors that influence reaction rates, but this article focuses on the activation barrier.

An activation barrier is a sort of energetic hurdle that a reaction must get over. Some reactions have higher hurdles and some have lower hurdles. It is easier to overcome lower hurdles, so reactions with low activation barriers can proceed more quickly than ones with higher activation barriers:

• A low activation barrier allows a reaction to happen quickly.
• A high activation barrier makes a reaction proceed more slowly.

A reaction can be exergonic overall, but still have an activation barrier at the beginning. Even if the system decreases in energy by the end of the reaction, it generally experiences an initial increase in energy.

• Even if a reaction gives off energy overall, energy must be added initially to get the reaction started.

This situation is similar to investing in a business. A business generally requires a financial investment to get started. If the business is successful, it will eventually make products and pay money back to the investors. If the business is unable to make back its initial investment, it may fail. Reactions require an initial investment of energy. This energy may come from surrounding molecules or the environment in general. If the reaction is successful, it will proceed to make products and it will emit energy back to its surroundings.

• It always "costs" a molecule energy to enter into a reaction; it "borrows" that energy from its environment.
• That initial investment of energy may be "paid back" as the reaction proceeds.

All reactions must overcome activation barriers in order to occur. The activation barrier is the sum of the energy that must be expended to get the reaction going.
An activation barrier is often pictured as a hill the reactants must climb over during the reaction. Once there, the molecule can slide down the other side of the hill to become products. At the top of the hill, the molecule exists in what is called the "transition state." At the transition state, the structure is somewhere between its original form and the structure of the products.

The type of diagram shown above is sometimes called a "reaction progress diagram." It shows energy changes in the system as a reaction proceeds. One or more activation barriers may exist along the reaction pathway, due to various elementary steps in the reaction.

In order to understand more concretely the terms "reaction progress" and "transition state," consider a real reaction. Suppose a nucleophile, such as an acetylide ion, donates its electrons to an electrophilic carbonyl. The π bond breaks and an alkoxide ion is formed.

"Reaction progress" refers to how far the reaction has proceeded. The transition state refers specifically to the highest energy point on the pathway from reactants to products. It refers to the structure at that point, and the energy associated with that structure. In the following diagram, the term "reaction progress" has been replaced by an illustration that matches the status of the reaction with the corresponding point in the energy curve. The structure in the square brackets is the transition state, corresponding to the maximum of the curve. The "double dagger" symbol indicates a transition state structure.

The transition state is not a true chemical structure. It does not necessarily obey the rules of Lewis structures, because some new bonds have started to form and some old bonds have started to break; partial bonds have no place in a Lewis structure. Physically, the transition state structure cannot be isolated. Because it sits at the top of an energy curve, the transition state tends to convert into something else.
A change in either direction will lower its energy. The tendency is to proceed to lowest energy if possible. As soon as the transition state forms, it will either slide back into the original starting materials or slip forward into the final products.

• The transition state is inherently a high-energy, unstable structure with a very short lifetime. As soon as it is formed, it disappears.

##### Problem RK3.1.

Draw the transition state for the following elementary reactions.

More commonly, reaction progress diagrams are not drawn like the one above. Instead, structures of reactants, transition states, and products are shown along the potential energy curve, as shown below.

Not all reactions take place in one step. Sometimes there are one or more intermediate species. An intermediate differs from a transition state in that it has a finite lifetime. Although it is not as stable as the reactants or the products, an intermediate is stable enough that it does not immediately decay. Going either forward to products or back to reactants is energetically uphill.

##### Problem RK3.2.

Draw reaction progress diagrams for the following reactions. Note that the reactions may be composed of more than one elementary step.

### Rate Constant

The rate constant is a measurable parameter that can be used to form an idea about the activation barrier of a reaction. The rate constant for a reaction is related to how quickly the reaction proceeds. A large rate constant corresponds to a very fast reaction. A very small rate constant corresponds to a slow one.

• The rate constant is an index of the speed of the reaction.

Rate constants have different units depending on how the reaction proceeds; a reaction with a "first order" rate constant of 0.001 s⁻¹ would run to completion in about an hour. A reaction with a first order rate constant of 10⁻⁶ s⁻¹ might take a couple of weeks.
The rate constant gives direct insight into what is happening at the transition state, because it is based on the energy difference between the reactants and the transition state. The rate constant can be broken down into pieces. Mathematically, it is often expressed as

$k = \left( \frac{RT}{Nh} \right) e^{-\Delta G / RT}$

in which R = the ideal gas constant, T = temperature, N = Avogadro's number, h = Planck's constant, and ΔG = the free energy of activation.

The ideal gas constant, Planck's constant, and Avogadro's number are all typical constants used in modeling the behavior of molecules or large groups of molecules. The free energy of activation is essentially the energy requirement to get a quantity of molecules to undergo the reaction.

##### Problem RK3.3.

For each of the following pairs, use < or > to indicate which quantity is larger.

a) $e^2$ or $e^{10}$

b) $e^{1/4}$ or $e^{1/2}$

c) $e^{-3}$ or $e^{-4}$

d) $e^{-1/2}$ or $e^{-1/3}$

Note that k depends on two variables:

• ΔG, or the energy required for the reaction
• T, or the temperature of the surroundings, which is an index of how much energy is available

The ratio of activation free energy to temperature compares the energy needed to the energy available. The more energy available compared to the energy needed, the lower this ratio becomes. As a result, the exponential part of the function becomes larger. That makes the rate constant bigger, and the reaction proceeds faster.

Large groups of molecules behave like populations of anything else. They have averages, as well as outliers on the high and low end. However, the higher the temperature, the more energy a group of molecules will have on average. In the following figure, the blue curve represents the energy content in a population of molecules at low temperature. The peak of the curve is near the average energy for this collection of molecules.
Some of the molecules have more energy than average (they are further to the right on the blue curve) and some have less (further to the left). The black slice through this curve indicates how much energy is needed to get over the activation barrier for a particular reaction. Notice that, at low temperature, not that many molecules have enough energy to get over the barrier at any one time. The reaction will proceed very slowly. Nevertheless, more energy is available from the surroundings, and so after some time most of the molecules will have obtained enough energy so that they can eventually surpass the barrier.

The yellow curve represents molecules at a higher temperature, and the red curve is a population at a higher temperature still. As the temperature is increased, larger and larger fractions of the molecules have enough energy to surpass the activation barrier, and so the reaction proceeds more quickly.

• the rate constant compares energy needed to energy available
• based on that comparison, a specific fraction of the population will be able to react at a time

##### Free Energy of Activation

The activation free energy is constant for a given reaction at a given temperature. However, at different temperatures, ΔG changes according to the following relation:

$\Delta G = \Delta H - T \Delta S$

where ΔH = activation enthalpy and ΔS = activation entropy. The activation enthalpy is the element that corresponds most closely with the energy required for the reaction as described. The activation entropy concerns how the energy within the molecule must be redistributed for the reaction to occur.

One of the major factors influencing energy distribution over the course of the reaction is molecular geometry. For example, suppose two molecules must collide for a reaction to take place. However, a reaction might not happen each time the molecules collide. The molecules may be pointing the wrong way, preventing the reaction from occurring.
Atoms must often be oriented in the proper place in order to form a new bond. When molecules are restricted to only certain orientations or geometries, they have fewer degrees of freedom. With fewer degrees of freedom, energy can be stored in fewer ways. As a result, there is often an entropy cost in initiating a reaction.

On the other hand, a reaction might start in a different fashion, with one molecule breaking a bond and dividing into two species. Because each individual piece can move independently from the other, the degrees of freedom increase. Energy can be stored in more ways than it could be before the reaction started. As a result, although this reaction would still have an activation barrier, the barrier may actually be lowered due to the entropy component.

##### Problem RK3.4.

In the following drawings, one orientation of reactants is more likely to lead to the product shown. Select which one will be most successful in each set and explain what is wrong with each of the others.

##### Problem RK3.5.

Because the activation barrier depends partly on the energy needed to break bonds as the molecule heads into the transition state, comparative bond strength can be a useful factor in getting a qualitative feel for relative activation barriers. The metal-carbonyl (M-CO) bond strengths of the coordination complexes M(CO)6 have been estimated via photoacoustic calorimetry and are listed below, by metal.

Cr: 27 kcal/mol    Mo: 32 kcal/mol    W: 33 kcal/mol

1. Based on that information, sketch qualitative activation barriers for the loss of a CO ligand from Cr(CO)6, Mo(CO)6 and W(CO)6.
2. Predict the relative rates for these three reactions (fastest? slowest?).

##### Problem RK3.6.

Comparing the strengths of bonds that will be broken in a reaction is often a good way to get a first estimate of relative activation barriers.

1.
Use the following bond strengths to estimate the barriers to addition of a nucleophile (such as NaBH4) to the following double bonds: C=O (180 kcal/mol); C=N (147 kcal/mol); C=C (145 kcal/mol). Make a sketch of the three reaction progress diagrams.

2. In general, C=O bonds are the most reactive of these three groups toward nucleophiles, followed by C=N bonds. Are these relative barriers consistent with this observation?

3. What other factor(s) might be important in determining the barrier of the reaction?

4. Modify your reaction progress diagram to illustrate these other factors.
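The rate-constant expression introduced above, $k = (RT/Nh)\,e^{-\Delta G/RT}$ with $\Delta G = \Delta H - T\Delta S$, can be evaluated numerically. The sketch below uses made-up activation parameters (the ΔH and ΔS values are illustrative assumptions, not data from this article) to show how raising the temperature increases the rate constant.

```python
import math

R = 8.314       # J/(mol K), ideal gas constant
N = 6.022e23    # 1/mol, Avogadro's number
h = 6.626e-34   # J s, Planck's constant

def rate_constant(dH, dS, T):
    """k = (RT/Nh) * exp(-dG/RT), with dG = dH - T*dS."""
    dG = dH - T * dS
    return (R * T / (N * h)) * math.exp(-dG / (R * T))

# illustrative, made-up activation parameters
dH = 80e3    # J/mol, activation enthalpy
dS = -20.0   # J/(mol K), activation entropy (negative: ordered transition state)

k_cold = rate_constant(dH, dS, 298.0)
k_warm = rate_constant(dH, dS, 348.0)
print(k_cold, k_warm, k_warm / k_cold)
```

With these numbers a 50 K increase raises k by roughly two orders of magnitude, which mirrors the population argument made with the blue/yellow/red curves: more molecules clear the barrier at higher temperature.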
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916382789611816, "perplexity": 981.0705512812398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886815.20/warc/CC-MAIN-20180117043259-20180117063259-00565.warc.gz"}
https://www.physicsforums.com/threads/interpretation-of-an-impulse-graph.716500/
# Interpretation of an impulse graph

• #1
48 0

View attachment 62919

Which of the following is true if the mass is at rest at t=0?

A. The velocity of the mass is decreasing between 10 and 20 seconds.
B. The acceleration of the mass is constant between 0 and 10 seconds.
C. At 40 seconds the mass is stationary.
D. The distance traveled by the mass between 0 and 10 seconds is equal to that traveled between 10 and 20 seconds.

The answer is C. I understand why I can rule out answer B. But I do not see why A and D are not true. From the graph, I see that the force decreases during 10 < t < 20 seconds. Thus the acceleration gets smaller but is still positive, which means that the velocity should still be increasing. However, what I do not understand is what the inflection point at t=10 (and t=30 for that matter) represents... I mean between 0-20 seconds, velocity is always increasing (according to the solution), but then what does this inflection point at t=10 seconds represent? This problem is driving me insane, plz help!
From what you've just said, you can already rule out A. The points at t=10 and t=30 are just the points of maximum acceleration and deceleration respectively. Now you just have to rule out D. I'm not sure what's driving you insane.

• #3
48 0

So between t = 0 and 10 seconds the mass is accelerating, right? And from 10 to 20 sec it is still accelerating but the acceleration is declining... so shouldn't answer A be true?

• #4
Dick
Homework Helper
26,260 619

If it's still accelerating, that means velocity is still increasing, just more slowly. Declining acceleration doesn't mean that the velocity is decreasing. Just as you've already said. Acceleration and velocity are two different things.

• #5
48 0

Wow im such a dope! Thanks for all of ur help!!! :)
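The attached graph isn't available here, so the sketch below invents a hypothetical force profile consistent with the thread's description (positive hump peaking at t=10, zero at t=20, mirror-image negative hump peaking at t=30, zero at t=40) and integrates it numerically. It illustrates why A is false (v still grows between 10 and 20 s) and why C is true (the total impulse over 0–40 s is zero, so v(40)=0). The mass value and curve shape are assumptions, not the problem's actual data.

```python
import numpy as np

m = 1.0                      # kg, arbitrary mass (only scales the numbers)
t = np.linspace(0.0, 40.0, 4001)

# Hypothetical force profile: positive hump peaking at t=10, zero at t=20,
# negative mirror image peaking (in magnitude) at t=30, zero again at t=40.
F = np.sin(np.pi * t / 20.0)

a = F / m
dt = np.diff(t)
# cumulative trapezoidal integration of a(t) gives v(t), with v(0) = 0
v = np.concatenate([[0.0], np.cumsum((a[1:] + a[:-1]) / 2.0 * dt)])

i10, i20, i40 = 1000, 2000, 4000   # indices for t = 10, 20, 40 s
print(v[i10], v[i20], v[i40])
```

The printout shows v(20) > v(10) > 0 (velocity keeps increasing while the force is positive, even as it declines) and v(40) ≈ 0 (the negative impulse from 20–40 s exactly cancels the positive one).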
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932705283164978, "perplexity": 1010.592334586116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00543.warc.gz"}
http://mathoverflow.net/questions/145687/compact-integral-operator
# Compact integral operator

I have a question regarding compact integral operators on $L^{2}(\Omega)$ with $\Omega$ a bounded domain in $\mathbb{R}^{n}$. Suppose we are given $T$ from $L^{2}(\Omega)$ to $L^{2}(\Omega)$ as $Tf(x) = \int_\Omega K(x,y)f(y)\,dy$ with $K(x,y) \in L^{2}(\Omega\times\Omega)$ and $T$ compact. I would like to know how the dimension of the kernel of $T-I$ is bounded by $\|K\|_{L^{2}(\Omega\times\Omega)}^{2}$. In other words, if $m = \dim(N)$, $N = \ker(T-I)$, then I would like to have $m \leq \|K\|_{L^{2}(\Omega\times\Omega)}^{2}$.

Here's what I did so far. Since $T$ is compact, $m$ is finite, and so we can construct an orthonormal basis $f_{i}$, $i = 1,\cdots, m$, for $N$. We also have $\langle f_{i}, f_{j}\rangle = \langle Tf_{i}, Tf_{j}\rangle$ and so $m=\|f_1\|^{2}+\cdots + \|f_{m}\|^{2}\leq m\|K\|_{L^{2}(\Omega\times\Omega)}^{2}$. I want to get rid of the $m$ factor on the right. Any help much appreciated.

- I think you are also assuming Hermitian symmetry $K(x,y)=\overline{K(y,x)}$ (or just $K(x,y)=K(y,x)$ if $K$ is real), otherwise the eigenfunctions need not be orthogonal. Then $K(x,y)=\sum_{j=1}^\infty\lambda_jf_j(x)f_j(y)$ and by Parseval $\|K\|_2^2= \sum_{j=1}^\infty|\lambda_j| ^2\ge m$. – Pietro Majer Oct 24 '13 at 0:20

- No symmetry is needed, Pietro. The Hilbert-Schmidt norm of $T$ is equal to the $L_2$ norm of $K$. – Bill Johnson Oct 24 '13 at 0:51

- I don't say more because this looks like a homework problem. – Bill Johnson Oct 24 '13 at 0:54
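Pietro's Parseval argument has a simple finite-dimensional analogue that can be checked numerically: for a symmetric matrix (standing in for a discretized Hilbert–Schmidt kernel), the squared Frobenius norm equals the sum of squared eigenvalues, so the multiplicity of the eigenvalue 1 can be at most $\|K\|_{HS}^2$. The matrix below is an invented example, not the question's operator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-dimensional stand-in for a discretized Hilbert-Schmidt operator:
# a symmetric matrix K built to have eigenvalue 1 with multiplicity m = 3.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))      # random orthogonal basis
eigs = np.array([1.0, 1.0, 1.0, 0.5, -0.3, 0.2, 0.1, -0.05])
K = Q @ np.diag(eigs) @ Q.T

hs_sq = np.sum(K * K)            # squared Frobenius (= Hilbert-Schmidt) norm
lam = np.linalg.eigvalsh(K)      # real spectrum, since K is symmetric
m = int(np.sum(np.isclose(lam, 1.0)))   # multiplicity of eigenvalue 1

print(hs_sq, m)
```

The check `hs_sq == sum(lam**2)` is exactly Parseval in this setting, and `m <= hs_sq` follows because each of the m unit eigenvalues contributes 1² to the sum.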
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9667874574661255, "perplexity": 101.64044310732191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829839.93/warc/CC-MAIN-20140820021349-00324-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/linear-algebra-final.75300/
# Linear Algebra final

1. May 11, 2005

### Gale

ok, so here's the deal. The final was a take home, and an intended "learning tool." meaning, 2 weeks ago he threw us a packet of notes, said class was over, and good luck with the final. I hate this prof... somehow he's well respected in the dept... anyways, he doesn't teach us at all, this whole semester has basically been self taught, but whatever... the final's not too hard... just... weird... i hate his tests...

So, i'll write the questions, write what i have, and hopefully you all will help. Also, he wants us to type these all up, and i'm not sure how to go about that... i dunno how to type matrices in word... so any advice as to how to do that is appreciated as well...

Q1) explain why, if a is an eigenvalue of the nxn matrix A, ie, a root of f(t) = |A-tI|, there is an $$X \neq 0$$ in $$R_{n}$$ with AX = aX.

A) if a is an eigenvector of A then by definition it has a corresponding eigenvector $$\xi$$, not equal to zero, where $$A\xi= a\xi$$ therefore $$X=\xi$$ and X is an eigenvector of A with a corresponding eigenvalue of a. (i'm not sure if that proves "exactly when" do i need to show the reverse or something???)

Q2) Show that if U is an nxn matrix, then (UX,UY) = (X,Y) exactly when the columns of U form an orthagonal set.

A) $$(UX,UY)= (UX)^T (UY) = X^T U^T UY = X^T (U^T U) Y= X^T Y = (X,Y)$$ where $$U^T U= I$$ because U is orthagonal, and therefore symmetric.

Q3) Let C be an orthagonal nxn matrix (as in 2) and A an nxn matrix. Show that A is symmetric exactly when $$C^{-1} AC$$ is symmetric.

A) $$(C^{-1} AC)^T = (C^{-1} (AC))^T = (AC)^T (C^{-1})^T = C^T A^T (C^{-1})^T = C^{-1} A^T C$$ because $$C^T = C^{-1}$$ because C is orthagonal. Therefore $$(C^{-1} AC)^T$$ is symmetric when $$A= A^T$$, when A is symmetric.

Q4) Let A be an upper triangular matrix with all diagonal entries equal. Show that if A is not a diagonal matrix, then A cannot be diagonalized.

A) A = aI + B... (i dunno what diagonal means...
so i'm not really sure what to do here... help)

Q5) Show that if A is a symmetric nxn matrix and Y and X are eigenvectors of A belonging to different eigenvalues, then (X,Y)=0

A) $$AX=\lambda X$$ and $$AY=\mu Y$$ where $$\lambda \neq \mu$$

$$\lambda (X,Y) = (\lambda X,Y) = (AX,Y) = (X,AY) = (X,\mu Y) = \mu (X,Y)$$

and since $$\mu \neq \lambda$$ then we must have $$(X,Y)=0$$

(ok, i pretty much copied that directly from those notes he gave us... not sure if its right... or if it is, why he'd put it on the final... the guy makes no sense...)

Q6) Let A be the matrix $$\left(\begin{array}{cccc}0&0&0&-1\\0&1&\sqrt{2}&0\\0&\sqrt{2}&1&0\\-1&0&0&-1\end{array}\right)$$ find an orthagonal matrix U so that $$U^{-1} AU$$ is a diagonal matrix.

A) ok... this one took a page and a half of work just to find one eigenvector... i have to find 4 i believe. The first eigenvector i got was $$\left(\begin{array}{c}0\\1\\1\\0\end{array}\right)$$ if someone could check that... that'd be cool. After i find the eigenvectors i... umm... have no idea what i do actually... combine them into one matrix? is that U? not too sure.... anyways, if its necessary i'll show more of my work, i'm just not having fun with all this tex right now...

Q7) Let A be the matrix $$\left(\begin{array}{ccc}1&0&-1\\0&2&0\\-1&0&1\end{array}\right)$$

i) find the distinct eigenvalues $$\lambda$$ of A and their multiplicities
ii) Determine the dimensions of the eigenspaces $$N(A- \lambda I)$$
iii) find orthonormal bases of these eigenspaces
iv) Combine these bases into one orthonormal basis B of $$R^3$$ and verify that the matrix of A relative to B is a diagonal matrix with entries the eigenvalues of A, each repeated as many times as its multiplicity

A) ok $$\lambda_{1} = 0,\ \lambda_{2,3} = 2$$ my first eigenvector is $$\xi_{1} =\left(\begin{array}{c}1\\0\\1\end{array}\right)$$ so the dimension of that is 1? the other two eigenvectors i'm not sure about.
I get $$\left(\begin{array}{cc}-1&-1\\0&0\\1&1\end{array}\right)$$ i'm not even sure if thats the right notation... or if the numbers are right, cause i had an undefined variable. but i wrote it this way so it'd have a dimension of 2... cause our teacher said there was one dim 1 and one dim 2.

after that, i'm not really sure what to do, because we never went over orthonormal anything... and i dunno how to find bases well... its a mess... i'll have to sit with my notes for a while in order to understand this at all

but anyways, thanks for any help, or just checking my answers. And if you have an idea as to how i can type this all up, that'd be awesome. (maybe i can even print my work straight off pf? cause i dunno how else to type matrices....)

~gale~

Last edited: May 11, 2005

2. May 11, 2005

### mathwonk

my copy of word has a program called equation editor that has matrices. but i somehow feel constrained from helping you with your final.

but in the first question you are confused as to what he is asking. i.e. you are taking a different definition of eigenvalue from the one given in the problem. probably the problem should ask: "show that any root of the characteristic polynomial is an eigenvalue." the way you are interpreting it, it is asking the tautological question: "show that any eigenvalue is an eigenvalue." that is pointless.

in question 2 you are wrong that an orthogonal matrix is symmetric, but your solution is correct otherwise, since $$A^T A = I$$ is true for an orthogonal matrix.

but anyway, for this you need to know that a matrix is "diagonal" if the only possible non zero entries are on the main diagonal, and this can be achieved if and only if each root r of the characteristic polynomial has algebraic multiplicity, as a root, equal to the dimension of the nullspace of A-rI.

it also sounds as if you need to read the book a little bit more, if you do not know this fundamental stuff. i am going to leave you to do that now.

3.
May 11, 2005

### mathwonk

here's a t-shirt i wore to class last week: (imagine it centered and in a fancy font)

And God said: LET ALL SYMMETRIC MATRICES BE "ORTHOGONALLY DIAGONALIZABLE" (and vice versa)

to wit: $$A = A^T$$ IF and ONLY IF there is an ORTHONORMAL BASIS of EIGENVECTORS for A

4. May 11, 2005

### Gale

hmm well i realised my typo in problem two, so ya. problem one... i'll look at again, cause you were right, i misinterpreted. problem 4, i figured out what diagonal meant on my own. And i'm working on it. mostly though, i can't get 10 and 11. in problem 10 i keep getting zero vectors for eigenvectors. 11 i don't understand well, and will have to read more, yes.

By the way, i don't have a book. He doesn't issue books for the class. He hands us his notes, which are all implicitly written, with nearly no examples or plain english, and then in class all he does is rewrite them on the board. Its been a very, VERY hard class to learn anything from. but thanks for adding to my frustration... i'm well aware of my weak grasp of linear algebra, and i've done my best. grr...

5. May 11, 2005

### mathwonk

have you noticed you have a tendency to blame other people for your difficulties, first your teacher, now me?

if you need a book, the one by ruslan sharipov mentioned in another thread here is free and excellent. the very short one on the following webpage is also free. http://www.math.uga.edu/~roy/

by the way 10 and 11 do not seem to be there.

6. May 11, 2005

### Gale

my professor sucks... you suck... it frustrates me... meh... anyways, by 10 and 11 i meant 6 and 7, they're 10 and 11 on my paper, but we only had to choose so many problems.... whatever, 6 and 7 are giving me problems. specifically 10. Those were the two "learning" problems. He didn't teach us anything about them, but figured he'd hand us the notes, and if we could figure them out, we'd learn.... whatever... the semester is bout over, i don't think i'll get a textbook now actually. thanks though... again...
for the excellent help. i'll post my work for 4 a bit later. if anyone could figure out the eigenvectors for 10, or tell me if i'm doing 11 right, that'd be cool. thanks... 7. May 11, 2005 ### mathwonk have you noticed i am the only person helping you here? your people skills could use work. solve your own problems. Last edited: May 11, 2005 8. May 12, 2005 ### Gale ok ok, i'm sorry.... PMS.... but thanks. I've just been working on this stuff for too many hours and i think you were a bit unfair... anyways, it'd be a huge help if someone could show me how to get something decent for an answer to problem 10. I also worked more on 11, but i don't know if what i posted here was right to begin with... 9. May 12, 2005 ### SpaceTiger Staff Emeritus You are not the only one helping her, you're the only one helping her on PF. Gale is doing her best with what she has. I think your accusations are extremely unfair and your posts unnecessarily condescending. If you don't want to help, just don't respond. 10. May 12, 2005 ### Don Aman If some matrix is invertible, then it returns a nonzero vector for every vector of any basis. Conversely, if a matrix is singular, then there is some basis for which there is some vector which is sent to zero by the matrix. Can you see how to use that to answer the question? there are some problems with your answer. Firstly, I assume it was a typo when you said "a is an eigenvector". In fact, a is an eigenvalue. We don't know if this guy has an eigenvector yet (remember that there may be more eigenvalues than eigenvectors, so we can't be sure that the matrix has any eigenvectors. that's what we have to prove). secondly, the definition of eigenvalue that we're using is: root of the secular polynomial, not solution to the eigenvalue equation. so we do not have an eigenvector "by definition" as you claim. "exactly when" means you have to show two things: first show that if (UX,UY)=(X,Y), then the columns of U are orthogonal (spelling!). 
then you have to show that if the columns of U form an orthogonal set, then (UX,UY)=(X,Y). you have shown that U^TU=1 implies (UX,UY)=(X,Y), which is true, and useful, but not quite the answer that we need. recognize that the i-th column vector of U is given by U(ei) where ei is the i-th vector in the (orthonormal) basis. and orthonormal bases satisfy (ei,ej)=1 if i=j, 0 if i !=j remember to do the implication in both directions Yes, very good. Remember that "exactly when" means that you have to prove two implications, although in this case, I think every step of the argument you made is reversible. still, make sure you know what is meant by that. Diagonal means having nonzero entries only on the diagonal of the matrix. Like, the matrix 1 0 0 0 2 0 0 0 3 is diagonal, but the matrix 1 2 3 0 4 0 0 0 5 is not, because that 2 and 3 aren't on the diagonal. If a matrix can be diagonalized (meaning there is some change of basis which brings the matrix to diagonal form), then the diagonal entries will be the eigenvalues of the matrix. Use that fact, along with the fact that the determinant of an upper triangular matrix is the product of its diagonal elements, and you can get this one. yeah Hoh boy. Diagonalizing matrices can be a bunch of calculation. Tell you what I'll do: I'll plug this guy into mathematica, and see what the answer is and tell you if you're right. You'll still have to do the calculation yourself. By the way, that calculation is as follows: 1. get the secular polynomial det(M-xI)=0, and find its roots. 2. for each root x_i, find the null space of M-x_i I 3. choose an orthonormal basis for each null space. 4. those vectors go in the columns of U 5. and the eigenvalues go in the diagonal of the resulting matrix and the answer is... yes! the vector you list is indeed one of the eigenvectors, corresponding to eigenvalue 1+sqrt(2). 3 more to go! Again? didn't we just diagonalize a matrix in the last problem? Ugh.
Well this one is only 3x3, so it won't be as bad, so I guess I'll do it by hand, so I can check you step by step. OK, actually, it wasn't bad. lots of zeros. yes. yes that notation is ****ed up. is that supposed to be a 3x2 matrix or is it supposed to be 2 vectors? anyway, the column of that thing is the correct answer for one of the eigenvectors corresponding to eigenvalue 2. there is another one. what is an undefined variable? What happened? You should definitely be able to get 2 eigenvectors here. once you have the 3 eigenvectors, it'll be easy to make them orthonormal, they're almost orthonormal already. and then you're done. type it in LaTeX! msword sucks for math documents. 11. May 12, 2005 ### mathwonk space tiger, when you ask for free advice, sometimes you get true advice that you don't want. especially likely if you start out trashing people and continue by saying they "suck". i believe discontinuing such behavior would indeed be helpful. by the way it is your remarks about gale that are condescending. i think gale is capable of better. Last edited: May 12, 2005 12. May 12, 2005 ### Gale So here's what i've redone: Q1) umm.... not really. if A has an eigenvalue, a, does that mean A's invertible? i guess if thats true, then i sorta see what you're saying... I'm looking back through my notes... it says this "Let T:V-->V be a linear transformation. A non-zero vector X in V is called an eigenvector of T with eigenvalue $$\lambda$$ (a scalar) if $$TX=\lambda X$$ The eigenvalue $$\lambda$$ may be zero, but by definition, an eigenvector is not zero. We say that $$\lambda$$ is an eigenvalue of T if there is a non-zero vector X in V with $$TX=\lambda X$$" So thats where i think i was initially going off of... trying to find something like what you're talking about.... i find this...
"If $$P(t) = a_{0} + a_{1}t+...+a_{r}t^r$$ is a polynomial in the variable t, and X is an eigenvector of T with eigenvalue $$\lambda$$, then X is an eigenvector of $$P(T) = a_{0}I + a_{1}T+...+a_{r}T^r$$ with eigenvalue $$P(\lambda)$$, and if T is invertible then $$\lambda\neq0$$ and X is an eigenvector of $$T^{-1}$$ with eigenvalue $$\frac{1}{\lambda}$$ the non zero vectors in the nullspace $$N(T-\lambda I)$$ are the eigenvectors of T with the eigenvalue $$\lambda$$. But we know that $$N(T-\lambda I)$$ contains a non-zero vector exactly when $$|T-\lambda I|=0$$ But we also know that if V has dimension n, then $$|T-\lambda I|$$ is a polynomial of degree n in the unknown $$\lambda$$, called the characteristic polynomial of T." i'm gonna sit and try to make sense of that for a while. i'm really not sure what i'm sposed to do. Q2 i'm still thinking about question one... i kinda made some sense of what you said... but i'll get back to you on that. Q3 i think i'm all set on this one, i redid everything backwards, we're good. Q4 ok, here i go.... Since A is upper triangular, you can write it as A=aI+B, if B is a strictly upper triangular matrix. We know that $$C^{-1}AC$$ will be a diagonal matrix. so, $$D=C^{-1}AC=aI+C^{-1}BC$$ (ok, so i don't actually know why we know that first thing has to be diagonal, and i'm not sure why the aI doesn't get multiplied by C and its inverse... but my prof said this was all right...) so, then $$D-aI=C^{-1}BC$$. A diagonal matrix summed with another diagonal matrix will still be a diagonal matrix, therefore, $$C^{-1}BC$$ must also be diagonal. now, because B is strictly upper triangular we know that $$B^n=0$$ therefore $$C^{-1}B^nC=(C^{-1}BC)^n=0=D-aI$$ therefore $$D=aI=C^{-1}AC$$ ...ok so now this is supposed to prove something? i think i lost myself.... Q5 we're good... Q6 Ok, so here are all the eigenvectors...
$$\left(\begin{array}{c}0\\1\\1\\0\end{array}\right) \left(\begin{array}{c}0\\1\\-1\\0\end{array}\right) \left(\begin{array}{c}1\\0\\0\\\frac{-1+\sqrt{5}}{2}\end{array}\right)\left(\begin{array}{c}1\\0\\0\\\frac{-1-\sqrt{5}}{2}\end{array}\right)$$ ok, so now i make them unit vectors, yes? umm... i can't find anything about doing that in my notes... so i'll assume its just pythagoras. so my new results... (this gets messy, but i did it so you could check my work, more easily...) $$\left(\begin{array}{c}0\\\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}\\0\end{array}\right) \left(\begin{array}{c}0\\\frac{1}{\sqrt{2}}\\\frac{-1}{\sqrt{2}}\\0\end{array}\right) \left(\begin{array}{c}\frac{1}{\sqrt{1+(\frac{-1+\sqrt{5}}{2})^2}}\\0\\0\\\frac{\frac{-1+\sqrt{5}}{2}}{\sqrt{1+(\frac{-1+\sqrt{5}}{2})^2}}\end{array}\right) \left(\begin{array}{c}\frac{1}{\sqrt{1+(\frac{-1-\sqrt{5}}{2})^2}}\\0\\0\\\frac{\frac{-1-\sqrt{5}}{2}}{\sqrt{1+(\frac{-1-\sqrt{5}}{2})^2}}\end{array}\right)$$ OOOk... so i have all that now, i just put them all in a matrix together, call it U, and i'm done? Q7 ok, so ya, the way i did that last matrix was wrong. I wasn't sure how to find the last eigenvector... i guess i just take that last one, and find one orthogonal to it? So then the last vector i had, and the new one i'll get form together to create an eigenspace of dimension 2. ALRIGHT... hmm.. well, i still don't think i understand parts iii) and iv) then. i take my eigenvectors and make them unit vectors, and thats the orthonormal bases? i guess thats part iii) and then i combine those... and i do what? i dunno what it means by combine them, and i really don't get the rest of part iv). he wouldn't even explain it in class cause he says "thats where it really tests your understanding..." ugh. i obviously don't understand, but i'm trying and augh. Last edited: May 12, 2005 13. May 12, 2005 ### Gale By the way, thanks a ton Don Aman, really, thanks. and thanks spacetiger. Math wonk...
if you want to help, that'd be awesome. you were definitely condescending, and i'm admittedly already very frustrated. it certainly doesn't help me to be antagonistic. i dunno what you mean by "i think gale is capable of better," but i think i'm trying pretty hard actually. ANYways, how do i use LaTeX? isn't that an online thing? heh, awesome, mines not even showing up in my last post... arg Last edited: May 12, 2005 14. May 12, 2005 ### shmoe Try a package like MiKTeX. I'm hesitant to offer much help on a take home exam, so I'll just say a couple of things: Q1) "if A has an eigenvalue, a, does that mean A's invertible?" No it doesn't. You should be able to find a non-invertible matrix that has an eigenvalue. Think about how a non-zero vector in $$N(T-\lambda I)$$ relates to this question (you should also know why there is one). Q4) "(ok, so i don't actually know why we know that first thing has to be diagonal, and i'm not sure why the aI doesn't get multiplied by C and its inverse... but my prof said this was all right...)" It does, perform the multiplication $$C^{-1}(aI+B)C$$ carefully. What are the possibilities for the diagonal entries of D? Combined with $$D=C^{-1}AC$$ what does this tell you about A? 15. May 12, 2005 ### Gale Oh, and btw, my professor encourages us to get help, we just have to list our resources. I'm afraid i won't list off everyone's name who helps me, but i'll write down physicsforums. He doesn't teach us like.... anything... of course its expected that we go get help. Normally, i work with my partner, but she's been really busy, and we haven't been able to meet up, so we're going solo... but only help me as much as is comfortable... i'm happy with whatever i can get. Anyways... that first problem is supposed to be simple, and it probably is... but its just not clicking for me at all... question 4: i get that multiplication thing... god i was dense... that was simple... the scalar a commutes, as does I... somehow, i just didn't get that.
the diags of D are all a. i assume it tells me that A must be diagonal... seeing as thats what i'm trying to prove... but i don't quite get it. lets see $$CDC^{-1}=A$$ if D's a bunch of zeros off the diagonal... they'll stay zeros through the multiplication, and therefore A must be diagonal to begin with? 16. May 12, 2005 ### shmoe You know how to do this. Think about what D looks like and how this is related to the first quote in this post. 17. May 12, 2005 ### Gale ok, i have this written in my notes for diff eq, i just realized its the answer for problem one... (ok ok, i didn't realize it so much as someone was quickly explaining problem one to me, and it sounded too familiar...) i can't quite make sense of it though... $$\lambda$$ is an eigenvalue with eigenvector $$\xi\neq0$$ if $$A\xi=\lambda\xi$$ to find an eigenvalue, use the fact that a matrix is not invertible if its determinant equals 0. we have $$A\xi-\lambda\xi=0$$ which goes to $$(A-\lambda I)\xi=0$$ so if $$|A-\lambda I|\neq0$$ then $$A-\lambda I$$ is invertible. So then, $$(A-\lambda I)^{-1}(A-\lambda I)\xi=(A-\lambda I)^{-1}0$$ and... $$I\xi=0 ; \xi=0$$ which contradicts our initial statement so, $$|A-\lambda I|=0$$ ok, i follow the math just fine. what i've said is... um... if there's a value a such that the determinant is zero... then there must be a non zero vector $$\xi$$ or X...? i'm just not quite getting it... 18. May 12, 2005 ### The Reverend BigBoa Gale, here is another decent free "e-book" on linear algebra. http://home.comcast.net/~bigboa/linear.htm [Broken] Last edited by a moderator: May 2, 2017 19. May 14, 2005 ### HallsofIvy Staff Emeritus ". I hate this prof... somehow he's well respected in the dept...
anyways, he doesn't teach us at all, this whole semester has basically been self taught" Actually, I suspect it is not at all unusual for a professor who expects students to think for themselves and recognizes that the most important thing he can teach them is how to learn on their own is both well respected by his colleagues and hated by his students (or at least by the poorer students)! 20. May 14, 2005 ### Gale aye... i don't want to get into a convo about my professor... but look... i'm a smart girl, and well enough equipped to study things on my own. i pay to have a teacher though... and i expected one. Instead, i got a verbal copy of my notes. that's just not teaching. mostly, i think he's just way too old to still be a professor. if we asked a question, he would just get really scattered and confused, and then just rewrite the notes on the board. anyways, for anyone who cares. I finished up the exam already. i'm not sure how well i did cause i had trouble typing it. but thanks for any help.
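The diagonalization recipe discussed in this thread — find the roots of the characteristic polynomial, take an orthonormal basis of each null space, and put those vectors in the columns of U — can be sketched numerically. A minimal check with NumPy, using a small symmetric matrix invented for illustration (not one of the exam matrices, which aren't reproduced here):

```python
import numpy as np

# A symmetric matrix (A equals its transpose), so by the spectral
# theorem it has an orthonormal basis of eigenvectors.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is for symmetric/Hermitian matrices: w holds the eigenvalues
# in ascending order, and the columns of U are orthonormal eigenvectors.
w, U = np.linalg.eigh(A)

# U is orthogonal: U^T U = I ...
assert np.allclose(U.T @ U, np.eye(3))

# ... and U^T A U is the diagonal matrix of eigenvalues.
assert np.allclose(U.T @ A @ U, np.diag(w))

print(np.round(w, 6))  # → [1. 3. 3.]
```

This is exactly the "orthogonally diagonalizable" statement from the t-shirt: A = Aᵀ if and only if such a U exists.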
http://mathhelpforum.com/geometry/128566-sss-sas.html
# Thread: sss or sas ? 1. ## sss or sas ? In the triangle ABC AC = BC = 20 inches and AB = 10 inches. Circle is inscribed in the triangle, what is the radius of the circle...see attached...#71 Attached Files 2. Where are you stuck? 3. well i'm pretty sure the one radius pointing down is the midpoint of AB so those 2 segments are 5 inches, then i bisected the 3 angles in the circle to figure out that Ar and Br should be 5 inches each. i dont know what to do next 4. Originally Posted by igottaquestion In the triangle ABC AC = BC = 20 inches and AB = 10 inches. Circle is inscribed in the triangle, what is the radius of the circle...see attached...#71 sketch lines from each vertex to the incenter. note that the area of the three triangles formed sums to the area of the large triangle ... $\frac{1}{2}(20)r + \frac{1}{2}(20)r + \frac{1}{2}(10)r = \frac{1}{2}(10)h$ $25r = 5h$ $r = \frac{h}{5}$ find the height of the triangle from the vertex angle to the base and you can find r 5. ok so the height is 5 sq rt of 15, so r=sq rt of 15? 6. That's it! 7. Well...let equal sides = a and base = b radius incircle = bSQRT(4a^2 - b^2) / [2(2a + b)] OR (if you prefer the SQRT in denominator!): radius incircle = b(2a - b) / [2SQRT(4a^2 - b^2)] No need to calculate height. 8. Originally Posted by igottaquestion In the triangle ABC AC = BC = 20 inches and AB = 10 inches. Circle is inscribed in the triangle, what is the radius of the circle...see attached...#71 1. You are dealing with 2 right triangles. (see attachment) 2. Grey triangle: $h^2 + 5^2 = 20^2$ 3. The right triangle at the top of the grey triangle: $r^2+15^2 = (h-r)^2$ 4. Calculate h from 2. and plug in this term into the equation in 3. Solve for r. Spoiler: You should come out with $r = \sqrt{15}$ Second attempt: The 2 right triangles are similar. So use proportions: $\dfrac r{15} = \dfrac5h$ First calculate h, then r using the proportion. Attached Thumbnails
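The two solutions above (the area decomposition and the closed-form incircle formula) can be cross-checked numerically. A small Python sketch with this problem's numbers (a = 20, b = 10; variable names are mine, not from the thread):

```python
from math import sqrt, isclose

a, b = 20.0, 10.0                # equal sides and base of the isosceles triangle

# Height from the apex to the midpoint of the base: h^2 + (b/2)^2 = a^2
h = sqrt(a**2 - (b/2)**2)        # 5*sqrt(15)

# Area decomposition: the three sub-triangles from the incenter give
# area = r * s, where s is the semiperimeter, so r = area / s.
s = (2*a + b) / 2
r_area = (0.5 * b * h) / s

# Closed form quoted in the thread: r = b*sqrt(4a^2 - b^2) / (2(2a + b))
r_formula = b * sqrt(4*a**2 - b**2) / (2 * (2*a + b))

assert isclose(r_area, r_formula)
print(r_area, sqrt(15))          # both ≈ 3.872983...
```

Both routes give r = √15, agreeing with the proportion argument in the last post.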
https://access.openupresources.org/curricula/our6-8math-v1/8/students/7/3.html
# Lesson 3: Powers of Powers of 10 Let's look at powers of powers of 10. ## 3.1: Big Cube What is the volume of a giant cube that measures 10,000 km on each side? ## 3.2: Taking Powers of Powers of 10 1. Complete the table to explore patterns in the exponents when raising a power of 10 to a power. You may skip a single box in the table, but if you do, be prepared to explain why you skipped it. Row 1 expression expanded single power of 10 Row 2 $(10^3)^2$ $(10 \boldcdot 10 \boldcdot 10)(10 \boldcdot 10 \boldcdot 10)$ $10^6$ Row 3 $(10^2)^5$ $(10 \boldcdot 10)(10 \boldcdot 10)(10 \boldcdot 10)(10 \boldcdot 10)(10 \boldcdot 10)$ Row 4   $(10 \boldcdot 10 \boldcdot 10)(10 \boldcdot 10 \boldcdot 10)(10 \boldcdot 10 \boldcdot 10)(10 \boldcdot 10 \boldcdot 10)$ Row 5 $(10^4)^2$ Row 6 $(10^8)^{11}$ 2. If you chose to skip one entry in the table, which entry did you skip? Why? 1. Use the patterns you found in the table to rewrite $\left(10^m\right)^n$ as an equivalent expression with a single exponent, like $10^{\boxed{\phantom{3}}}$. 2. If you took the amount of oil consumed in 2 months in 2013 worldwide, you could make a cube of oil that measures $10^3$ meters on each side. How many cubic meters of oil is this? Do you think this would be enough to fill a pond, a lake, or an ocean? ## 3.3: How Do the Rules Work? Andre and Elena want to write $10^2 \boldcdot 10^2 \boldcdot 10^2$ with a single exponent. • Andre says, “When you multiply powers with the same base, it just means you add the exponents, so $10^2 \boldcdot 10^2 \boldcdot 10^2 = 10^{2+2+2} = 10^6$.” • Elena says, “$10^2$ is multiplied by itself 3 times, so $10^2 \boldcdot 10^2 \boldcdot 10^2 = (10^2)^3 = 10^{2+3} = 10^5$.” Do you agree with either of them? Explain your reasoning. ## Summary In this lesson, we developed a rule for taking a power of 10 to another power: Taking a power of 10 and raising it to another power is the same as multiplying the exponents. 
See what happens when raising $10^4$ to the power of 3. $$\left(10^4\right)^3 =10^4 \boldcdot 10^4 \boldcdot 10^4 = 10^{12}$$ This works for any power of powers of 10. For example, $\left(10^{6}\right)^{11} = 10^{66}$. This is another rule that will make it easier to work with and make sense of expressions with exponents.
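Since the rule is about integer exponents, it can be spot-checked directly with integer arithmetic; a quick sketch in Python:

```python
# Check (10^m)^n == 10^(m*n) for the exponent pairs from the table.
for m, n in [(3, 2), (2, 5), (3, 4), (4, 2), (8, 11)]:
    assert (10**m)**n == 10**(m * n)

# The worked example: (10^4)^3 = 10^12
print((10**4)**3 == 10**12)  # → True
```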
http://export.arxiv.org/abs/2211.12795
math.FA # Title: Kernels of operators on certain Banach spaces associated with almost disjoint families Abstract: Given an infinite set $\Gamma$ and an almost disjoint family $\mathcal{A}$ on $\Gamma$, let $Y_{\mathcal{A}}$ denote the closed subspace of $\ell_\infty(\Gamma)$ spanned by the indicator functions $1_{\bigcap_{j=1}^n A_j}$ for $n\in\mathbb{N}$ and $A_1,\ldots,A_n\in\mathcal{A}$. We show that if $\mathcal{A}$ has cardinality greater than $\Gamma$, then $Y_{\mathcal{A}}$ contains closed subspaces which cannot be realised as the kernels of any bounded operators $Y_{\mathcal{A}}\rightarrow \ell_{\infty}(\Gamma)$. Consequently the spaces $\ell_{\infty}(\Gamma)$, for any infinite set $\Gamma$, and $C_0(K_{\mathcal{A}})$, where $\mathcal{A}$ is an uncountable almost disjoint family on $\mathbb{N}$ and $K_{\mathcal{A}}$ the locally compact Mr\'{o}wka space associated with $\mathcal{A}$, contain closed subspaces which do not arise as the kernels of any bounded operators on them. Comments: 10 pages. Submitted Subjects: Functional Analysis (math.FA) MSC classes: Primary 46B26, 46E15, 47B01, Secondary 18A30, 47B38 Cite as: arXiv:2211.12795 [math.FA] (or arXiv:2211.12795v1 [math.FA] for this version) ## Submission history From: Bence Horváth [v1] Wed, 23 Nov 2022 09:20:24 GMT (12kb)
https://excellup.com/ClassEight/matheight/numberexone1.aspx
# Rational Numbers ## NCERT Exercise 1.1 (Part 2) Question 3: Verify that -(-x)=x for: (i) x=(11)/(15) Solution: Given, x=(11)/(15) The additive inverse of x=(11)/(15) is -x=(-11)/(15) Similarly, the additive inverse of (-11)/(15) is (11)/(15) Or, -((-11)/(15))=(11)/(15) Or, -(-x)=x proved (ii) x=-(13)/(17) Solution: Given, x=-(13)/(17) The additive inverse of x=-(13)/(17) is -x=(13)/(17) Similarly, the additive inverse of (13)/(17) is -(13)/(17) Or, -(13)/(17)+(13)/(17)=0 Or, -(-x)=x proved Question 4: Find the multiplicative inverse of the following (i) -13 Solution: We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of -13 is equal to (1)/(-13) (ii) (-13)/(19) Solution: We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of (-13)/(19) is equal to (19)/(-13) (iii) 1/5 Solution: We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of 1/5 is equal to 5 (iv) -5/8xx(-3)/(7) Solution: Given, -5/8xx(-3)/(7) =((-5)xx(-3))/(8xx7)=(15)/(56) We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of (15)/(56) is equal to (56)/(15) (v) -1xx(-2)/(5) Solution: Given, -1xx(-2)/(5)=2/5 We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of 2/5 is equal to 5/2 (vi) -1 Solution: We know that multiplicative inverse of a number is reciprocal of the number. Thus, multiplicative inverse of -1 is equal to (1)/(-1) or -1 Alternate Method: The product of a number and its multiplicative inverse is equal to 1 Here, -1xx-1=1 Thus, multiplicative inverse of -1 is -1 Question 5: Name the property under multiplication used in each of the following. (i) (-4)/(5)xx1=1xx(-4)/(5)=-4/5 Solution: Here, 1 is the multiplicative identity. Thus, property of multiplicative identity is used. 
(ii) -(13)/(17)xx(-2)/(7)=(-2)/(7)xx(-13)/(17) Solution: Here, multiplicative commutativity is used. (iii) (-19)/(29)xx(29)/(-19)=1 Solution: Since the product of the given numbers is 1, (29)/(-19) is the multiplicative inverse of (-19)/(29) Thus, property of multiplicative inverse is used. Question 6: Multiply (6)/(13) by the reciprocal of (-7)/(16) Solution: Reciprocal of (-7)/(16) is (16)/(-7) So, (6)/(13)xx(16)/(-7) =(6xx16)/(13xx(-7))=(96)/(-91) Question 7: Tell what property allows you to compute 1/3xx(6xx4/3) as (1/3xx6)xx4/3 Solution: The property of associativity Question 8: Is 8/9 the multiplicative inverse of -1\1/8? Why or why not? Solution: -1\1/8=-9/8 Since, 8/9xx(-9)/(8)=-1≠1 So, -1\1/8 is not the multiplicative inverse of 8/9 Question 9: Is 0.3 the multiplicative inverse of 3\1/3 Why or why not? Solution: 0.3=(3)/(10) The multiplicative inverse of (3)/(10) is (10)/(3)=3\1/3 Thus, 3\1/3 is the multiplicative inverse of 0.3. Question 10: Write (i) The rational number that does not have a reciprocal. Solution: 0 (zero) is the rational number which does not have a reciprocal. (ii) The rational numbers that are equal to their reciprocals. Solution: 1 and – 1 are the rational numbers which are equal to their reciprocals. (iii) The rational number that is equal to its negative. Solution: 0 (zero) is the rational number which is equal to its negative. Question 11: Fill in the blanks: (i) Zero has __________ reciprocal. Solution: no (ii) The numbers ________ and ________ are their own reciprocals. Solution: 1 and – 1 (iii) The reciprocal of – 5 is _____________. Solution: (1)/(-5) (iv) Reciprocal of 1/x where x≠0 is ______________. Solution: x (v) The product of two rational numbers is always a _____________. Solution: rational number (vi) The reciprocal of a positive rational number is ____________ Solution: positive
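The reciprocal questions above are easy to verify with Python's `fractions` module (a sketch for checking; the exercise itself is done by hand):

```python
from fractions import Fraction

# Question 8: -1 1/8 as an improper fraction is -9/8.
print(Fraction(8, 9) * Fraction(-9, 8))   # → -1, not 1, so 8/9 is not its inverse

# Question 9: 0.3 = 3/10 and 3 1/3 = 10/3 multiply to 1, so they are inverses.
print(Fraction(3, 10) * Fraction(10, 3))  # → 1

# Question 10 (ii): 1 and -1 are their own reciprocals.
for x in (Fraction(1), Fraction(-1)):
    assert 1 / x == x
```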
http://stackexchange.moderatenerd.com/
## A simple but nifty inequality Problem: Given $f:[0,1] \to \mathbb{R}$ is integrable over $[0,1]$, and that $$\int_0^1 dx \, f(x) = \int_0^1 dx \, x f(x) = 1$$ show that $$\int_0^1 dx \, f(x)^2 \ge 4$$ The way to the solution here is not trivial. I started by recognizing that, with integral inequalities, it never hurts to start with the fact that the integral of something squared is greater than or equal to zero. But…what? Well, given the integral constraints above, we can guess what we need to do. We begin with $$\int_0^1 dx \left [f(x) - (a x+b) \right ]^2 \ge 0$$ for any $a,b \in \mathbb{R}$. Expand the integrals and use the info provided to get that $$\int_0^1 dx \, f(x)^2 \ge 2 a+2 b - \frac13 a^2-a b-b^2$$ We then maximize the expression on the right. Let the expression on the RHS be $g(a,b)$. We may maximize by taking derivatives of $g(a,b)$ wrt $a$ and $b$ and setting them equal to zero. We then get a system of equations $$2 a + 3 b = 6$$ $$a+2 b=2$$ which implies that $a=6$ and $b=-2$. Note that $g_{aa} g_{bb} - g_{ab}^2 = \frac13 \gt 0$ and $g_{aa} = -\frac23 \lt 0$, so the critical point we found is indeed a maximum. It is then easy to verify that $g(6,-2)=4$. The assertion is proven. ### Another crazy integral, another clinic on using the Residue Theorem The problem posed here is to show that $$\int_0^{\infty} dx \, \frac{\coth^2{x}-1}{\displaystyle (\coth{x}-x)^2+\frac{\pi^2}{4}} = \frac{4}{5}$$ The OP actually seemed to know what he was doing but could not get the correct residue that would allow him to get the stated result. In showing a link to Wolfram Alpha, the OP revealed that he […] ### Another integral that Mathematica cannot do The integral to evaluate is $$\int_0^{\infty} dx \frac{\sin{\left (\pi x^2 \right )}}{\sinh^2{(\pi x)}} \cosh{(\pi x)}$$ Given the trig functions in the integrand, it makes sense to use the residue theorem based on a complex integral around a rectangular contour.
As has been my experience with these integrals, the integrand of the complex integral will not […] ### Computing an integral over an absolute value using Cauchy’s theorem The problem is to compute the following integral: $$\int_{-1}^1 dx \frac{|x-y|^{\alpha}}{(1-x^2)^{(1+\alpha)/2}}$$ I will show how to compute this integral using Cauchy’s theorem. It was remarked that it should not be possible to use Cauchy’s theorem, as Cauchy’s theorem only applies to analytic functions, and an absolute value certainly does not qualify. True. Nevertheless, for the […] ### Integral with two branch cuts II The problem here is to compute $$\int_0^\infty \log(1+tx)t^{-p-1}dt$$ where $p\in(0,1)$ and $x>0$. This is a great problem for contour integration. Just tricky enough to be really interesting. What makes it interesting is that there are two functions in the integrand needing their own separate branch cuts. One must keep in mind that each function only […] ### Integral with two branch cuts The problem is to evaluate the following integral: $$\int_{-1}^1 dx \frac{\log{(x+a)}}{(x+b) \sqrt{1-x^2}}$$ where $a \gt 1$ and $|b| \lt 1$. It should be obvious to those who spend time around these integrals that this integral does not converge as stated. However, we only have a simple pole at $x=-b$ so that we can compute […]
# Finding the Potential

1. Dec 29, 2006 ### Reshma

This one is from Liboff (problem 6.8). Given the wavefunction: $$\psi(x, t) = A \exp[i(ax - bt)]$$ What is the potential field V(x) in which the particle is moving? If the momentum of the particle is measured, what value is found (in terms of a & b)? If the energy is measured, what value is found?

My Work: $$\psi(x, t) = A \exp[i(ax - bt)]$$ I took the partial derivatives with respect to t and x: $$\frac{\partial \psi}{\partial t} = -(ib)\psi$$ $$\frac{\partial^2 \psi}{\partial x^2} = -a^2\psi$$ The time-dependent Schrodinger equation is: $$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V(x)\psi$$ Substituting the above values in this equation: $$\hbar b \psi = \frac{\hbar^2 a^2}{2m}\psi + V(x)\psi$$ Dividing throughout by $\psi$ and rearranging, I get the potential field as: $$V(x) = \hbar\left(b - \frac{\hbar a^2}{2m}\right)$$ Am I going right? Before I can proceed further... Last edited: Dec 29, 2006

2. Dec 29, 2006 ### neutrino

The calculation seems correct, though.

3. Dec 29, 2006 ### Reshma

Sorry, I realised my mistake but I got disconnected before I could correct it. I've corrected it now.

4. Dec 29, 2006 ### Reshma

Now I have to get an expression for the momentum. $$p = \hbar k$$ $$V(x) = {1\over 2}kx^2$$ Equating this to the other expression of V(x) would mean an $x^2$ term in the momentum expression. I am confused over this. $$k = {{2\hbar}\over x^2} \left(b - \frac{\hbar a^2}{2m}\right)$$

5. Dec 29, 2006 ### Gokul43201 Staff Emeritus

Didn't you just find out that V(x) is a constant potential? Look at your answer for V(x) above. Why are you assuming that V(x) should now be a harmonic potential? Given a wavefunction, how do you calculate the values of observables?

6. Dec 30, 2006 ### Reshma

Use the operator relation? $$\hat p = -i\hbar\frac{\partial}{\partial x}$$ $$\hat H = -\frac{\hbar^2}{2m} \frac{\partial^2 }{\partial x^2}$$

7.
Dec 30, 2006 ### marlon

Where does the above harmonic V(x) come from? Your original way of working is correct. I didn't check all the algebra though. marlon

8. Dec 30, 2006 ### Reshma

Sorry! I made an erroneous assumption. I think using the operator relation is the correct technique. Am I right?

9. Dec 30, 2006 ### marlon

yes marlon

10. Dec 30, 2006 ### Gokul43201 Staff Emeritus

The $\hat p$ relation is correct. The $\hat H$ relation isn't right: the RHS is simply the kinetic energy, not the total energy.

11. Dec 30, 2006 ### Worzo

For the first part, working out the potential, you do have to employ the TDSE. However, you should recognise that this is the wavefunction for a free particle, in the form $\psi = A e^{i(kx - \omega t)}$, where A is a normalisation factor, $p = \hbar k$, $E = \hbar \omega$. Hence in this example, $p = \hbar a$ and $E = \hbar b$. You may use the operator relations $\hat p = -i\hbar\, \partial/\partial x$ and $\hat H = i\hbar\, \partial/\partial t$ to show this. Notice also how $$V = \hbar\omega - \frac{(\hbar k)^2}{2m} = \hbar\omega - \frac{p^2}{2m}$$ So V = E - KE, i.e. Potential Energy = Total Energy - Kinetic Energy, which agrees with Total Energy = Kinetic Energy + Potential Energy. Last edited: Dec 30, 2006

12. Dec 31, 2006 ### Reshma

Thanks for the responses Gokul, Marlon and Worzo! Things are making better sense now.
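The algebra in this thread is easy to machine-check; a sympy sketch (the symbol names are ours) that recovers the constant potential and the eigenvalues $p = \hbar a$, $E = \hbar b$:

```python
import sympy as sp

x, t, A, a, b, hbar, m = sp.symbols('x t A a b hbar m', real=True)
psi = A * sp.exp(sp.I * (a*x - b*t))

# Solve the TDSE for V:  i*hbar dpsi/dt = -hbar^2/(2m) d2psi/dx2 + V*psi
V = (sp.I*hbar*sp.diff(psi, t) + hbar**2/(2*m)*sp.diff(psi, x, 2)) / psi
print(sp.simplify(V))      # hbar*b - hbar**2*a**2/(2*m): constant, no x dependence

# Momentum and energy eigenvalues (psi is an eigenfunction of both operators)
p = sp.simplify(-sp.I*hbar*sp.diff(psi, x) / psi)   # hbar*a
E = sp.simplify(sp.I*hbar*sp.diff(psi, t) / psi)    # hbar*b
print(p, E)
```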
## The Pauli Exclusion Principle

The Pauli exclusion principle is a quantum mechanical principle formulated by the Austrian physicist Wolfgang Pauli in 1925. In its simplest form, for electrons in a single atom, it states that no two electrons in an atom can have the same four quantum numbers; that is, if $n$, $l$ and $m_l$ are the same, then $m_s$ must be different, such that the electrons have opposite spins. More generally, no two identical fermions (particles with half-integer spin) may occupy the same quantum state simultaneously. A more rigorous statement of this principle is that, for two identical fermions, the total wave function is antisymmetric.

The Pauli exclusion principle is one of the most important principles in physics, mainly because the three types of particles from which the ordinary atom is made (electrons, protons, and neutrons) are all subject to it; consequently, all material particles exhibit space-occupying behavior. The Pauli exclusion principle underpins many of the characteristic properties of matter, from the large-scale stability of matter to the existence of the periodic table of the elements.

Fermions, particles with antisymmetric wave functions, obey the Pauli exclusion principle. Apart from the familiar electron, proton and neutron, these include neutrinos and quarks (from which protons and neutrons are made), as well as some atoms such as helium-3. All fermions possess "half-integer spin", meaning that they possess an intrinsic angular momentum whose value is $\hbar$ times a half-integer (1/2, 3/2, 5/2, etc.). In the theory of quantum mechanics, fermions are described by "antisymmetric states", such that if any two particles are interchanged, a phase change of $-1$ occurs in the wavefunction. For example, the two-particle state $$\psi(x_1, x_2) = \frac{1}{\sqrt{2}}\left[\psi_a(x_1)\psi_b(x_2) - \psi_b(x_1)\psi_a(x_2)\right]$$ is antisymmetric, since exchanging $x_1$ and $x_2$ changes its sign.

Particles with integer spin have a symmetric wave function and are called bosons; in contrast to fermions, they may share the same quantum states.
Examples of bosons include the photon, the Cooper pairs responsible for superconductivity, and the W and Z bosons.
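An antisymmetric two-fermion wavefunction can be written as a 2×2 Slater determinant, and exclusion then follows from a basic property of determinants: two identical rows give zero. A small numerical illustration (the orbitals are arbitrary choices of ours):

```python
import numpy as np

def slater(phi1, phi2, x1, x2):
    """Antisymmetric two-particle amplitude as a normalised 2x2 Slater determinant."""
    M = np.array([[phi1(x1), phi1(x2)],
                  [phi2(x1), phi2(x2)]])
    return np.linalg.det(M) / np.sqrt(2)

# Illustrative single-particle orbitals on [0, 1] (particle in a box)
phi_a = lambda x: np.sin(np.pi * x)
phi_b = lambda x: np.sin(2 * np.pi * x)

print(slater(phi_a, phi_b, 0.3, 0.7))   # nonzero in general
print(slater(phi_a, phi_b, 0.7, 0.3))   # equal magnitude, opposite sign (antisymmetry)
print(slater(phi_a, phi_a, 0.3, 0.7))   # 0: both fermions in the SAME state
```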
A&A, Volume 495, Number 1, February III 2009, L1 - L4 (Letters)
https://doi.org/10.1051/0004-6361:200811396
20 January 2009

LETTER TO THE EDITOR

## Phase-resolved spectroscopy of the accreting millisecond X-ray pulsar SAX J1808.4-3658 during the 2008 outburst

R. Cornelisse1 - P. D'Avanzo2 - T. Muñoz-Darias1 - S. Campana2 - J. Casares1 - P. A. Charles3,4 - D. Steeghs5,6 - G. Israel7 - L. Stella7

1 - Instituto de Astrofisica de Canarias, Calle via Lactea S/N, 3805 La Laguna, Spain
2 - INAF - Osservatorio Astronomico di Brera, via E. Bianchi 46, 23807 Merate, Italy
3 - South Africa Astronomical Observatory, PO Box 9, Observatory 7935, South Africa
4 - School of Physics and Astronomy, University of Southampton, Highfield, Southampton SO17 1BJ, UK
5 - Department of Physics, University of Warwick, Coventry, CV4 7AL, UK
6 - Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
7 - INAF-Osservatorio Astronomico di Roma, via Frascati 33, 00040 Monteporzio Catone (Rome), Italy

Received 21 November 2008 / Accepted 15 December 2008

Abstract

Aims. We obtained phase-resolved spectroscopy of the accreting millisecond X-ray pulsar SAX J1808.4-3658 during its outburst in 2008 to find a signature of the donor star, constrain its radial velocity semi-amplitude (K2), and derive estimates for the pulsar mass.

Methods. Using Doppler images of the Bowen region, we find a significant (8σ) compact spot at a position where the donor star is expected. If this is a signature of the donor star, we measure  km s-1 (1σ confidence), which represents a strict lower limit to K2. Also, the Doppler map of He II shows the characteristic signature of the accretion disc, and there is a hint of enhanced emission that may be a result of tidal distortions in the accretion disc that are expected in very low mass-ratio interacting binaries.

Results. The lower limit on K2 leads to a lower limit on the mass function of f(M1) ≥ 0.10 M⊙.
Applying the maximum K-correction gives 228 < K2 < 322 km s-1 and a mass ratio of 0.051 < q < 0.072.

Conclusions. Despite the limited S/N of the data, we were able to detect a signature of the donor star in SAX J1808.4-3658, although future observations during a new outburst are still needed to confirm this. If the derived value is correct, the largest uncertainty in determining the mass of the neutron star in SAX J1808.4-3658 using dynamical studies lies with the poorly known inclination.

Key words: accretion, accretion disks - X-rays: binaries - stars: individual: SAX J1808.4-3658

## 1 Introduction

Low-mass X-ray binaries (LMXBs) are systems in which a compact object (a neutron star or a black hole) is accreting matter, via Roche lobe overflow, from a low-mass (<1 M⊙) star. Some LMXBs exhibit sporadic outburst activity but for most of their time remain in a state of low-level activity (White et al. 1984); we refer to these systems as transients. In April 1998, a coherent 2.49 ms X-ray pulsation was discovered with the Rossi X-ray Timing Explorer (RXTE) satellite in the transient SAX J1808.4-3658 (Wijnands & van der Klis 1998; Chakrabarty & Morgan 1998). This was the first detection of an accreting millisecond X-ray pulsar (AMXP). Seven more of these systems have been discovered since then, all of which are transients with orbital periods in the range between 40 min and 4.3 h and spin periods from 1.7 to 5.4 ms. These findings directly confirmed evolutionary models that link the neutron stars of LMXBs to those of millisecond radio pulsars via the spinning up of the neutron star due to accretion during their LMXB phase (e.g. Wijnands 2006). To date, six episodes of activity have been detected from SAX J1808.4-3658 with a 2-3 year recurrence cycle. During the 1998 outburst, a detailed analysis of the coherent timing behaviour showed that the neutron star was in a tight binary system with a 2.01 h orbital period (Chakrabarty & Morgan 1998; Hartman et al. 2008).
The mass function derived from X-ray data ( ) and the requirement that the companion fills its Roche lobe led to the conclusion that it must be a rather low-mass star, possibly a brown dwarf (Chakrabarty & Morgan 1998; Bildsten & Chakrabarty 2001).

Figure 1: Average flux-calibrated spectrum of SAX J1808.4-3658. We have labelled the most important features that are present. The strong absorption features around 3940 and 3980 Å are due to diffuse interstellar bands.

We present here optical spectroscopy of SAX J1808.4-3658 obtained during the 2008 outburst. One of the key issues in dynamical studies is to measure the radial velocity of the companion star (K2) and use this value to constrain the optical mass function of the system. During quiescence, these goals can be achieved by tracing the absorption features originating in the photosphere of the companion star. However, the intrinsic faintness of the low-mass companion stars in AMXPs makes such an analysis very difficult. To overcome this problem, Steeghs & Casares (2002) have shown that during phases of high mass-accretion rates, Bowen-blend lines emitted by the irradiated face of the companion star can be used. A precise measurement of K2 represents the only way to determine the optical mass function of the system and ultimately constrain the neutron star mass.

## 2 Observations and data reduction

Optical spectroscopic observations of SAX J1808.4-3658 were carried out on 27 September 2008 with the ESO Very Large Telescope (VLT), using the 1200B grism on FORS1 with a slit width of 0.7 arcsec. We obtained 16 spectra of 360 s integration each, which corresponds to one orbital period. Image reduction was carried out following standard procedures: subtraction of an averaged bias frame and division by a normalised flat frame. The extraction of the spectra was performed with the ESO-MIDAS software package.
Wavelength and flux calibration of the spectra were achieved using a helium-argon lamp and by observing spectrophotometric standard stars. Our final reduced spectra have a wavelength range from 3600-5000 Å, a dispersion of 0.72 Å pixel-1, and a resolution of R=2200. Cross correlation of the spectral lines and the Doppler tomograms were obtained using the MOLLY and DOPPLER packages developed by Tom Marsh.

## 3 Data analysis

We present the average spectrum of SAX J1808.4-3658 in Fig. 1. The spectrum is dominated by strong Balmer lines in absorption, which are due to the optically thick disc in the high state and are a typical signature of a low-to-intermediate inclination system. The He II 4686 and the Bowen complex (at 4630-4660) are also clearly detected as emission features, and we have indicated the most important lines. For other bright LMXBs, narrow components in the Bowen emission have been reported that are thought to arise on the irradiated surface of the donor star (e.g. Steeghs & Casares 2002; Casares et al. 2006; Cornelisse et al. 2007; see also Cornelisse et al. 2008, for an overview), and here we attempt to find similar features in SAX J1808.4-3658. To calculate the orbital phase for each spectrum, we used the recent ephemeris by Hartman et al. (2008), but added 0.25 orbital phase to their phase zero so that it represents inferior conjunction of the secondary. Unfortunately, the individual spectra do not have high enough S/N to identify the narrow features, so we must resort to the technique of Doppler tomography (Marsh & Horne 1988). This technique uses all the spectra simultaneously to probe the structure of the accretion disc and identify compact emission features from specific locations in the binary system. However, for this technique to work, an estimate of the systemic velocity of the system is crucial. We do want to note that, by changing the phase zero of the accurate Hartman et al.
(2008) ephemeris, any potential donor star feature in the map must now lie along the positive y-axis in the Doppler tomogram. Thus finding a significant feature there will give strong support to an emission site located on the irradiated donor star. To find the systemic velocity, we started by applying the double-Gaussian technique of Schneider & Young (1980) to He II 4686. Since the wings of the emission line should trace the inner-accretion disc, it should give us an estimate of not only the already known radial velocity of the compact object (Chakrabarty & Morgan 1998), but also the systemic velocity. Using a Gaussian band pass with FWHM of 400 km s-1 and separations between 500 and 1800 km s-1 in steps of 50 km s-1, we find that, between a separation of 1400 and 1600 km s-1, our values for K1 are close to the one obtained by Chakrabarty & Morgan (1998), while at larger separations we are reaching the end of the emission line (see Fig. 2). Also in this range, our fitted orbital phase zero is close to 0.5, further suggesting that we are tracing the radial velocities of a region close to the neutron star, while the systemic velocity is stable around -50 km s-1. Despite the large errors on our fits, we do think this test already gives a good first estimate of the systemic velocity, around -50 km s-1.

Figure 2: The derived fit parameters of the radial velocity curve of the He II emission as traced by a double Gaussian with separation a. The free parameters are the orbital phase zero, compared to the ephemeris of Hartman et al. (2008), its radial velocity K1, and its systemic velocity. The dotted line in the top panel indicates the expected phase zero for the compact object.

Another test to obtain the systemic velocity is to create Doppler maps for He II 4686. Contrary to the Bowen region (which is very complex due to the presence of many different lines), He II is a single line that is usually a good tracer of the accretion disc (see e.g. Cornelisse et al. 2007; Casares et al. 2006).
We searched for systemic velocities between -160 and 0 km s-1 in steps of 20 km s-1, and all the maps show the expected accretion disc structure (see Fig. 3). We must unfortunately conclude that He II is not very sensitive to the systemic velocity and only suggests a range between -160 and 0 km s-1. We note that the maps are dominated by an emission feature in the top-left quarter of the map, which we interpret as the gas-stream impact point. We also note that further downstream there is enhanced emission, which might be due to matter streaming along the edge of the disc as was observed, for example, in EXO 0748-676 (Pearson et al. 2006). Finally we note that there is some enhanced emission in the top-right corner, which might be due to strong tidal interaction (see below). Cornelisse et al. (2008) have shown that the strongest narrow component in the Bowen emission is usually N III 4641. Our next step was therefore to create Doppler maps of the Bowen region for systemic velocities between 0 and -120 km s-1 (again in steps of 20 km s-1) including only this line. Only when the systemic velocity was between -80 and -20 km s-1 was a clear spot present and centred on the x = 0 axis. For this range we estimated the peak value and FWHM of the spot as a function of systemic velocity, and in Fig. 4 we show how the ratio of these values changes. Around  km s-1 (1σ confidence), the FWHM/peak value reaches a minimum, suggesting that here the spot is most compact, and this is the value we adopt for the systemic velocity. Furthermore, we also noted that, in the range from -35 to -65 km s-1, the velocity centroid of the spot was stable between Vy=240-260 km s-1. To decrease the noise present in the Bowen Doppler map, we included the most important other lines that are most often present in other LMXBs (N III 4634 and C III 4647/4650), and present this map in Fig. 3. To estimate the significance of the compact spot, we measured the standard deviation of the brightness of the pixels in the background.
We find that the central pixels of the compact spot are 18σ above the background, and even 8σ above the second most prominent feature in the map, namely the one at (-150, -200) in the Bowen map of Fig. 3 (for which it is unclear whether it is real or an artifact of the tomogram). This strongly suggests that the compact spot is real.

Figure 3: Doppler maps of He II 4686 (top) and the Bowen complex (bottom). Indicated on both maps is the Roche lobe, gas stream leaving the L1 point, and the Keplerian velocity along the stream for the assumptions q=0.059 and K2=275 km s-1.

Finally, to optimise our estimate for the systemic velocity, we created average spectra in the rest frame of the donor star, changing it in steps of 2 km s-1 within our error range. We find that N III 4640 is most pronounced for  km s-1 (1σ confidence), and adopt this as our final value; in Fig. 5 we show the final average spectrum in the rest-frame of the donor star. We do note that this value is lower than the 300 km s-1 obtained by Ramsay et al. (2008) from the same dataset; since they provide no errors, we cannot tell whether this difference is significant. However, as stated above, the location of the spot remains within the quoted error range for a range of assumed systemic velocities. To search for variability in the Bowen lines as a function of orbital phase, we created an average spectrum in the rest-frame of the donor using only spectra taken between orbital phases 0.25 and 0.75, and another corresponding to phases 0.75 and 1.25. The strength of N III 4640 did not change between these spectra, but it is unclear if this is real (suggesting that the inclination is low) or due to the limited S/N of the dataset.

## 4 Discussion

We have presented phase-resolved spectroscopy of the accreting millisecond X-ray pulsar SAX J1808.4-3658, and detected a compact feature in the Doppler map of the Bowen complex.
Although the S/N of the data is limited, this spot is the most stable and significant feature for a wide range of systemic velocities. Furthermore, thanks to the very accurate ephemeris (Hartman et al. 2008), the spot is at a position where the donor star is expected, and we conclude that it is real. Therefore, following detections of a donor star signature in other X-ray binaries (e.g. Steeghs & Casares 2002; Casares et al. 2006; Cornelisse et al. 2007; see Cornelisse et al. 2008, for an overview), we also identify this feature as being produced on the irradiated surface of the donor star. We note that in Fig. 5 most peaks in the Bowen region appear to line up with known N III and C III lines (e.g. Steeghs & Casares 2002) when using our derived values. Since the donor star surface must have a lower velocity than the centre of mass, the observed  km s-1 (1σ confidence) is a lower limit on the true K2 velocity. However, it still gives us a strict lower limit on the mass function of f(M) = M1 sin³i/(1 + q)² ≥ 0.10 M⊙, where q is the binary mass ratio M2/M1 and i the inclination of the system. We can further constrain the mass function by applying the so-called K-correction (Muñoz-Darias et al. 2005) and using the fact that K1=16.32 km s-1 (Chakrabarty & Morgan 1998). The largest K-correction possible is when we assume that there is no accretion disc and almost all radiation is produced in the L1 point. Applying the polynomials by Muñoz-Darias et al. (2005) for  km s-1 gives  km s-1, which should be independent of the inclination of the system. This gives conservative estimates of 228 < K2 < 322 km s-1 and 0.051 < q < 0.072, which we used to create the Roche lobe and gas streams on the Bowen Doppler map in Fig. 3. We note that our obtained mass ratio is rather extreme and would suggest that tidal interaction in SAX J1808.4-3658 is important enough to produce a precessing accretion disc and thereby a superhump (see e.g. O'Donoghue & Charles 1996).
This might be the explanation of the enhanced emission in the top-right quarter of the He II map (Fig. 3/top), which was for example also observed in LMC X-2 (Cornelisse et al. 2007). Such a superhump should be moving through the accretion disc on the precession time scale, and therefore changes position in the Doppler map over time. Unfortunately, since we only have one orbit of data we cannot test this, and future observations will be needed to see if the spot is long-lived and moves, in order to confirm the presence of the superhump.

Figure 4: The ratio of the FWHM over the peak value for the compact spot in the Bowen map (using only N III 4661) as a function of the systemic velocity.

Despite our wide range of K2 we will nonetheless review the implications for the mass of the neutron star. First of all we can improve our estimate of K2 by taking the results by Meyer & Meyer-Hofmeister (1982) into account. They analysed the effects of X-ray heating on the accretion disc and found that there is a minimal disc opening angle of 6 degrees, even in the absence of irradiation. Using this value to estimate the K-correction, the polynomials by Muñoz-Darias et al. (2005) suggest that K2 ≈ 310 km s-1, which is still comparable to the maximum correction possible. Deloye et al. (2008) used photometry of SAX J1808.4-3658 in quiescence to constrain the inclination between 36 and 67 degrees. Using these extreme values for their inclination and 228 < K2 < 322 km s-1 leads to a neutron star mass between 0.15 and 1.58 M⊙. Although these values are not very constraining for the neutron star mass, they favour a mass near the canonical value rather than a massive neutron star. We also note that our values are lower than the >1.8 M⊙ estimated by Deloye et al. (2008, for a 10% error in the distance estimate), but since this corresponds to a 1.7σ difference (only taking our errors into account), we conclude that this disagreement is marginal.
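As a cross-check on the quoted range, these numbers follow from the optical mass function f(M) = P_orb K2³/(2π G) = M1 sin³i/(1 + q)², with q = K1/K2. A short script using the values given in the text (the function name and physical constants are our own choices):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
P_ORB = 2.01 * 3600  # 2.01 h orbital period, in s
K1 = 16.32e3         # pulsar radial velocity amplitude, m/s (Chakrabarty & Morgan 1998)

def m1(k2, i_deg):
    """Neutron-star mass (in solar masses) from the optical mass function
    f(M) = P K2^3 / (2 pi G) = M1 sin^3(i) / (1 + q)^2, with q = K1/K2."""
    f = P_ORB * k2**3 / (2 * math.pi * G)
    q = K1 / k2
    return f * (1 + q)**2 / math.sin(math.radians(i_deg))**3 / M_SUN

print(m1(228e3, 67))   # lower extreme: smallest K2, highest inclination, ~0.15 Msun
print(m1(322e3, 36))   # upper extreme: largest K2, lowest inclination, ~1.6 Msun
```

Both extremes land close to the 0.15 and 1.58 M⊙ quoted above.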
## 5 Conclusions

The observations presented here provide evidence that a signature of the irradiated donor star in SAX J1808.4-3658 is detected. Clearly a similar experiment, but at higher S/N and spectral resolution, must be carried out again during a future outburst to obtain more than a single orbital period of data and resolve the narrow lines. This will not only allow us to unambiguously claim the presence of narrow components in the spectra of SAX J1808.4-3658, but also to measure the rotational broadening of the narrow components to further constrain K2 via the relation in Wade & Horne (2003). With these data we have shown a promising way toward constraining K2, and in combination with more quiescent data, we should be able to better constrain the inclination, thereby obtaining the mass of the neutron star in SAX J1808.4-3658.

Figure 5: Blow-up of the Bowen region for the average spectrum (top) and the average in the rest-frame of the donor star (bottom). The bottom spectrum clearly shows the narrow components. Indicated are the most important N III (4634/4640) and C III (4647) lines.

Acknowledgements. We cordially thank the director of the European Southern Observatory for granting Director's Discretionary Time (ID 281.D-5060(A)). We would like to thank the referee, Craig Heinke, for the careful and helpful comments which have improved this paper. R.C. acknowledges a Ramon y Cajal fellowship (RYC-2007-01046). R.C. acknowledges Katrien Uytterhoeven for useful discussions of different analysis techniques. P.D.A. and S.C. thank S. Covino for useful discussions. D.S. acknowledges an STFC Advanced Fellowship. J.C. acknowledges support by the Spanish MCYT GRANT AYA2007-66887. Partially funded by the Spanish MEC under the Consolider-Ingenio 2010 Program grant CSD2006-00070: First Science with the GTC.

## Footnotes

... outburst: Based on observations made with ESO Telescopes at the Paranal Observatory under programme ID 281.D-5060(A).
... ESO-MIDAS: http://www.eso.org/projects/esomidas/
... Marsh: http://deneb.astro.warwick.ac.uk/phsaap/software/
1. Isomorphisms

(a) Let S : U -> V and T : V -> W be linear maps. Show that Ker(TS) = S^-1(Ker T).

(b) Let S : V -> W be a surjective linear map and M a subspace of W. Show that V/S^-1(M) ≅ W/M. Hint: Apply part (a) to S : V -> W and Q : W -> W/M.

2. Originally Posted by mathbeginner

(a) Let S : U -> V and T : V -> W be linear maps. Show that Ker(TS) = S^-1(Ker T). (b) Let S : V -> W be a surjective linear map and M a subspace of W. Show that V/S^-1(M) ≅ W/M. Hint: Apply part (a) to S : V -> W and Q : W -> W/M.

Note that if $u\in \ker TS$ then $TSu=0$, or said differently $T(Su)=0$; thus $Su\in\ker T$ and so $u\in S^{-1}\left(\ker T\right)$. The other direction is just as easy: if $u\in S^{-1}(\ker T)$, then $Su\in\ker T$, so $TSu=T(Su)=0$ and $u\in\ker TS$.

For (b), just take the hint (assuming that $Q$ is the canonical map $Q:W\to W/M:w\mapsto w+M$). We know that $QS:V\to W/M$ is a surjective linear map, and by the FIT (first isomorphism theorem) $V/\left(\ker QS\right)\cong W/M$. But noticing that $\ker QS=S^{-1}(\ker Q)=S^{-1}(M)$, we see by our previous problem that $V/S^{-1}(M)=V/\left(\ker QS\right)\cong W/M$.
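Both parts can be sanity-checked on concrete matrices. A sympy sketch with matrices of our own choosing: part (a) as a kernel membership check, and part (b) as the dimension count dim V − dim ker(CS) = rank(CS) = rank(C) = dim W − dim M:

```python
import sympy as sp

# Hypothetical explicit maps: S : R^4 -> R^3 (surjective), T : R^3 -> R^2.
S = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 0, 1, 1]])
T = sp.Matrix([[1, 1, 0],
               [0, 1, 1]])

assert S.rank() == 3                # S is surjective onto R^3

# Part (a): every kernel vector u of TS satisfies S*u in ker T.
for u in (T * S).nullspace():
    assert T * (S * u) == sp.zeros(2, 1)

# Part (b): realise the subspace M of W = R^3 as M = ker C, so W/M ≅ im(C)
# and, by part (a), S^{-1}(M) = ker(C*S).
C = sp.Matrix([[1, 0, -1]])         # M = ker C is a 2-dimensional subspace

# dim V/S^{-1}(M) = dim V - dim ker(C*S) = rank(C*S)
# dim W/M         = dim W - dim M       = rank(C)
print((C * S).rank(), C.rank())     # the two ranks agree
```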
http://mathhelpforum.com/geometry/39147-how-show-u-1-u.html
# Math Help - how to show -u = (-1)u?

1. ## how to show -u = (-1)u?

How to show -u = (-1)u? In this question, u stands for a vector. I am not sure how to prove it, because it seems so obvious.
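A standard argument (my sketch, not part of the original thread) uses only the vector space axioms. First note that $0u = (0+0)u = 0u + 0u$, so adding $-(0u)$ to both sides gives $0u = \mathbf{0}$. Then

```latex
u + (-1)u \;=\; 1\cdot u + (-1)\cdot u \;=\; \bigl(1 + (-1)\bigr)u \;=\; 0u \;=\; \mathbf{0},
```

so $(-1)u$ is an additive inverse of $u$; since additive inverses in a vector space are unique, $(-1)u = -u$.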
http://mathoverflow.net/questions/82557/extended-forms-from-foliations
# Extended forms from foliations [closed]

Hi, I have the following question: Let $M$ be an n-dimensional manifold (or Riemannian, or anything that's nice ...) and let $\mathcal{F}$ be a foliation of $M$ by surfaces. Assume, furthermore, that on each leaf $\mathcal{L}$ there is a two-form $\alpha_{\mathcal{L}}$. Is it possible to construct a globally defined (smooth) 2-form $\alpha$ on $M$ out of the $\alpha_{\mathcal{L}}$? Or are there any good books or papers where this is discussed? Thanks in advance. janson

## closed as too localized by Ryan Budney, Deane Yang, Willie Wong, Alain Valette, Mark Sapir, Dec 5 '11 at 6:37

You appear to have misspelled your own name. As another person named "Jason", this has happened to me as well. –  Jason Starr Dec 3 '11 at 13:55

Oops, sorry ... actually it's janson :). –  gary Dec 3 '11 at 13:57

What do you mean by "$\alpha$ is defined on the foliation" -- which topology on the foliation are you using -- is there any sense in which $\alpha$ is continuous or differentiable? Perhaps give a concrete example using an irrational foliation of the torus to let us know what context you're using. Regardless, your question seems likely best-suited for math.stackexchange.com. –  Ryan Budney Dec 3 '11 at 20:35

When your manifold is Riemannian, you can do something obvious: compose $\alpha_{\mathcal{L}}$ with the orthogonal projection from $T_x M$ to $T_x \mathcal{L}$ when $x\in \mathcal{L}$. But you will need some transverse regularity for the $\alpha_{\mathcal{L}}$ to ensure that the resulting form is smooth.
This is not really a restriction: in any case, you can easily see that there are families of $\alpha_{\mathcal{L}}$ such that no 2-form $\alpha$ on $M$ can be smooth and restrict to $\alpha_{\mathcal{L}}$ along each leaf.

Edit: About the "transverse regularity": if you compute the form $\alpha$ I propose, you'll see that it need not be smooth (or even continuous!); you need to assume that $\alpha_{\mathcal{L}_x}$ (where $\mathcal{L}_x$ is the leaf through $x$) depends smoothly on $x$. This is stronger than asking that each $\alpha_{\mathcal{L}}$ is smooth along $\mathcal{L}$; the difference is mainly about transverse variation of the base point.

What do you mean by a transverse regularity for the $\alpha_{\mathcal{L}}$? –  gary Dec 3 '11 at 14:20

If I had this regularity, does this ensure the smoothness of $\alpha$? –  gary Dec 3 '11 at 14:23
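In symbols (my notation, not from the thread): writing $\mathcal{L}_x$ for the leaf through $x$ and $P_x : T_x M \to T_x \mathcal{L}_x$ for the orthogonal projection, the proposed extension is

```latex
\alpha_x(u, v) \;=\; \alpha_{\mathcal{L}_x}\bigl(P_x u,\; P_x v\bigr),
\qquad u, v \in T_x M ,
```

which restricts to $\alpha_{\mathcal{L}_x}$ on vectors tangent to the leaf; smoothness of $x \mapsto \alpha_x$ then requires that both $P_x$ and $\alpha_{\mathcal{L}_x}$ vary smoothly with the base point $x$, which is the transverse regularity the answer refers to.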
http://mathhelpforum.com/differential-geometry/175094-lipschitz-condition.html
# Math Help - Lipschitz condition

1. ## Lipschitz condition

The Lipschitz condition with constant $k$ on $[a,b]$: there is a constant $k$ such that for all $x,y$ in $[a,b]$, $|Tx - Ty| \leq k|x-y|$.

I have to show whether $f(t,x)=|x|^{1/2}$ satisfies the condition. I think it would, but I am having trouble proving this. Is there a way to find a minimum value $k$ such that all $Tx - Ty$ satisfy this condition?

2. Never mind, I think I figured it out. When $x$ and $y$ are close to 0, it is impossible to fix such a $k$.
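To spell out the self-answer (a sketch, not part of the original thread): for $0 < y < x$,

```latex
\frac{|\sqrt{x} - \sqrt{y}|}{|x - y|}
= \frac{\sqrt{x} - \sqrt{y}}{(\sqrt{x} - \sqrt{y})(\sqrt{x} + \sqrt{y})}
= \frac{1}{\sqrt{x} + \sqrt{y}}
\;\longrightarrow\; \infty
\quad \text{as } x, y \to 0^{+},
```

so no finite $k$ works on any interval containing 0. Away from the origin, on $[\delta, b]$ with $\delta > 0$, the mean value theorem gives the Lipschitz constant $k = 1/(2\sqrt{\delta})$.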
https://kevin-kotze.gitlab.io/tsm/ts-9-note/
There are many interesting studies applied to time series variables that consider the use of multivariate methods. These techniques seek to describe the information that is incorporated within the temporal and cross-sectional dependence of these variables. In most cases, the goal of the analysis is to provide a better understanding of the dynamic relationship between variables, and in certain cases these techniques may be used to improve forecasting accuracy. The models that have been developed within this area of research may also be used for policy analysis or for making specific inferences about potential relationships. Some of the early multivariate time series models fall within the class of linear distributed lag models. Early examples of these models include the polynomial and geometric distributed lag models. In addition, the autoregressive distributed lag (ARDL) model, which encompasses what has been termed the rational distributed lag model, continues to be used in a number of studies in the current literature. # 1 Polynomial distributed lag models A general specification for the polynomial distributed lag (DL) model is: $\begin{eqnarray}\nonumber y_{t} &=& \alpha+\beta_{0}x_{t}+\beta_{1}x_{t-1}+\cdots+\beta_{q}x_{t-q}+\varepsilon_{t} \\ &=& \alpha+\beta(L)x_{t}+\varepsilon_{t} \tag{1.1} \end{eqnarray}$ where $$\varepsilon_{t}$$ is the part of the data that we are unable to explain and is assumed to be serially uncorrelated, while $\begin{eqnarray} \nonumber \beta(L)=\beta_{0}+\beta_{1}L+\ldots+\beta_{{q}}L^{{q}}, \end{eqnarray}$ is a lag polynomial and $$L$$ is the lag operator, defined by $\begin{eqnarray} L^{{i}}x_{t}=x_{{t}-i},\ i=0,1,2,\ \ldots \tag{1.2} \end{eqnarray}$ The lag coefficients are often restricted to lie on a polynomial of order $$r\leq {q}$$. In the case where $$r={q}$$ the distributed lag model is unrestricted. 
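As an illustrative sketch (mine, not from the original notes), the finite distributed lag in equation (1.1) is just a one-sided convolution of the regressor with the lag weights, so generating data from it takes a few lines of Python; the values of `alpha`, `beta`, and the simulated `x` below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.0
beta = np.array([0.5, 0.3, 0.1])   # lag weights beta_0, beta_1, beta_2 (so q = 2)
x = rng.normal(size=200)           # simulated regressor

# One-sided convolution: y_t = alpha + sum_i beta_i * x_{t-i}.
# np.convolve (mode='full') gives sum_i beta[i] * x[t-i]; truncate to len(x),
# which implicitly treats pre-sample values of x as zero.
y = alpha + np.convolve(x, beta)[: len(x)]

# Check one term by hand at t = 10.
t = 10
y_manual = alpha + sum(beta[i] * x[t - i] for i in range(len(beta)))
assert np.isclose(y[t], y_manual)
```

The same convolution view explains why restricting the lag coefficients to a low-order polynomial (the "polynomial distributed lag" restriction) simply constrains the shape of the `beta` vector.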
A comprehensive early treatment of rational distributed lag models can be found in Dhrymes (1971). # 2 Rational distributed lag models A slight modification to the above model, which involves the use of additional parameters, provides the specification for what is termed the rational distributed lag model: $\begin{eqnarray} y_{t}= \alpha+\frac{\beta(L)}{\lambda(L)}x_{t}+\upsilon_{t} \tag{2.1} \end{eqnarray}$ where $$\beta(L)$$ is as defined above and $\begin{eqnarray}\nonumber \lambda(L)=\lambda_{0}+\lambda_{1}L+\ldots+\lambda_{p}L^{p} \end{eqnarray}$ # 3 Autoregressive-distributed lag models The above model may be extended to incorporate a number of autoregressive terms. This gives rise to the ARDL $$(p,q)$$ model, which may be specified as $\begin{eqnarray} y_{t}+\lambda_{1}y_{t-1}+\lambda_{2}y_{t-2}+\ldots+\lambda_{p} y_{t-p} = \alpha + \beta_0 {x}_{t}+ \beta_{1}x_{t-1}+\ldots+\beta_{q}x_{t-q}+\varepsilon_{t} \tag{3.1} \end{eqnarray}$ or $\begin{eqnarray} \lambda(L)y_{t}=\alpha+\beta(L)x_{{t}}+\varepsilon_{t} \tag{3.2} \end{eqnarray}$ Note that the rational distributed lag model defined by (2.1) can also be written in the form of an ARDL $$(p,\ {q})$$ model with moving average errors, namely $\begin{eqnarray} \nonumber \lambda(L)y_{t}=\alpha\lambda(1)+\beta(L)x_{t}+\lambda(L) \upsilon_{t}, \end{eqnarray}$ which represents equation (3.1) except that the error term is now given by $\begin{eqnarray} \nonumber \varepsilon_{t}=\lambda(L) \upsilon_{t} \end{eqnarray}$ which takes the form of a moving average model. When comparing these specifications, it is worth noting that recent developments in time series analysis focus on the application of the ARDL $$(p,q)$$ specification, since it is easier to work with; by selecting relatively large values for $$p$$ and $$q$$, one can provide a reasonable approximation to the rational distributed lag specification if required. Deterministic trends, or seasonal dummies, can be easily incorporated in the ARDL model. 
For example, we could have $\begin{eqnarray} y_{t}+\lambda_{1}y_{t-1}+\lambda_{2}y_{t-2}+\ldots+\lambda_{p}y_{t-p}=\zeta_0+\zeta_1 \cdot t + \beta_{0}x_{t}+\beta_{1} x_{t-1}+\ldots+\beta_{q}x_{t-q}+\varepsilon_{t} \tag{3.3} \end{eqnarray}$ The model that is provided by equation (3.1) may be extended to the case of $$k$$ regressors, each with a specific number of lags. Such a specification would take the form of an ARDL $$(p, q_{1}, q_{2}, \ldots, q_{k})$$ model, which may be expressed as $\begin{eqnarray} \nonumber \lambda(L,p) y_{t} = \sum_{j=1}^{k}\beta_{j}\left( L,{q}_{j} \right) x_{t,j}+\varepsilon_{t} \end{eqnarray}$ where $\begin{eqnarray} \nonumber \lambda(L,p)=1+\lambda_{1}L+\lambda_{2}L^{2}+\ldots+\lambda_{p}L^{p} \\ \beta_{j}(L, {q}_{j})=\beta_{j,0}+\beta_{j,1}L+\ldots+\beta_{j,q_{j}}L^{q_{j}},\;\;\; j=1,2, \ldots,\ k \end{eqnarray}$ See Hendry, Pagan, and Sargan (1984) for a comprehensive early review of ARDL models. # 4 Conclusion The framework for ARDL models makes use of a relatively straightforward extension of the univariate autoregressive model, where we may include current and lagged values of one or more explanatory variables. # 5 References Dhrymes, P. J. 1971. Distributed Lags: Problems of Estimation and Formulation. San Francisco: Holden-Day. Hendry, David F., Adrian R. Pagan, and J. Denis Sargan. 1984. “Dynamic Specification.” In Handbook of Econometrics, edited by Z. Griliches and M. D. Intriligator, 2:1023–1100. Elsevier.
http://mathhelpforum.com/trigonometry/65382-pythagorean-identities-write-expression-integer.html
Math Help - Pythagorean Identities to write expression as integer

1. Pythagorean Identities to write expression as integer

Question is: Use Pythagorean identities to write the expression as an integer. Attached is the expression. How would you solve this one? Attached Thumbnails

2. Hi

$\sec x$ is the hypotenuse of a right-angle triangle whose other sides are $\tan x$ and 1. Therefore $1 + \tan^2 x = \sec^2 x$, and then $\tan^2 x - \sec^2 x = -1$

List of trigonometric identities - Wikipedia, the free encyclopedia

3. Originally Posted by running-gag
$\sec x$ is the hypotenuse of a right-angle triangle whose other sides are $\tan x$ and 1. Therefore $1 + \tan^2 x = \sec^2 x$, and then $\tan^2 x - \sec^2 x = -1$
Thanks, but how would you solve the above, especially with the angle 4B?

4. Originally Posted by mwok
Thanks, but how would you solve the above, especially with the angle 4B?
Let $x=4B$...

5. You are asked to write the expression as an integer. Whatever $x$, $\tan^2 x - \sec^2 x = -1$. If you like, whatever $\beta$, $\tan^2(4\beta) - \sec^2(4\beta) = -1$

EDIT: beaten by Chris!

6. Originally Posted by running-gag
You are asked to write the expression as an integer. Whatever $x$, $\tan^2 x - \sec^2 x = -1$. If you like, whatever $\beta$, $\tan^2(4\beta) - \sec^2(4\beta) = -1$
I see, but how do I solve it? Like $\tan^4(4B)$... what does that come out to?

7. Originally Posted by mwok
I see, but how do I solve it? Like $\tan^4(4B)$... what does that come out to?
What do you want to solve? $\tan^2(4\beta) - \sec^2(4\beta)$ is not an equation. It is just an expression you are asked to write as an integer (which is $-1$).

8. Okay, is this correct (different equation but same question)? Attached Thumbnails

9. Not exactly. $\csc^2\theta-\cot^2\theta=1$

10. 
Originally Posted by running-gag
$\sec x$ is the hypotenuse of a right-angle triangle whose other sides are $\tan x$ and 1. Therefore $1 + \tan^2 x = \sec^2 x$, and then $\tan^2 x - \sec^2 x = -1$
List of trigonometric identities - Wikipedia, the free encyclopedia
Sorry to dredge up an old thread, but this problem is exactly the same problem I'm working on. I just don't get where $-1$ is coming from. If you substitute using the Pythagorean identity, the problem becomes $\tan^2 4B-1+\tan^2 4B$, no?

11. Because $\tan^2 4\beta - \sec^2 4\beta$ is $-1$: it is the same as saying $1+ \tan^2 4\beta= \sec^2 4 \beta$, so $\tan^2 4 \beta - \sec^2 4 \beta = -1$. Pretty much, the $4 \beta$ doesn't matter; it is just like $\theta$.
http://smlnj-gforge.cs.uchicago.edu/scm/viewvc.php/trunk/doc/whitepaper/image.tex?annotate=233&root=diderot&sortby=log&sortdir=down&pathrev=1306
# Annotation of /trunk/doc/whitepaper/image.tex

%!TEX root = paper.tex
%

\section{Image analysis}
\label{sec:image}

% provide rationale for why the language looks the way it does
% e.g. why do we fundamentally need derivatives
% talk about general structure of these algorithms

An overall goal in image-analysis methods is to correlate the physical, biological, and anatomical structure of the objects being scanned with the mathematical structure that is accessible in the digital image. Quantitative analysis of the scanned objects is computed from their measured image representations. We propose to simplify the work of developing, implementing, and applying image-analysis algorithms by including fundamental image operations directly in a domain-specific language. One elementary operation in a variety of analysis algorithms is convolution-based reconstruction of a continuous image domain from the discretely sampled values, which usefully mimics the underlying continuity of the measured space. A related operation is the point-wise measurement of derivatives and other locally-determined image attributes, which can support a richer mathematical vocabulary for detecting and delineating image features of interest. Matching the increasing flexibility of newer image acquisitions, the image values involved may not be grayscale scalar values, but multivariate vector or tensor values. As we illustrate below, with the abstraction of the continuous image and its derived attributes in place, it is straightforward to describe a variety of analysis algorithms in terms of potentially parallel threads of computation.
Based on our previous experience, our proposed initial work focuses on direct volume rendering, fiber tractography, and particle-based feature sampling.

\subsection{Continuous reconstruction and derived attributes}

At the scale of current imaging, the physical world is continuous, so it is appropriate to build analysis methods upon abstractions that represent the image as a continuous domain. The boundaries and structures in the scanned object are generally not aligned in any consistent way with the image sampling grid, so visualization and analysis methods tend to work in the continuous \emph{world space} associated with the scanned object, rather than the discrete {\em index space} associated with the raster ordering of the image samples. A rich literature on signal processing provides us with a machinery for reconstructing continuous domains from discrete data via convolution~\cite{GLK:unser00,GLK:Meijering2002}. Rather than inventing information where it is not known, established methods for convolution-based reconstruction start with the recognition that some amount of blurring is intrinsic to any image measurement process: each sampled value records an integration over space as governed by a \emph{point-spread function}~\cite{GLK:Gonzalez2002}. Subsequent convolution-based reconstruction can use the same point-spread function or some approximation of it to recover the continuous field from which the discrete samples were drawn.
%
%The definition of convolution-based reconstruction starts with
%representing the sequence of sampled image values $v[i]$,
%defined for some range of integral values $i$, in a
%one-dimensional continuous domain
%as $v(x) = \sum_i v[i] \delta(x-i)$, where $\delta(x)$
%is the Dirac delta function.
%
The convolution of one-dimensional discrete data $v[i]$ with the continuous reconstruction kernel $h(x)$ is defined as~\cite{GLK:Gonzalez2002}
\begin{equation*}
(v * h)(x) = \int v(\xi) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \int \sum_i v[i] \delta(\xi-i) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \sum_i v[i] h(x - i) ,
\end{equation*}
% -------------------
\begin{figure}[t]
\vspace*{-4em}
\begin{center}
\begin{tabular}{ccccc}
\includegraphics[width=0.28\hackwidth]{pictures/convo/data}
&
\raisebox{0.072\hackwidth}{\scalebox{1.2}{*}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/kern0-bspl}
&
\raisebox{0.08\hackwidth}{\scalebox{1.2}{=}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/data0-bspl} \\
$v[i]$
&&
$h(x)$
&&
$f(x) = (v * h)(x)$
\end{tabular}
\end{center}%
\caption{Reconstructing a continuous $f(x)$ by
convolving discrete data $v[i]$ with kernel $h(x)$}
\label{fig:convo-demo}
\end{figure}%
% -------------------
which is illustrated in Figure~\ref{fig:convo-demo}. In practice, the bounds of the summation will be limited by the \emph{support} of $h(x)$: the range of positions for which $h(x)$ is nonzero.
% The result is the same as adding together copies
%of $h(-x)$ located at the integers $i$, and scaled by $v[i]$.
Using {\em separable} convolution, a one-dimensional reconstruction kernel $h(x)$ generates a three-dimensional reconstruction kernel $h(x,y,z) = h(x)h(y)h(z)$. The three-dimensional separable convolution sums over three image indices of the sampled data $v[i,j,k]$:
\begin{eqnarray*}
(v * h)(x,y,z) &=& \sum_{i,j,k} v[i,j,k] h(x - i) h(y - j) h(z - k) \; .
\end{eqnarray*}%

Many traditional image-analysis methods rely on measurements of the local rates of change in the image, commonly quantified with the first derivative or \emph{gradient} and the second derivative or {\em Hessian}, which are vector and second-order tensor quantities, respectively~\cite{GLK:Marsden1996}. Our work focuses on three-dimensional image domains, in which the gradient is represented as a 3-vector, and the Hessian as a $3 \times 3$ symmetric matrix.
\begin{equation*}
\nabla f = \begin{bmatrix}
\frac{\partial f}{\partial x} \\
\frac{\partial f}{\partial y} \\
\frac{\partial f}{\partial z}
\end{bmatrix} ; \,
\mathbf{H} f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial x \partial z} \\
& \frac{\partial^2 f}{\partial y^2} & \frac{\partial^2 f}{\partial y \partial z} \\
(sym) & & \frac{\partial^2 f}{\partial z^2}
\end{bmatrix}
\end{equation*}%
The gradient plays a basic role in many edge detection algorithms, and the Hessian figures in ridge and valley detection. The constituent partial derivatives are measured by convolving with the derivatives of a reconstruction kernel, due to the linearity of differentiation and convolution:
\begin{eqnarray*}
\frac{\partial(v * h)(x,y,z)}{\partial x} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h(y - j) h(z - k) \\
\frac{\partial^2(v * h)(x,y,z)}{\partial x \partial y} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h'(y - j) h(z - k) .
\end{eqnarray*}%

%\begin{equation*}
%\left. \frac{d(v * h)(x)}{dx} \right|_{x = x_0}
%= \left. \frac{d \sum_i v[i] h(x - i)}{dx}\right|_{x = x_0}
%= \sum_i v[i] h'(x_0 - i)
%= (v * h')(x_0) \; .
%\end{equation*}%

Finally, while the description above has focused on scalar-valued (grayscale) images, much of modern imaging is producing more complex and multi-variate images, such as vectors from particle image velocimetry~\cite{GLK:Raffel2002} and diffusion tensors estimated from diffusion-weighted MRI~\cite{GLK:Basser1994a}. In these situations, the analysis algorithms may directly use the multi-variate data values, but it is common to access the structure in the images through some scalar quantity computed from the data, which effectively condenses or simplifies the information. In a vector field, one may need only the vector magnitude (the speed) for finding features of interest, but in a tensor field, various scalar-valued tensor invariants may play a role in the analysis, such as the trace $\mathrm{tr}(\mathbf{D})$ or the determinant $\det(\mathbf{D})$ of a tensor $\mathbf{D}$. Diffusion tensor analysis typically involves a particular measure of the directional dependence of diffusivity, termed Fractional Anisotropy (FA), which is defined as
\begin{equation*}
\mathrm{FA}(\mathbf{F})(\mathbf{x}) =
\sqrt{\frac{3 \, \mathrm{tr}(D D^T)}{2 \, \mathrm{tr}(\mathbf{D} \mathbf{D}^T)}} \in \mathbb{R}
\end{equation*}%
where $\mathbf{D} = \mathbf{F}(\mathbf{x})$ and $D = \mathbf{D} - \langle \mathbf{D} \rangle \mathbf{I}$. Note that both vector magnitude $|\mathbf{v}|$ and tensor fractional anisotropy $\mathrm{FA}(\mathbf{D})$ are non-linear (\ie{}, $|\mathbf{v} + \mathbf{u}| \neq |\mathbf{v}| + |\mathbf{u}|$ and $\mathrm{FA}(\mathbf{D} + \mathbf{E}) \neq \mathrm{FA}(\mathbf{D}) + \mathrm{FA}(\mathbf{E})$), which means that their spatial derivatives are non-trivial functions of the derivatives of the image values.
The chain rule must be applied to determine the exact formulae, and computing these correctly is one challenge in implementing image-analysis methods that operate on vector or tensor fields~\cite{GLK:Kindlmann2007}. Having language-level support for evaluating these functions and their spatial derivatives would simplify the implementation of analysis algorithms on multi-variate images.

\subsection{Applications}
% isosurface
% creases (ridges & valleys)
% edges
% an overview of the computational model of image analysis
% local vs. global

The initial application of our domain-specific language focuses on research areas in scientific visualization and image analysis, with which we have previous experience. In this section, we describe three of these types of algorithms. Volume rendering is included as a simple visualization method that showcases convolution-based measurements of values and derived attributes, a computational step that appears in the inner loop of the other two algorithms. Fiber tractography uses pointwise estimates of axonal direction to trace paths of possible neural connectivity in diffusion tensor fields. Particle-based methods use a combination of energy terms and feature constraints to compute uniform samplings of continuous image features.

\subsection{Direct volume rendering}
\label{sec:dvr}

Direct volume rendering is a standard method for visually indicating the over-all structure in a volume dataset with a rendered image~\cite{GLK:drebin88,GLK:levoy88}. It sidesteps explicitly computing geometric (polygonal) models of features in the data, in contrast to methods like Marching Cubes that create a triangular mesh of an isosurface~\cite{GLK:Lorensen1987}.
Volume rendering maps more
directly from the image samples to the rendered image, by assigning
colors and opacities to the data values via a user-specified {\em transfer function}.
%
\begin{algorithm}
\caption{Direct volume render continuous field $V$, which maps
from position $\mathbf{x}$ to data values $V(\mathbf{x})$,
with transfer function $F$, which maps from data values to color and opacity.}
\label{alg:volrend}
\begin{algorithmic}
\FOR{samples $(i,j)$ \textbf{in} rendered image $m$}
\STATE $\mathbf{x}_0 \leftarrow$ origin of ray through $m[i,j]$
\STATE $\mathbf{r} \leftarrow$ unit-length direction of ray through $m[i,j]$
\STATE $t \leftarrow 0$ \COMMENT{initialize ray parametrization}
\STATE $\mathbf{C} \leftarrow \mathbf{C}_0$ \COMMENT{initialize color $\mathbf{C}$}
\WHILE{$\mathbf{x}_0 + t \mathbf{r}$ within far clipping plane}
\STATE $\mathbf{v} \leftarrow V(\mathbf{x}_0 + t \mathbf{r})$ \COMMENT{reconstruct values and derived attributes at current ray position}
\STATE $\mathbf{C} \leftarrow \mathrm{composite}(\mathbf{C}, F(\mathbf{v}))$ \COMMENT{apply transfer function and accumulate color}
\STATE $t \leftarrow t + t_{step}$
\ENDWHILE
\STATE $m[i,j] \leftarrow \mathbf{C}$
\ENDFOR
\end{algorithmic}
\end{algorithm}%
%
Algorithm~\ref{alg:volrend} gives pseudocode for a basic volume
renderer. The inner loop depends on reconstructing (via
convolution) data values and
attributes in volume $V$ at position $\mathbf{x}_0 + t \mathbf{r}$. These are
then mapped through the transfer function $F$, which determines
the rendered appearance of the data values.

GPU-based methods for direct volume rendering are increasingly
common~\cite{GLK:Ikits2005}.
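The per-ray loop of Algorithm~\ref{alg:volrend} can be sketched in Python (our illustration only; the toy field and transfer function are made up, and a real renderer would reconstruct $V$ by convolution of image samples):

```python
import numpy as np

def composite(C, rgba):
    """Front-to-back 'over' compositing of an RGBA sample into C."""
    rgb, a = C[:3], C[3]
    return np.append(rgb + (1.0 - a) * rgba[3] * rgba[:3],
                     a + (1.0 - a) * rgba[3])

def render_ray(V, F, x0, r, t_far, t_step=0.1):
    """March one ray through field V, applying transfer function F."""
    C = np.zeros(4)               # accumulated RGBA, initially transparent
    t = 0.0
    while t < t_far:              # "within far clipping plane"
        v = V(x0 + t * r)         # sample field at current ray position
        C = composite(C, F(v))    # apply transfer function, accumulate
        t += t_step
    return C

# Toy example: scalar field = distance from origin; the transfer
# function makes values near 1 white and slightly opaque.
V = lambda x: np.linalg.norm(x)
F = lambda v: np.array([1.0, 1.0, 1.0,
                        0.1 if abs(v - 1.0) < 0.2 else 0.0])
C = render_ray(V, F, np.array([0.0, 0.0, -2.0]),
               np.array([0.0, 0.0, 1.0]), 4.0)
```

The ray crosses the spherical shell $|\mathbf{x}| = 1$ twice, so it accumulates some opacity but does not saturate.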
The implementations typically involve
hand-coded GPU routines designed around particular hardware
architectures, with an eye towards performance rather than
flexibility. Although the outcome of our research may not out-perform
hand-coded volume-rendering implementations, we plan to use volume
rendering as an informative and self-contained test-case for
expressing high-quality convolution-based reconstruction in a
domain-specific language.

\subsection{Fiber tractography}
\label{sec:fibers}

Diffusion tensor magnetic resonance imaging (DT-MRI or DTI) uses a
tensor model of diffusion-weighted MRI to describe the
directional structure of tissue \emph{in vivo}~\cite{GLK:Basser1994}.
%
The coherent organization of axons in the white matter of the brain
shapes the pattern of diffusion of water within
it~\cite{GLK:Pierpaoli1996}. The directional dependence of the
apparent diffusion coefficient is termed \emph{anisotropy}, and
quantitative measurements of anisotropy and its orientation have
enabled research on white matter organization in health and
disease~\cite{GLK:Bihan2003}.
%
Empirical evidence has shown that in much of the white matter, the
principal eigenvector $\mathbf{e}_1$ of the diffusion tensor $\mathbf{D}$
indicates the local direction of axon bundles~\cite{GLK:Beaulieu2002}.
%
Using integrals of the principal
eigenvector, we can compute paths through a diffusion tensor field that approximate axonal connectivity~\cite{GLK:conturo99,GLK:BasserMRM00}.
%
The path integration algorithm, termed \emph{tractography}, is shown
%
\begin{algorithm}
\caption{$\mathrm{tract}(\mathbf{p})$: integrate path everywhere tangent to
principal eigenvector of tensor field $\mathbf{D}(\mathbf{x})$ starting
at seed point $\mathbf{p}$}
\label{alg:tracto}
\begin{algorithmic}
\STATE $T_- \leftarrow [] ;\, T_+ \leftarrow []$ \COMMENT{initialize forward and backward halves of path}
\FOR{$d \in \{+1, -1\}$}
\STATE $\mathbf{x} \leftarrow \mathbf{p}$
\STATE $\mathbf{v} \leftarrow d \mathbf{e}_1(\mathbf{D}(\mathbf{p}))$ \COMMENT{use $d$ to determine starting direction from $\mathbf{p}$}
\REPEAT
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{v}$ \COMMENT{Euler integration step}
\STATE $\mathbf{u} \leftarrow \mathbf{e}_1(\mathbf{D}(\mathbf{x}))$
\IF {$\mathbf{u} \cdot \mathbf{v} < 0$}
\STATE $\mathbf{u} \leftarrow -\mathbf{u}$ \COMMENT{fix direction at new position to align with incoming direction}
\ENDIF
\STATE $\mathrm{append}(T_d, \mathbf{x})$ \COMMENT{save new position to path so far}
\STATE $\mathbf{v} \leftarrow \mathbf{u}$
\UNTIL {$\mathrm{CL}(\mathbf{D}(\mathbf{x})) < \mathrm{CL}_{\mathrm{min}}$} \COMMENT{anisotropy measure CL quantifies numerical certainty in $\mathbf{e}_1$}
\ENDFOR
\RETURN $\mathrm{concat}(\mathrm{reverse}(T_-), [\mathbf{p}], T_+)$
\end{algorithmic}
\end{algorithm}
%
in Algorithm~\ref{alg:tracto}. Euler integration is used here for
illustration, but Runge-Kutta integration is also common~\cite{GLK:NR}.
The termination condition for tractography is that the numerical
certainty of the principal eigenvector $\mathbf{e}_1$ is too low to
warrant further integration. The CL anisotropy measure is, like FA, a
scalar-valued tensor invariant~\cite{GLK:Westin2002}.
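The per-step computation of Algorithm~\ref{alg:tracto} can be sketched as follows (our Python illustration; the uniform "stick" tensor field is a toy, and the CL normalization by the eigenvalue sum is one common convention among several for Westin's measures):

```python
import numpy as np

def principal_evec(D):
    """Unit eigenvector of symmetric D for the largest eigenvalue."""
    w, v = np.linalg.eigh(D)   # eigh returns ascending eigenvalues
    return v[:, -1]

def cl(D):
    """Linear-anisotropy CL = (l1 - l2) / (l1 + l2 + l3), l1 >= l2 >= l3."""
    l3, l2, l1 = np.linalg.eigvalsh(D)
    return (l1 - l2) / (l1 + l2 + l3)

def tract_step(D_field, x, v, s=0.5):
    """One Euler step: move along incoming direction v, then take the
    principal eigenvector at the new position, sign-aligned with v."""
    x = x + s * v
    u = principal_evec(D_field(x))
    if np.dot(u, v) < 0:       # eigenvectors have arbitrary sign: flip
        u = -u
    return x, u

# Uniform tensor field whose principal direction is the x-axis.
D_field = lambda x: np.diag([1.0, 0.1, 0.1])
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(4):
    x, v = tract_step(D_field, x, v)
```

In this constant field, four steps of size 0.5 march the seed point straight to $(2, 0, 0)$; in real data, $\mathbf{D}(\mathbf{x})$ would be reconstructed by convolution at each step.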
The result of
tractography is a list of vertex locations along a polyline that
roughly indicates the course of axon bundles through the brain.

\subsection{Particle systems}
\label{sec:particles}

Some tasks can be naturally decomposed into computational units, which
can be termed \emph{particles}, that individually implement some
uniform behavior with respect to their environment so that their
iterated collective action represents the numerical solution to an
optimization or simulation. Such particle systems have a long history
in computer graphics for, among other applications, the simulation and
rendering of natural phenomena~\cite{GLK:Reeves1983}, the organic
generation of surface textures~\cite{GLK:Turk1991}, and the
interactive sculpting and manipulation of surfaces in
three dimensions~\cite{GLK:Szeliski1992,GLK:Witkin1994}. Other recent
work, described in more detail below, solves biomedical image-analysis
tasks with data-driven particle systems, indicating their promise for
a broader range of scientific applications. Particle systems
represent a challenging but rewarding target for our proposed
domain-specific language. While the rays and paths of volume rendering
(\secref{sec:dvr}) and fiber tractography
(\secref{sec:fibers}) do not interact with each other and thus
can easily be computed in parallel, particle systems do require
inter-particle interactions, which constrains the structure of
parallel implementations.

Particle systems recently developed for image analysis are designed to
compute a uniform sampling of the spatial extent of an image feature,
where the feature is defined as the set of locations satisfying some
equation involving locally measured image attributes.
For example,
the isosurface $S_v$ (or implicit surface) of a scalar field
$f(\mathbf{x})$ at isovalue $v$ is defined by $S_v = \{\mathbf{x} \,|\,
f(\mathbf{x}) = v\}$. Particle systems for isosurface features
have been studied in
visualization~\cite{GLK:Crossno1997,GLK:Meyer2005,GLK:Meyer2007a}, and
have been adapted in medical image analysis for group studies of shape
variation of pre-segmented objects~\cite{GLK:Cates2006,GLK:Cates2007}.
In this approach, the particle system is a combination of two components.
First, particles are constrained to the isosurface by application
of Algorithm~\ref{alg:iso-newton},
%
\begin{algorithm}
\caption{$\mathrm{constr}_v(\mathbf{x})$: move position $\mathbf{x}$ onto isosurface
$S_{v_0} = \{\mathbf{x} \,|\, f(\mathbf{x}) = v_0\}$}
\label{alg:iso-newton}
\begin{algorithmic}
\REPEAT
\STATE $(v,\mathbf{g}) \leftarrow (f(\mathbf{x}) - v_0,\nabla f(\mathbf{x}))$
\STATE $\mathbf{s} \leftarrow v \frac{\mathbf{g}}{\mathbf{g}\cdot\mathbf{g}}$ \COMMENT{Newton-Raphson step}
\STATE $\mathbf{x} \leftarrow \mathbf{x} - \mathbf{s}$
\UNTIL{$|\mathbf{s}| < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
a numerical search computed independently for each particle. Second, particles
are endowed with a radially decreasing potential energy function $\phi(r)$, so that
the minimum energy configuration is a spatially uniform sampling of the surface
(according to Algorithm~\ref{alg:particles} below).

Our recent work has extended this methodology to a larger set of image
features, ridges and valleys (both surfaces and lines), in order to detect and sample image
features in unsegmented data, which is more computationally demanding
because of the feature definition~\cite{GLK:Kindlmann2009}.
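The Newton-Raphson constraint of Algorithm~\ref{alg:iso-newton} transcribes almost line-for-line into Python (our sketch; here $f$ and $\nabla f$ are supplied analytically, whereas in the proposed language they would be reconstructed by convolution):

```python
import numpy as np

def constr_v(f, grad_f, x, v0, eps=1e-10, max_iter=100):
    """Newton-Raphson projection of x onto the isosurface f(x) = v0."""
    for _ in range(max_iter):
        v = f(x) - v0
        g = grad_f(x)
        s = v * g / np.dot(g, g)        # Newton-Raphson step
        x = x - s
        if np.linalg.norm(s) < eps:     # step small enough: converged
            break
    return x

# Example: project a point onto the unit sphere f(x) = |x|^2 = 1.
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2.0 * x
x_on = constr_v(f, grad_f, np.array([2.0, 1.0, 2.0]), 1.0)
```

Each iteration moves $\mathbf{x}$ along the gradient by the first-order estimate of its distance to the level set, so the point converges onto the sphere.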
Ridges
and valleys of the scalar field $f(\mathbf{x})$ are defined in terms of
the gradient $\nabla f$ and the eigenvectors $\{\mathbf{e}_1,
\mathbf{e}_2, \mathbf{e}_3\}$ of the Hessian $\mathbf{H} f$ with
corresponding eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$
($\mathbf{H} \mathbf{e}_i = \lambda_i
\mathbf{e}_i$)~\cite{GLK:Eberly1996}. A ridge surface $S_r$, for
example, is defined in terms of the gradient and the minor eigenvector
$\mathbf{e}_3$ of the Hessian: $S_r = \{\mathbf{x} \,|\, \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$.
Intuitively, $S_r$ is the set of locations where motion
along $\mathbf{e}_3$ cannot increase the height further.
%
\begin{algorithm}
\caption{$\mathrm{constr}_r(\mathbf{x})$: move position $\mathbf{x}$ onto ridge surface
$S_r = \{\mathbf{x} \,|\, \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$}
\label{alg:sridge}
\begin{algorithmic}
\REPEAT
\STATE $(\mathbf{g},\mathbf{H}) \leftarrow (\nabla f(\mathbf{x}),
\mathbf{H} f(\mathbf{x}))$
% \STATE $(\{\lambda_1,\lambda_2,\lambda_3\},
% \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}) = \mathrm{eigensolve}(\mathbf{H})$
\STATE $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} = \mathrm{eigenvectors}(\mathbf{H})$
\STATE $\mathbf{P} = \mathbf{e}_3 \otimes \mathbf{e}_3$ \COMMENT{tangent to allowed motion}
\STATE $\mathbf{n} = \mathbf{P} \mathbf{g} / |\mathbf{P} \mathbf{g}|$ \COMMENT{unit-length direction of constrained gradient ascent}
\STATE $(f',f'') = (\mathbf{g} \cdot \mathbf{n}, \mathbf{n} \cdot \mathbf{H} \mathbf{n})$ \COMMENT{directional derivatives of $f$ along $\mathbf{n}$}
\IF {$f'' \geq 0$}
\STATE $s \leftarrow s_0 |\mathbf{P} \mathbf{g}|$ \COMMENT{regular gradient ascent if concave-up}
\ELSE
\STATE $s \leftarrow -f'/f''$ \COMMENT{aim for top of parabola (2nd-order fit) if concave-down}
\ENDIF
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{n}$
\UNTIL{$s < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
The numerical search in Algorithm~\ref{alg:sridge} to locate the ridge surface
is more complex than Algorithm~\ref{alg:iso-newton} for isosurfaces, but it
plays the same role in the overall particle system.

After the definition of the feature constraint, the other major piece
in the system is the per-particle energy function $\phi(r)$, which
completely determines the particle system energy.
$\phi(r)$ is maximal at $r=0$ and decreases to $\phi(r)=0$ beyond an $r_{\mathrm{max}}$,
determining the radius of influence of a single particle. Particles move
in the system only to minimize the energy due to their neighbors.
%
%\begin{algorithm}
%\caption{$\mathrm{push}(\mathbf{p}, \mathcal{R})$: Sum energy $e$ and force $\mathbf{f}$
%at $\mathbf{p}$ from other particles in set $\mathcal{R}$ due to potential function $\phi(r)$}
%\label{alg:push}
%\begin{algorithmic}
%\STATE $(e,\mathbf{f}) \leftarrow (0,\mathbf{0})$
%\FOR{ $\mathbf{r}$ \textbf{in} $\mathcal{R}$ }
% \STATE $\mathbf{d} \leftarrow \mathbf{p} - \mathbf{r}$
% \STATE $e \leftarrow e + \phi(|\mathbf{d}|)$
% \STATE $\mathbf{f} \leftarrow \mathbf{f} - \phi'(|\mathbf{d}|)\frac{\mathbf{d}}{|\mathbf{d}|}$ \COMMENT{force is negative spatial gradient of energy}
%\ENDFOR
%\RETURN $(e,\mathbf{f})$
%\end{algorithmic}
%\end{algorithm}
%
\begin{algorithm}
\caption{Subject to constraint feature $\mathrm{constr}$,
move particle set $\mathcal{P}$ to energy minima, given
potential function $\phi(r)$, $\phi(r) = 0$ for $r > r_{\mathrm{max}}$}
\label{alg:particles}
\begin{algorithmic}
\REPEAT
\STATE $E \leftarrow 0$ \COMMENT{total system energy sum}
\FOR{ $\mathbf{p}_i \in \mathcal{P}$}
\STATE $\mathcal{R} \leftarrow \mathrm{neighb}(\mathbf{p}_i)$ \COMMENT{$\mathcal{R} \subset \mathcal{P}$ is all particles within $r_{\mathrm{max}}$ of $\mathbf{p}_i$}
\STATE $e \leftarrow \sum_{\mathbf{r} \in \mathcal{R}}{\phi(|\mathbf{p}_i - \mathbf{r}|)}$ \COMMENT{sum energy contributions from neighbors}
\STATE $\mathbf{f} \leftarrow - \sum_{\mathbf{r} \in \mathcal{R}}{\phi'(|\mathbf{p}_i - \mathbf{r}|)\frac{\mathbf{p}_i - \mathbf{r}}{|\mathbf{p}_i - \mathbf{r}|}}$ \COMMENT{force is negative spatial gradient of energy}
\REPEAT
\STATE $\mathbf{p}' \leftarrow \mathbf{p}_i + s_i \mathbf{f}$ \COMMENT{particle $\mathbf{p}_i$ maintains a gradient descent step size $s_i$}
\STATE $\mathbf{p}' \leftarrow \mathrm{constr}(\mathbf{p}')$ \COMMENT{re-apply constraint for every test step}
\STATE $e' \leftarrow \sum_{\mathbf{r} \in \mathrm{neighb}(\mathbf{p}')}{\phi(|\mathbf{p}' - \mathbf{r}|)}$
\IF {$e' > e$}
\STATE $s_i \leftarrow s_i / 5$ \COMMENT{decrease step size if energy increased}
\ENDIF
\UNTIL {$e' < e$}
\STATE $\mathbf{p}_i \leftarrow \mathbf{p}'$ \COMMENT{save new position at lower energy}
\STATE $E \leftarrow E + e'$
\ENDFOR
\UNTIL {$\Delta E < \epsilon$}
\end{algorithmic}
\end{algorithm}
%
Each iteration of Algorithm~\ref{alg:particles} moves particles individually
(and asynchronously) to lower their own energy, ensuring
that the total system energy is consistently reduced, until the system
reaches a local minimum in its energy configuration. Experience has shown that
the local minima found by this gradient descent approach are sufficiently
optimal for the intended applications of the system.
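The per-particle energy and force sums in Algorithm~\ref{alg:particles} can be sketched as follows (our illustration; the quadratic falloff $\phi$ below is a placeholder for the radially decreasing potentials used in the cited work):

```python
import numpy as np

R_MAX = 1.0  # radius of influence of a single particle

def phi(r):
    """Radially decreasing potential: maximal at r=0, zero beyond R_MAX."""
    return (1.0 - r / R_MAX) ** 2 if r < R_MAX else 0.0

def dphi(r):
    """Derivative of phi with respect to r."""
    return -2.0 * (1.0 - r / R_MAX) / R_MAX if r < R_MAX else 0.0

def energy_and_force(p, neighbors):
    """Sum energy and force at particle p from neighbor positions."""
    e, f = 0.0, np.zeros_like(p)
    for r in neighbors:
        d = p - r
        dist = np.linalg.norm(d)
        e += phi(dist)
        f -= dphi(dist) * d / dist   # force = -(spatial gradient of energy)
    return e, f

# A single neighbor at distance 0.5 pushes the particle directly away.
e, f = energy_and_force(np.array([0.0, 0.0]), [np.array([0.5, 0.0])])
```

With the neighbor on the $+x$ side, the repulsive force points in $-x$; in the full system this force drives the gradient-descent step, followed by re-application of the feature constraint.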
%
\begin{figure}[ht]
%\def\subfigcapskip{-1pt} % between subfig and its own caption
%\def\subfigtopskip{4pt} % between subfig caption and next subfig
%\def\subfigbottomskip{-4pt} % also between subfig caption and next subfig
%\def\subfigcapmargin{0pt} % margin on subfig caption
\centering{
\subfigure[Lung airway segmentation]{
\includegraphics[width=0.264\columnwidth]{pictures/lung-ccpick-iso}
\label{fig:lung}
}
\hspace{0.02\columnwidth}
\subfigure[Brain white matter skeleton]{
\includegraphics[width=0.6292\columnwidth]{pictures/wb}
\label{fig:brain}
}
}
\caption{Example results from particle systems in lung CT (a) and
brain DTI (b).}
\label{fig:lung-brain}
\end{figure}
%
Figure~\ref{fig:lung-brain} shows some results with our current particle system~\cite{GLK:Kindlmann2009}.
The valley lines of a CT scan, sampled and displayed in Fig.~\ref{fig:lung},
successfully capture the branching airways of the lung (as compared to a previous
segmentation method shown with a white semi-transparent surface). The ridge surfaces
of FA in a brain DTI scan, displayed in Fig.~\ref{fig:brain}, capture the
major pieces of white matter anatomy, and can serve as a structural skeleton for white matter analysis.
Further improvement of the method is currently hindered by the complexity of its C implementation (in Teem~\cite{teem})
and by the long execution times on a single CPU.

\subsection{Future analysis tasks}
\label{sec:future}

Image analysis remains an active research area, and many experimental
algorithms diverge from those shown above. Many research methods,
however, conform to or build upon the basic structure of the
algorithms shown above, and our proposed language will facilitate
their implementation and experimental application.
For example,
the fiber tractography in \secref{sec:fibers} computed
paths through a pre-computed field of diffusion tensors, which
were estimated per-sample from a larger collection of diffusion-weighted
magnetic resonance images (DW-MRI).
State-of-the-art tractography algorithms interpolate the DW-MRI values,
and perform the per-point model-fitting and fiber orientation computations inside the
path integration~\cite{GLK:Schultz2008,GLK:Qazi2009}. However, these advanced methods
can also be expressed as a convolution-based reconstruction followed by
computation of non-linear derived attributes, so the basic structure of the
tractography Algorithm~\ref{alg:tracto} is preserved. Being able to
rapidly create new tractography methods with different models for diffusion
and fiber orientation would accelerate research in determining brain
structure from diffusion imaging.

Many other types of image features can conceivably be recovered with
the types of particle systems described in Section~\ref{sec:particles}
and Algorithm~\ref{alg:particles}. Essentially, implementing the
routine for constraining particles to stay within a new feature type
(analogous to Algorithms~\ref{alg:iso-newton} and~\ref{alg:sridge} for
isosurfaces and ridges) is the hard part of creating a particle system
to detect and sample the feature. One classical type of image feature
is Canny edges, which are essentially ridge surfaces of gradient
magnitude~\cite{GLK:Canny1986}. Although these are well-understood
for two-dimensional images, they have been applied less frequently in
three-dimensional medical images, and very rarely to derived
attributes of multi-variate data.
One could imagine, for example, a
particle system that computes a brain white matter segmentation by finding Canny
edges of the fractional anisotropy (FA) of a diffusion tensor field.
The variety of attributes (and their spatial derivatives) available in
modern multi-variate medical images suggests that there are many
analysis algorithms that have yet to be explored because of the
daunting implementation complexity and computation cost.
https://courses.lumenlearning.com/ivytech-collegealgebra/chapter/solutions-10/
## Solutions to Try Its 1. a. 35 b. 330 2. a. ${x}^{5}-5{x}^{4}y+10{x}^{3}{y}^{2}-10{x}^{2}{y}^{3}+5x{y}^{4}-{y}^{5}$ b. $8{x}^{3}+60{x}^{2}y+150x{y}^{2}+125{y}^{3}$ 3. $-10,206{x}^{4}{y}^{5}$ ## Solutions to Odd-Numbered Exercises 1. A binomial coefficient is an alternative way of denoting the combination $C\left(n,r\right)$. It is defined as $\left(\begin{array}{c}n\\ r\end{array}\right)=C\left(n,r\right)=\frac{n!}{r!\left(n-r\right)!}$. 3. The Binomial Theorem is defined as ${\left(x+y\right)}^{n}=\sum _{k=0}^{n}\left(\begin{array}{c}n\\ k\end{array}\right){x}^{n-k}{y}^{k}$ and can be used to expand any binomial. 5. 15 7. 35 9. 10 11. 12,376 13. $64{a}^{3}-48{a}^{2}b+12a{b}^{2}-{b}^{3}$ 15. $27{a}^{3}+54{a}^{2}b+36a{b}^{2}+8{b}^{3}$ 17. $1024{x}^{5}+2560{x}^{4}y+2560{x}^{3}{y}^{2}+1280{x}^{2}{y}^{3}+320x{y}^{4}+32{y}^{5}$ 19. $1024{x}^{5}-3840{x}^{4}y+5760{x}^{3}{y}^{2}-4320{x}^{2}{y}^{3}+1620x{y}^{4}-243{y}^{5}$ 21. $\frac{1}{{x}^{4}}+\frac{8}{{x}^{3}y}+\frac{24}{{x}^{2}{y}^{2}}+\frac{32}{x{y}^{3}}+\frac{16}{{y}^{4}}$ 23. ${a}^{17}+17{a}^{16}b+136{a}^{15}{b}^{2}$ 25. ${a}^{15}-30{a}^{14}b+420{a}^{13}{b}^{2}$ 27. $3,486,784,401{a}^{20}+23,245,229,340{a}^{19}b+73,609,892,910{a}^{18}{b}^{2}$ 29. ${x}^{24}-8{x}^{21}\sqrt{y}+28{x}^{18}y$ 31. $-720{x}^{2}{y}^{3}$ 33. $220,812,466,875,000{y}^{7}$ 35. $35{x}^{3}{y}^{4}$ 37. $1,082,565{a}^{3}{b}^{16}$ 39. $\frac{1152{y}^{2}}{{x}^{7}}$ 41. ${f}_{2}\left(x\right)={x}^{4}+12{x}^{3}$ 43. ${f}_{4}\left(x\right)={x}^{4}+12{x}^{3}+54{x}^{2}+108x$ 45. $590,625{x}^{5}{y}^{2}$ 47. 
$\left(\begin{array}{c}n\\ k - 1\end{array}\right)+\left(\begin{array}{l}n\\ k\end{array}\right)=\left(\begin{array}{c}n+1\\ k\end{array}\right)$; Proof: $\begin{array}{}\\ \\ \\ \left(\begin{array}{c}n\\ k - 1\end{array}\right)+\left(\begin{array}{l}n\\ k\end{array}\right)\\ =\frac{n!}{k!\left(n-k\right)!}+\frac{n!}{\left(k - 1\right)!\left(n-\left(k - 1\right)\right)!}\\ =\frac{n!}{k!\left(n-k\right)!}+\frac{n!}{\left(k - 1\right)!\left(n-k+1\right)!}\\ =\frac{\left(n-k+1\right)n!}{\left(n-k+1\right)k!\left(n-k\right)!}+\frac{kn!}{k\left(k - 1\right)!\left(n-k+1\right)!}\\ =\frac{\left(n-k+1\right)n!+kn!}{k!\left(n-k+1\right)!}\\ =\frac{\left(n+1\right)n!}{k!\left(\left(n+1\right)-k\right)!}\\ =\frac{\left(n+1\right)!}{k!\left(\left(n+1\right)-k\right)!}\\ =\left(\begin{array}{c}n+1\\ k\end{array}\right)\end{array}$ 49. The expression ${\left({x}^{3}+2{y}^{2}-z\right)}^{5}$ cannot be expanded using the Binomial Theorem because it cannot be rewritten as a binomial.
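Pascal's rule proved in exercise 47, together with the binomial coefficients above, can be spot-checked with Python's standard library (for instance, the coefficients listed for exercise 17 are those of $(4x+2y)^5$):

```python
from math import comb

# Pascal's rule: C(n, k-1) + C(n, k) == C(n+1, k)
for n in range(1, 10):
    for k in range(1, n + 1):
        assert comb(n, k - 1) + comb(n, k) == comb(n + 1, k)

# Binomial-theorem expansion of (4x + 2y)^5: coefficient of
# x^(5-k) y^k is C(5, k) * 4^(5-k) * 2^k.
coeffs = [comb(5, k) * 4 ** (5 - k) * 2 ** k for k in range(6)]
# coeffs == [1024, 2560, 2560, 1280, 320, 32]
```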
https://brilliant.org/discussions/thread/54/
# Deleted

This discussion has been deleted! 2 years, 8 months ago

$$\frac {2-2} {2-2}$$ is indeterminate, not 1... - 2 years, 8 months ago

Division by zero is not allowed. If so, then 999999999 = 1 because 999999999(0) = 1(0) - 2 years, 7 months ago

well this was what exactly i said to my friend when he asked me but he didn't agree and said that "what if i didn't keep the value of 2 - 2." :p but i posted it on brilliant for fun. - 2 years, 7 months ago

From the fifth to the sixth line, you divided by zero xD. - 2 years, 7 months ago

plz try my set # https://brilliant.org/profile/ashwin-1w8nuy/sets/favorites/?ref_id=627509 & reshare if you like it - 2 years, 7 months ago

It's a little big, but yeah i just checked it and i'll start solving your set's problems by this evening. Thanks, dude. - 2 years, 7 months ago

Thanks for trying it. ya i know that its long but i didn't broke it into more sets because i was tired to classify them as it is truly long. but i think that it a nice set as most of the questions don't need higher maths and rules (which i don't know :p) P.S sorry for replying late but i couldn't reply as my 10th class exams are going on. - 2 years, 7 months ago

Don't worry, dude xD. I am solving your set really slowly because I just started college this monday, and I'm moving things, getting my driver's license and everything else, so i'm really busy too. I'll finish as soon as I can. - 2 years, 7 months ago

well same here . I am stuck in my exams.. - 2 years, 7 months ago

2-2/2-2 is not defined - 2 years, 1 month ago

Come on - 2 years, 3 months ago

:p haha well i know its a stupid post. - 2 years, 3 months ago
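The commenters' point — that $\frac{0}{0}$ is undefined, not 1 — is also what programming languages enforce; a quick Python check:

```python
def indeterminate():
    """Attempt the fallacious step (2-2)/(2-2); report failure as None."""
    try:
        return (2 - 2) / (2 - 2)
    except ZeroDivisionError:
        return None   # division by zero is an error, not 1

result = indeterminate()
```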
http://math.stackexchange.com/questions/178732/abstract-linear-algebra-trilinear-forms
# Abstract linear algebra, trilinear forms

Let $V$ be a 3-dimensional vector space over $\mathbb{R}$. Let $\Lambda^3V^*$ denote the space of alternating trilinear forms on $V$.

Note: An alternating trilinear form on $V$ is a map $\omega: V \times V \times V \to \mathbb{R}$ such that $\omega(v_1,v_2,v_3)=-\omega(v_2,v_1,v_3)=-\omega(v_3,v_2,v_1)$ and $\omega(av_1+v_4,v_2,v_3)=a\omega(v_1,v_2,v_3)+\omega(v_4,v_2,v_3)$.

(a) Show $\dim(\Lambda^3V^*)=1$.

(b) Given $L \in End(V)$ and $0 \neq \omega \in \Lambda^3V^*$ show that the function $\omega_L: V \times V \times V \to \mathbb{R}$ defined by $\omega_L(v_1,v_2,v_3):=\omega(Lv_1,Lv_2,Lv_3), \forall v_1,v_2,v_3 \in V$ is of the form $|L|\omega$ where $|L|$ is a constant which is dependent on $L$ but not on $\omega$.

(c) Show that in fact $|L|$ is the determinant of any matrix representing $L$ in terms of a basis of $V$. Is it true that $|LM|=|L||M|$ for any two endomorphisms $L,M$ of $V$?

(d) Is there a way to define the trace of $L$ in a manifestly basis-independent way, similar to the definition of the determinant in part (c)?

The following parts remain unsolved:

(e) We see the trace is a linear invariant of $L$, the determinant is a trilinear invariant of $L$. Now try to construct a bilinear invariant of $L$ along the same lines.

(f) If $L$ is diagonalisable, describe the last invariant in terms of eigenvalues of $L$.

(g) Given two linearly independent vectors $e_1$ and $e_2$ in $\mathbb{R}^3$, show that any non-zero $\omega \in \Lambda^3V^{*}$ may be used to explicitly construct a vector $e_3$ orthogonal to $e_1$ and $e_2$.

Note: For PART (e) I can't think of another invariant of $L$; based on PART (f) this third invariant is related to the eigenvalues of $L$ somehow, but what invariant besides the trace is closely related to the eigenvalues of $L$?

- For (a), you are on a good way. Now use linearity, and think about what is $\omega(v,v,w)$.
For (b), show that $\omega_L$ is also a skew-symmetric trilinear form, and then think about what it means that the dimension is 1. –  celtschk Aug 4 '12 at 10:32

(a)-(d): yes. I recommend any introductory textbook on multilinear algebra. –  vesszabo Aug 4 '12 at 11:09

Hint: For (d): define $\omega_L(v_1,v_2,v_3)=Lv_1\wedge v_2\wedge v_3+v_1\wedge Lv_2\wedge v_3+v_1\wedge v_2\wedge Lv_3$. For (g): define $T_\omega(v_1,v_2)=\omega(v_1,v_2,\cdot)$, that is, $T_\omega(v_1,v_2)v=\omega(v_1,v_2,v)$. –  Yuki Aug 4 '12 at 13:15

@user31899: oops... sorry!!! for (d): $\omega_L(v_1,v_2,v_3)=\omega(Lv_1,v_2,v_3)+\omega(v_1,Lv_2,v_3)+\omega(v_1,v_2,Lv_3)$... no idea why I wrote $v_1\wedge v_2\wedge v_3$... sorry!!! =p Try to write $\omega_L(v_1,v_2,v_3)=k\omega(v_1,v_2,v_3)$, for some $k$. –  Yuki Aug 4 '12 at 22:56

@user31899: $T_\omega:(\mathbb{R}^3)^2\to(\mathbb{R}^3)^\ast$ (sorry, typo) is for PART (g); how can you canonically "move" from $(\mathbb{R}^3)^\ast$ to $\mathbb{R}^3$? For (e): you can do anything analogous to what you did in (b) ($\omega_L(v_1,v_2,v_3):=\omega(Lv_1,Lv_2,Lv_3)$), or in (c) ($\omega_L(v_1,v_2,v_3):=\omega(Lv_1,v_2,v_3)+\omega(v_1,Lv_2,v_3)+\omega(v_1,v_2,Lv_3)$) (try to "put $L$" in "two entries", and sum all the possibilities). For (f), solve (e) and note that the "determinant" is the product of the eigenvalues and the "trace" is their sum. –  Yuki Aug 5 '12 at 1:32

About part (e), you know both the trace and the determinant are coefficients of the characteristic polynomial of $L$, which in this case has degree $3 = \dim V$. There is one coefficient you have not used yet, and perhaps this is the bilinear invariant you are looking for.
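The characteristic-polynomial hint can be checked numerically (our sketch): for a $3\times 3$ matrix, $\chi_L(t) = t^3 - \mathrm{tr}(L)\,t^2 + c_2(L)\,t - \det(L)$, where the middle coefficient $c_2(L) = \tfrac{1}{2}(\mathrm{tr}(L)^2 - \mathrm{tr}(L^2))$ is the bilinear invariant (the sum of pairwise products of eigenvalues).

```python
import numpy as np

def c2(L):
    """Second invariant: sum of pairwise products of eigenvalues."""
    return 0.5 * (np.trace(L) ** 2 - np.trace(L @ L))

L = np.diag([2.0, 3.0, 5.0])
# Eigenvalues 2, 3, 5: trace = 10, c2 = 2*3 + 2*5 + 3*5 = 31, det = 30,
# so the characteristic polynomial is t^3 - 10 t^2 + 31 t - 30.
second = c2(L)
char_coeffs = np.poly(L)   # coefficients of char. polynomial, leading first
```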
Let $L$ also denote the matrix of the operator $L$. Based on the explicit formulas in PART (a), one has $\omega_L(v_1,v_2,v_3)=\omega(Lv_1,Lv_2,Lv_3)$ $=$ $\det(Lv_1,Lv_2,Lv_3)\cdot k_\omega$ $=$ $\det(L)\cdot\det(v_1,v_2,v_3)\cdot k_\omega$ $=$ $\det(L)\cdot\omega(v_1,v_2,v_3)$. Therefore $|L|=\det(L)$, which is clearly independent of $\omega$.
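The conclusion $|L| = \det(L)$ can be sanity-checked numerically (our sketch) by taking $\omega = \det$ itself as the alternating trilinear form:

```python
import numpy as np

rng = np.random.default_rng(0)

def omega(v1, v2, v3):
    """An alternating trilinear form: det of the matrix with columns v_i."""
    return np.linalg.det(np.column_stack([v1, v2, v3]))

L = rng.standard_normal((3, 3))
v1, v2, v3 = rng.standard_normal((3, 3))   # three random vectors

lhs = omega(L @ v1, L @ v2, L @ v3)        # omega_L(v1, v2, v3)
rhs = np.linalg.det(L) * omega(v1, v2, v3) # |L| * omega(v1, v2, v3)
# lhs and rhs agree to floating-point precision
```

The multiplicativity $|LM| = |L||M|$ asked about in (c) then follows from $\omega_{LM} = (\omega_M)_L$.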
https://physics.stackexchange.com/questions/334243/a-more-general-potential-energy
# A More General Potential Energy

It occurred to me this morning that the notions of work and of spatial potential energy can be generalized to a more abstract form. In particular, work can be defined in terms of an abstract force acting along some curve $S$ such that $$W = \int\limits_S \mathbf{F} \cdot d\mathbf{S}$$ Note: the intrinsic dimension of the curve itself is independent of the dimension in which it is embedded. For example, the image of a map $F: R^n \to R^m$ is still an $n$-dimensional object even though it is embedded in dimension $m$. From the first few pages of my statistical mechanics book, some examples of abstract forces are $dW = H\cdot d\vec{\mathcal{M}}$ for the work done in increasing the magnetisation of a magnet by $d\vec{\mathcal{M}}$. Another example is the abstract work done by pushing $dN$ oxygen molecules into a bucket of water, $dW = \mu\, dN$, where $\mu$ is the chemical potential. In all these cases, a new potential energy concept is induced by the definition of an abstract work, and it must obey conservation of energy, because in their simplest form these new forms of work boil down to $\mathbf{F} \cdot d\mathbf{x}$. Now, if you imagine that the positions of all objects at all moments in time form a single mathematical object, we have $4$ dimensions to work with. I am therefore curious whether a potential energy with respect to time is ever defined. If so, where is it relevant? EDIT: we are dealing with conservative forces

• see wiki article on 4-force – By Symmetry May 19 '17 at 17:12
• Awesome! that's precisely what I was looking for – theideasmith May 19 '17 at 17:27

Sure. Think of a time-dependent potential $V(t)$.
You don't even really need to think relativistically to see it: imagine an electric field that increases according to $\textbf{E}(t) = \vec{E_0}t$. Then your potential energy would depend not only on $\vec{r}$ but also on $t$: $\Phi = \Phi(x, y, z, t)$ and $V(x, y, z, t) = q\Phi(x, y, z, t)$. If a charge were taped to the position $(0, 0, 1)$, it would take work to keep it there from $t = 0$ to $t = 5$. How things behave when they're not taped to a position is described beautifully by the Euler-Lagrange equations, a consequence of the minimization of the action, $\delta S = 0$.

• Just to clarify, because my intro physics class hasn't covered potential energy wrt time: when you have a potential energy $U(x, t)$, would the power exerted by whatever is changing the electric field be $P = \frac{\partial}{\partial t} U(x,t)$? – theideasmith May 23 '17 at 12:55

Path-independent differentials are exact. Those which depend on the path are inexact, which often occurs in thermodynamics. For the kinetic energy case above, the differential is exact. For thermodynamics it is path dependent, as with $dW~=~\vec H\cdot d\vec M$, which describes path-dependent hysteresis. In this case the area enclosed by the curve is related to entropy. For a conservative force any loop integral is zero.
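A concrete version of the comment's question (the numbers and the sign convention are assumed purely for illustration): for a charge held at fixed $z$ in $E(t) = E_0 t$, take $U(t) = qE_0tz$ and compare a finite-difference $dU/dt$ against the constant analytic power:

```python
# Hedged numeric illustration: a charge q taped at z = 1 in a field that
# ramps as E(t) = E0 * t has potential energy U(t) = q * E0 * t * z
# (up to sign convention). The power delivered by whatever ramps the
# field is P = dU/dt, checked here by a central finite difference.

q, E0, z = 1.6e-19, 2.0, 1.0     # assumed values: one electron charge, 2 V/m/s ramp

def U(t):
    return q * E0 * t * z        # linear in t, so dU/dt is constant

dt = 1e-6
t = 3.0
P_numeric = (U(t + dt) - U(t - dt)) / (2 * dt)   # central difference
P_exact = q * E0 * z                             # analytic d/dt of U
print(P_numeric, P_exact)
```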
http://tex.stackexchange.com/questions/145808/how-can-i-wipe-texs-memory-of-a-user-defined-control-sequence/145820
How can I wipe TeX's memory of a user-defined control sequence?

I know I can redefine a control sequence to be empty or \relax, as in my example below. However, doing so doesn't prevent one from using that control sequence, because it is still defined. To add some safeguards to my code, I'd like to wipe TeX's memory of a user-defined control sequence, so that, if I subsequently use that control sequence, I get the usual Undefined control sequence error. How can I do that?

    \documentclass{article}
    \begin{document}
    \newcommand\foo{foo}
    \renewcommand\foo{}
    \foo % no error
    \end{document}

- \let\foo\undefined – Bordaigorl Nov 19 '13 at 14:52
- So simple, yet I didn't know about it. Thanks. Let's wait to see if my question is a duplicate. Otherwise, you should post an answer. – Jubobs Nov 19 '13 at 14:54
- related to (but not a duplicate of) tex.stackexchange.com/questions/12339/… – Jubobs Nov 19 '13 at 14:55

The trick is to bind the macro to something which is not defined. Traditionally, the macro \undefined is left undefined (!), so you can simply do

    \let\foo\undefined

If, for some (probably wrong) reason, you or some package defines \undefined, then this trick won't work. In any case, if you later want to manually check whether \foo is defined, you can use LaTeX's \@ifundefined test, which will detect whether \foo is undefined OR \relax.

Yes, there's no \undef primitive. There's \undef in etoolbox, but it relies on the same method: it does \let\foo\etb@undefined, trusting that \etb@undefined remains globally undefined. – egreg Nov 19 '13 at 18:49
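A minimal file combining the pieces above (the `\makeatletter` pair is needed because `\@ifundefined` has an `@` in its name):

```latex
\documentclass{article}
\begin{document}
\newcommand\foo{foo}
\foo          % typesets "foo"
\let\foo\undefined

\makeatletter
% Typesets "yes": note \@ifundefined also treats a macro equal to \relax
% as undefined, which is the caveat mentioned in the answer.
\@ifundefined{foo}{yes}{no}
\makeatother

% \foo  % uncommenting this line now gives "! Undefined control sequence."
\end{document}
```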
https://schoolnotes.xyz/courses/physics-hsc/physics-module-5/
## Projectile Motion

A projectile is an object that cannot propel itself and moves freely under the force of gravity.

• There is no force other than gravitational force acting on the object
• The net force on the projectile is the gravitational force (mg), its weight
• Objects experience:
  • A vertically downward force of 9.81 N per kilogram of mass
  • A vertically downward acceleration of 9.81 m/s²

The projectile moves in a parabolic arc as the vertical gravitational force causes it to deviate from its otherwise linear path. According to Newton's 2nd law (F = ma), the projectile's horizontal component has constant velocity (no acceleration) as there is no external force acting on it (the horizontal component is independent of the vertical component); it remains moving due to inertia (Newton's 1st Law). The force is always downwards, but the direction of motion varies. Whether an object is projected or just dropped, it will still fall at the same rate.

Assumptions of our model:

• Constant vertical acceleration due to gravity
• Curvature of the Earth ignored
• No air resistance

### Analysing Projectile Motion

Projectile motion is 2D motion. To analyse:

1. Break up the motion into vertical and horizontal components
2. The two components are completely independent of each other, thus they can be treated separately
3. There is no acceleration in the horizontal direction as gravity only acts vertically
4. Find the vertical and horizontal components
   • Y components: $u\sinθ$, $v\sinθ$, a = 9.81 m/s², vertical height
   • X components: $u\cosθ$, $v\cosθ$, a = 0, range
5.
Find time, as time is the same for both components.

| X direction | Y direction |
| --- | --- |
| vₓ = uₓ | vy² = uy² + 2ay∆sy |
| ∆sₓ = uₓt | ∆sy = uyt + ½ayt² |
|  | vy = uy + ayt |

where u is the initial velocity and v the final velocity (m/s), a is the vertical acceleration of 9.81 m/s², s is displacement (m) and t is time (s); the subscripts x and y denote the horizontal and vertical components.

### Special cases

• If the projectile launches and lands at the same height, the initial and final angles have the same magnitude – one is an angle of elevation and the other an angle of depression.
• If there is no angle of launch, velocity in the y direction is initially zero while velocity in the x direction is the same as the initial velocity.

### Types of Questions

#### No initial vertical velocity component/half-flight/no angle given

• Initial velocity = horizontal velocity = constant
• Vertical velocity is initially zero but increases as the object falls (vy = uy + at)

#### No initial vertical displacement given

• Initial horizontal velocity = ucosθ = constant
• Initial vertical velocity = usinθ
• sy = 0
• Vertical velocity at max. height = 0
• Launch angle = landing angle
• Angle of projection for max. range = 45°

#### Projectile fired at an angle above ground

• Initial horizontal velocity = ucosθ = constant
• Initial vertical velocity = usinθ
• Vertical velocity at max. height = 0
• Angle of projection for max. range = 45°

#### Half flight projectile motion

• Downward acceleration = gravity
• Vertical velocity increases constantly
• Horizontal velocity is constant

#### Find the…

• time of flight (half-flight and full-flight)
  • Influenced by:
    • Initial vertical velocity ONLY IF LAUNCH ANGLE > 0 – increase in initial velocity $\rightarrow$ longer time of flight (direct)
    • Launch angle – higher launch angle $\rightarrow$ longer time of flight (direct)
    • Vertical displacement – higher vertical displacement $\rightarrow$ longer time of flight (direct)
• initial velocity (velocity immediately after launch)
• launch angle
• maximum height
  • Launch angle – higher launch angle $\rightarrow$ higher maximum height (direct)
  • Initial vertical velocity, i.e. the faster it is projected upwards the higher it goes, regardless of the horizontal motion
  • Vertical displacement – higher vertical displacement $\rightarrow$ higher maximum height (direct)
• final velocity (velocity just before it hits the ground)
• velocity, height, distance at a point in time
• launch height
• horizontal range of the projectile
  • Influenced by:
    • Launch angle – increase in angle $\rightarrow$ increase in range UP TO 45° THEN increase in angle $\rightarrow$ decrease in range
    • Initial velocity – increase in initial velocity $\rightarrow$ increase in range (direct)
    • Vertical displacement – increase in vertical displacement $\rightarrow$ increase in range (direct)

The trajectory of a projectile is determined by its initial velocity (direct relationships) and the forces that act on it, including air resistance. The components uy = usinθ and uₓ = ucosθ both depend on the initial velocity and the angle between the projectile and the ground.

The maximum height of a projectile is the highest point of the arc, given by launch height + u²sin²θ/(2g). The flight time for a projectile with the same launch and landing height is given by 2usinθ/g, and the final velocity will be equal in magnitude to the initial velocity.

## Charged Particles Projected into Electric Fields

Similarities to a projectile in a gravitational field:

• If dropped (released from rest), it accelerates uniformly parallel to the field lines
• If projected, it follows a parabolic trajectory in a uniform field

Differences:

• Different masses accelerate at different rates, unlike in a gravitational field where all masses accelerate at the same rate
• Positive and negative charges accelerate in opposite directions
• The exact curve of a projectile in an electric field depends on velocity, charge and mass, unlike in a gravitational field where all masses launched identically follow the same curve

### Factors that influence Projectile Motion

• Range
  • Higher initial velocity $\rightarrow$ larger range (direct)
  • Larger charge $\rightarrow$ smaller range (inverse)
  • Larger mass $\rightarrow$ larger range/less deflection (direct)
  • Higher voltage $\rightarrow$ smaller range (inverse)
• Acceleration
  • Larger charge $\rightarrow$ greater acceleration (direct)
  • Smaller mass $\rightarrow$ greater acceleration (inverse)
  • Higher voltage $\rightarrow$ greater acceleration (direct)

Like projectiles, the y and x velocity components can be found and the equations of motion can be used. $E = V/d$, where V (capital) is the voltage and d is the distance between the plates.

## Circular Motion

An object travelling a circular pathway at constant tangential speed is undergoing uniform circular motion (UCM).

### Characteristics of UCM

• Motion along a circular path of radius r
• Tangential speed v is constant, hence period T is constant
• Angular velocity (rate of change of angle) 𝜔 is constant
• Linear velocity is not constant as its direction is continually changing
• Linear velocity is perpendicular to the net (centripetal) force
• Centripetal acceleration (ac) is directed towards the centre
• Net force (Fc) is directed towards the centre of the circle

The net force on an object moving in UCM is directed towards the centre, is called the centripetal force, and is given by:

Fc = mv²/r where Fc is centripetal force (N), m is the mass of the object in circular motion (kg), v is the velocity/orbital velocity of the object (m/s), r is the radius of the circle (m)

### Factors that influence circular motion

• Centripetal force
  • Orbital velocity – there is a direct square relationship: an increase in orbital velocity $\rightarrow$ a squared increase in the centripetal force acting on/required to keep the object in UCM
  • Mass – mass increase $\rightarrow$ centripetal force needed increases (direct)
  • Radius – increase in radius $\rightarrow$ decrease in centripetal force needed/acting on it (inverse)

### Formulae

• T = 1/f where T is the period (time to complete one revolution) (s), f is frequency (Hz)
• 𝜔 = ∆θ/∆t where 𝜔 is angular velocity (rad/s), θ is angle (rad), t is time (s)
• 𝜔 = 2𝜋f where f is frequency (Hz)
• 𝜔 = ∆s/(r∆t) where s is arc length (m)
• 𝜔 = v/r where v is linear velocity (m/s), r is radius (m)
• v = 2𝜋rf = r𝜔 = 2πr/T
• v = √(GM/r) where v is orbital velocity (m/s), G is the universal gravitational constant (6.67 × 10⁻¹¹ N m² kg⁻²), M is the mass of the central object (kg), r is the orbital radius (m) NOT ON FORMULA SHEET
• ac = r𝜔² = v²/r (derived from F = ma and Fc = mv²/r)
• Fc = mv²/r = mr𝜔² = mac where m is mass (kg)
• For satellites: Fc = GMm/r² where m is the mass of the orbiting object (kg)

## Conditions for an Object to Execute Circular Motion

As the object is accelerating, there must be a net force acting on it, given by Newton's 2nd Law (F = ma), directed towards the centre.
Fnet = Fc = mv²/r

Examples:

| Situation | Force providing Fc | Condition |
| --- | --- | --- |
| Car driving around a horizontal circular bend | Friction between the tires and the road, f | f = mv²/r |
| Ball swinging on a string | Tension in the string towards the axis of rotation, T | T = mv²/r |
| Satellite orbiting a planet | Gravitational force between satellite and planet, Fg | Fg = mv²/r |

If the centripetal force and the details of the motion do not satisfy Fc = mv²/r, then the object **will not follow UCM** and instead follows a different path:

| Situation | Condition | Motion |
| --- | --- | --- |
| Car driving around a horizontal circular bend | Road is slippery, there is not enough friction: f < mv²/r | Car slides out of the turn and travels on a linear pathway (tangent to the circle) due to inertia |
| Ball swinging on a string | String is cut so there is no tension: T = 0 < mv²/r | Ball flies off and travels on a linear pathway (tangent to the circle) due to inertia |
| Satellite orbiting a planet | Satellite's motion does not satisfy the conditions of circular motion: Fg ≠ mv²/r | Satellite follows an elliptical orbit |

The centripetal force is always perpendicular to the direction of velocity and has constant magnitude despite changing direction. The popular term 'centrifugal force' actually refers to the force the spinning object exerts on the wall or rope due to its inertia (Newton's 1st Law), which feels like an outward force. Acceleration and force occur in the same direction. Centripetal acceleration occurs because the direction is constantly changing, NOT because of the magnitude of the velocity, which does not change.

RPM $\rightarrow$ rad/s: divide by 60 to convert minutes to seconds, and multiply by 2π to convert revolutions to radians. km/h $\rightarrow$ m/s: divide by 3.6.

### Analyse forces on an object executing uniform circular motion

1. Identify the forces acting on the object. Draw a free body diagram.
2. Determine the direction of acceleration. For UCM, acceleration is always towards the centre of the circle.
3.
Decompose the forces into components parallel and perpendicular to the acceleration.
4. Apply Newton's 2nd Law to the components and find the unknown:
   1. In the direction of acceleration there is a net force: Fnet = mv²/r
   2. Perpendicular to the acceleration, a = 0 so Fnet = 0
   3. Often find ac first

Uniform circular motion can be applied to different systems. Three common systems are: a car moving around a corner on a flat road, a car on a banked road, and a mass on a string (conical pendulum).

#### Cars moving around horizontal circular bends

Friction supplies the centripetal force that makes a car go around a bend on a flat surface, thus f = Fc = mv²/r. The normal force and friction force act on all 4 tires. When a car corners on a flat road we can model the bend as part of a circle.

##### Forces acting

• Lateral frictional force between the road surface and the tires (f)
• Normal force (N)
• Weight force (w)

The car does not accelerate vertically, thus N = w. The lateral frictional force can be influenced by turning the steering wheel, which causes the front wheels of the car to angle.

##### Possible Situations

| Fc | Motion |
| --- | --- |
| F = mv²/r | UCM |
| F > mv²/r | Car moves towards the centre of the circle. Radius decreases so the car turns more sharply. |
| F < mv²/r | Car moves away from the centre of the circle. Radius increases so the car turns more gently. |

#### Why are we learning this?

The ability of the car (driver) to turn a corner depends on how sharp the corner is (r) and how fast the car is travelling (which can be controlled). By slowing down, we can turn a sharper corner (smaller radius). Due to the direct square relationship, doubling the velocity quadruples the Fc needed to keep the body in UCM. Thus, the faster the car is going, the greater the frictional force required. There is a maximum frictional force that the road can exert on the tires, thus slowing down is vital. The RMS indicates suggested speeds for corners. The friction provided reduces when there is water, oil or worn tires.
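A numeric sketch of the flat-bend limit (the values of μ and r are made up, not from the notes): setting the maximum friction μmg equal to the required centripetal force mv²/r gives v_max = √(μgr), with the car's mass cancelling out:

```python
# Maximum cornering speed on a flat road: friction supplies Fc, so
# mu * m * g = m * v^2 / r  ->  v_max = sqrt(mu * g * r).
# The mass m cancels, so the limit is the same for light and heavy cars.
import math

g = 9.81      # m/s^2
mu = 0.7      # dry-road friction coefficient (assumed)
r = 50.0      # corner radius in metres (assumed)

v_max = math.sqrt(mu * g * r)                             # m/s
print(round(v_max, 1), "m/s =", round(v_max * 3.6, 1), "km/h")
```

Halving the speed quarters the required centripetal force, which is the quantitative reason slowing down works so well.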
#### Cars on banked tracks (ignore friction)

A banked road is a road that is tilted towards the centre of the turn or circular path. This results in a net force that accelerates the car in the direction of the corner, helping vehicles travel at higher speeds around corners without skidding. Since the car is moving around a corner, we can model this as an arc of a circle, thus Fnet = Fc.

This situation looks similar to, but is different from, an inclined plane question, in that the normal force is the HYPOTENUSE (mg = Ncosθ) rather than a vertical side (N = mgcosθ), i.e. the triangles are different.

#### Why is this?

On an inclined plane, as the angle increases the normal force decreases, as more and more of the weight is supported by friction. But on a banked curve, as the slope increases the normal force needed increases, as the centripetal force increases. E.g. on a racetrack this corresponds to the steepest banked curves being at the sharpest/tightest corners. The sharper the corner, the more centripetal force is required to make the turn, requiring more banking and more normal force.

#### Forces acting

• Normal force (N) – tilted towards the centre; its component towards the centre contributes to Fc
• Weight force (w)

If the car is turning at the design speed, the horizontal component of the normal force provides Fc rather than friction (f).

### Design Speed

Design speed is the speed required for the car to not slide up or down the banked road. It requires a balance between the forces up the bank and the forces towards the centre. If the speed is too high, the car will start moving up the bank; if the car is too slow, it will slide towards the centre. It is given by: √(rg tanθ) NOT ON FORMULA SHEET

### Conical Motion

The string is at an angle θ from the vertical. The mass swings in a circular trajectory, drawing a circle with radius (r) at distance (h) below the mount. The horizontal component of tension provides the centripetal force.
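A quick conical-pendulum calculation (string length, angle and bob mass are assumed values): T cosθ balances mg and T sinθ supplies mv²/r, giving v = √(gr tanθ), the same form as the design-speed result:

```python
# Conical pendulum: resolve tension T into components.
#   vertical:   T * cos(theta) = m * g
#   horizontal: T * sin(theta) = m * v^2 / r   (centripetal)
# Dividing the two equations gives v = sqrt(g * r * tan(theta)).
import math

g, m = 9.81, 0.5                        # bob mass assumed
L_str, theta = 1.2, math.radians(30)    # string length and angle assumed

r = L_str * math.sin(theta)             # radius of the horizontal circle
v = math.sqrt(g * r * math.tan(theta))  # speed from the two force balances
T = m * g / math.cos(theta)             # string tension
print(round(r, 3), round(v, 3), round(T, 3))
```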
#### Forces acting

• Tension of the string, whose horizontal component satisfies T sinθ = mv²/r
• Weight force (w = mg) down

#### Effects of increasing tension

• The vertical component of T remains constant $\rightarrow$ it balances the downward weight force $\rightarrow$ angle increases (direct)
• The horizontal component increases $\rightarrow$ centripetal force increases $\rightarrow$ angle increases (direct)

#### Total energy and work done on an object undergoing UCM

Kinetic energy K = ½mv²

When an object is undergoing UCM, the magnitude of its velocity (v) is constant. Thus, its kinetic energy is also constant.

## Non-Uniform Circular Motion

Apparent weight is equal to the normal force (N or FN) acting on you. Due to Newton's 3rd Law, where each force has an equal and opposite reaction force, the bigger the normal force the heavier you feel.

Using Newton's 2nd law, Fnet = mac = mv²/r: the net force provides the centripetal force.

Thus, at the top of a hump you *feel* lighter, as the normal force (N or FN) on you has decreased, and at the bottom of a dip you *feel* heavier, as the normal force increases.

#### Loop de Loop

As per Newton's 2nd law, Fnet = ma; thus, in the radial direction, Fnet = mv²/r.

At the top, both mg and N act in the same direction (towards the centre):

Top: Fnet = mg + N = mv²/r $\rightarrow$ N = mv²/r – mg, thus you feel lighter

Bottom: Fnet = N – mg = mv²/r $\rightarrow$ N = mv²/r + mg, thus you feel heavier

## Mechanical Energy

Mechanical energy = kinetic energy (K) + potential energy (U)

Mechanical energy is always conserved unless work is done by an external force. Sometimes energy is transformed into light, heat or sound.

K before + U before = K after + U after

## Work

Work is the transfer of energy from one object to another, or the transformation of energy from one form to another. A force does work on an object when it causes a displacement in the direction of the force.
W = Fs (if the force is parallel to the displacement) = Fscosθ (in general), where s is displacement (m), F is force (N) and θ is the angle between the force and displacement vectors.

## Torque

To make an object rotate, a torque (τ) needs to be applied. A force acts to provide this turning effect. A torque is due to a force acting on an object at a distance (r) from the pivot point (axis of rotation).

τ = rF (force perpendicular to the lever arm) or τ = rFsinθ, where r is the lever arm length from the pivot to the point where the force is applied (m), F is force (N) and θ is the angle between the lever arm and the applied force. The unit is the newton metre (N m).

An object can orbit (external axis), e.g. the Earth orbiting the Sun, or spin (internal axis), e.g. the Earth spinning on its axis.

Torque is proportional to and causes angular acceleration in rotational motion: $\color{orange}{\frac{\Delta\omega}{t}\propto\tau}$

Torque is applied whenever there is a force acting tangentially to rotational motion:

• Torque will increase angular velocity if the tangential component of the force is in the **same direction** as the velocity
• Torque will decrease angular velocity if the tangential component of the force is in the opposite direction to the velocity

An object in rotational equilibrium has no net external torque. This may mean that the object is not rotating, or that it is rotating at constant angular velocity.

## Motion in Gravitational Fields

##### IQ: How does the force of gravity determine the motion of planets and satellites?

A gravitational field is a region where an object with mass experiences a force of attraction towards a larger mass. Earth's gravitational field strength changes with radius: g ∝ 1/r²

The Earth's gravitational field is given by: g = GME/rE² where g is gravitational field strength, G is the universal gravitational constant, ME is the mass of the Earth and rE is the radius of the Earth (more generally, r is the distance from the centre of the Earth).
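Plugging textbook values into g = GM/r² as a check (the constants are rounded, so expect approximately 9.81 rather than exactly):

```python
# Check of g = GM/r^2 with commonly quoted values for G and the Earth.
G = 6.67e-11        # N m^2 kg^-2
M_E = 5.97e24       # kg
r_E = 6.371e6       # m

g_surface = G * M_E / r_E**2
# At altitude h, use r = r_E + h (the "add the Earth's radius" step).
g_iss = G * M_E / (r_E + 400e3) ** 2    # ~400 km, roughly ISS altitude
print(round(g_surface, 2), round(g_iss, 2))
```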
##### Derivation

Fg = mg, so g = Fg/m = GMm/r² × 1/m = GM/r²; therefore g = GM/r²

note: when a question specifies altitude, to find the radius (r) you must add the radius of the Earth.

### Factors affecting gravitational field strength

• The larger the mass of the planet (M), the greater the gravitational field strength (direct)
• The larger the r, the smaller the g (inverse square): doubling r decreases g by a factor of 4

Acceleration due to gravity is equal in magnitude to the gravitational field strength.

## Newton's Law of Universal Gravitation

Newton's Law of Universal Gravitation allows us to calculate the gravitational attraction between two objects with mass. It states that every object in the universe attracts every other object with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres. It is given by:

F = GMm/r² where F is gravitational force (N), G is the universal gravitational constant (6.67 × 10⁻¹¹ N m² kg⁻²), M is the mass of the central object (kg), m is the mass of the orbiting object (kg), r is the distance of separation centre to centre, i.e. the radius of the orbit (m)

## Orbital motion of Planets and Satellites

### Kepler's Laws of Planetary Motion

Johannes Kepler, a student of Tycho Brahe (royal astronomer to the King of Denmark), formed three empirical laws from years of recording and observing the trajectories of planets and stars.

#### Law of Ellipses

Each planet moves in an ellipse (oval) with the Sun at one focus, so the Sun is closer to one side of the orbit. Kepler identified the orbits of the planets as slightly elliptical.

#### Law of Areas

The radius line of each planet sweeps out equal areas in equal times. Planets travel faster when closer to the Sun, covering a longer arc length in a given time, and slower when further from the Sun, covering a shorter arc length. This is a consequence of the Sun not being at the centre of the ellipse.
The time to travel from Q to P = the time to travel from S to R; therefore area QOP = area ROS.

#### Law of Periods

Kepler's third law was calculated in 1619 from Tycho Brahe's observations of planetary motion. Kepler found there is a relationship between the period (T) of a satellite's orbit and its radius (r). The square of the period (T) of the planet is proportional to the cube of its average distance (r) from the Sun (average, because the distance varies over the elliptical orbit). The law is quantified by:

r³/T² = GM/(4π²) where r is the orbital radius centre to centre (m), T is the period of the orbit (the time for the planet to go around once) (s), G is the gravitational constant, M is the mass of the central object (kg).

This asserts that the ratio r³/T² = k is the same for all planets: 4π²/GM is a constant for all satellites orbiting a mass M, so T²earth/r³earth = T²mars/r³mars.

#### Applications

The orbital motion of planets and artificial satellites (launched by humans, orbiting a larger mass, e.g. a GPS satellite) can be modelled and explained using gravitational fields. We can calculate star masses, orbital velocity or the orbital period of planets and artificial satellites. To answer these questions, we combine UCM with Newton's Law of Universal Gravitation.

In orbit, gravitational force provides the centripetal force, hence: F = GMm/r² = mv²/r = 4π²rm/T² = mg (linking them all).

Thus:

GMm/r² = mv²/r
GM/r = v²
v = √(GM/r)

which can be used to find the centripetal acceleration ac = v²/r = GM/r², or to calculate the mass (M) of a star from the orbital period (T) and radius (r) of a planet orbiting it, since 2πr/T gives the linear velocity.
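The chain above can be checked numerically (the orbit altitude is assumed; constants are the usual textbook ones): v = √(GM/r), T = 2πr/v, and the ratio r³/T² should reproduce GM/4π² identically:

```python
# Orbital-motion check: velocity, period and Kepler's third law for a
# satellite at an assumed ~400 km altitude around the Earth.
import math

G, M = 6.67e-11, 5.97e24
r = 6.771e6                      # Earth's radius + ~400 km (assumed orbit)

v = math.sqrt(G * M / r)         # orbital velocity
T = 2 * math.pi * r / v          # period: ~92 min for low Earth orbit
lhs = r**3 / T**2
rhs = G * M / (4 * math.pi**2)   # Kepler's third law constant
print(round(v), round(T / 60, 1), lhs / rhs)
```

The ratio lhs/rhs comes out as 1 because T = 2πr/v with v = √(GM/r) makes r³/T² = GM/4π² an algebraic identity.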
#### Mass of central object (M): from Kepler's Third Law, in kg

r³/T² = GM/(4π²), so M = 4π²r³/(GT²)

#### Mass of orbiting object (m)

F = GMm/r² = mv²/r, so m = Fcr/v², or m = Fcr²/(GM) $\rightarrow$ if you don't have the orbital velocity

#### Orbital period (T)

r³/T² = GM/(4π²), so T = √(4π²r³/(GM)), or T = 2πr/v

#### Orbital radius (r)

r³/T² = GM/(4π²), so r = ∛(GMT²/(4π²)), where r = orbital radius (m) = Earth's radius + altitude

#### Orbital velocity (v)

v = 2πr/T, or v = √(GM/r), or v = √(Fcr/m)

#### Gravitational Potential Energy in Orbit

Potential energy is defined as the work done by an upward external force on an object as it is lowered from one point to another at constant speed. It is given by:

U = −GMm/r where U is potential energy (J), G is the universal gravitational constant, M is the mass of the central mass (kg), m is the mass of the smaller mass (kg), r is the distance centre to centre between the two masses (m)

The negative sign means the gravitational force is attractive. If you want to move two objects further apart you have to do positive work, i.e. add energy by applying a force opposite to the field. The gravitational potential energy of two masses is proportional to the product of the masses and inversely proportional to their separation, and it is negative (attractive).

Gravitational potential energy can also be defined by the work done in moving an object against the gravitational field from the surface of the Earth to a height (h) above. It is given by:

U = mgh where m is mass (kg), g is gravitational field strength (or acceleration due to gravity), h is height (m). This is derived from W = Fs = mgh, the height being the displacement.

Comparison:

| | U = mgh | U = −GMm/r |
| --- | --- | --- |
| Definition | Work done in moving a mass against the gravitational field from the surface of the Earth to a height h above | Work done by an upward external force in lowering a mass from infinity to a distance r from the centre of the Earth without acceleration |
| When is it used? | Near the Earth's surface, where h << rE so g is constant | For large height changes, when r > rE or far from the Earth's surface |
| As r increases… | U approaches infinity | U approaches 0 |
| When U = 0 | At the surface of the Earth | An infinite distance from the centre of the Earth |

### Total Energy of a Planet in Its Orbit

Total energy is equivalent to mechanical energy, which is K + U. The kinetic energy of a satellite in orbit is given by:

K = ½mv² = ½m(√(GM/r))² = GMm/(2r)

Thus, the total energy is given by E = K + U = −GMm/r + GMm/(2r) = −GMm/(2r) NOT ON FORMULA SHEET

#### Near Earth and Geostationary Orbits

Satellites in orbit around the Earth are classified as low, medium or high orbit.

1. Low Orbit (180 km – 2000 km altitude): the most common satellite orbit (Hubble telescope, 540 km; International Space Station, 400 km; spy, military and mapping satellites).
   • Orbital period T ≈ 90 min (range 80–120 min)
   • The whole of the Earth's surface can be quickly covered
2. Medium Orbit (2000 km – 36000 km altitude): used by global positioning systems (GPS).
   • Orbital period T ≈ 3 hrs (range 3–22 hrs)
3. High Orbit (36000+ km altitude): used by communications satellites (e.g. Optus), deep space weather imaging, etc.

A geostationary satellite has a period (T) of 24 hrs (rotating with the Earth, thus 'stationary', as it stays above the same point on the Earth's surface over the equator). Used for communications, e.g. satellite phones, TV. A geosynchronous satellite orbits at the same rate as the Earth's spin: it has the same rotational period, but its orbit may not be perfectly circular and may have an orbital inclination. A geostationary satellite is the special case of a geosynchronous satellite where the orbit is circular and the orbital inclination is 0.

## Energy Changes that Occur when Satellites Move Between Orbits

When an object moves from a high orbit to a lower orbit, it moves through an increasing gravitational field strength, as the gravitational force (Fg) on the object increases as it approaches Earth.
The change in gravitational U is given by: ∆U = U_final – U_initial, in joules.

When one object moves within the gravitational field of a second object:

• Isolated system, moving with the field: work is done by the field; potential energy decreases while kinetic energy increases.
• Isolated system, moving against the field: work is done on the field; U increases, K decreases.
• Open or closed system, moving with the field: work is done by an external agent and by the field; U decreases, K increases.
• Open or closed system, moving against the field: work is done by an external agent and on the field; U increases, K may increase or decrease (depending on which does more work).

Gravitational U is a binding energy. To escape from the Earth's gravitational field, an object of mass m must do work equal to or greater than the magnitude of its gravitational U. This work is given by:

W = Fs = F_c × s = GMm/r² × r = GMm/r

### Escape Velocity

Escape velocity is reached when a rocket has enough kinetic energy (K) to escape the Earth's gravitational field. It is the minimum velocity for an object at the surface of the Earth to escape to space and not be pulled back. Earth's escape velocity is 11 200 m/s (memorise).

For a satellite to escape the gravitational field, K must at least equal the magnitude of U:

½mv² = GMm/r
v² = 2GM/r

Thus, the minimum velocity for a satellite to escape is given by:

v_escape = √(2GM/r)    NOT ON FORMULA SHEET

where v_escape is the escape velocity (m/s), M is the mass of the central body (kg), G is the universal gravitational constant, and r is the radius (centre to centre) (m).

Note: do not confuse escape velocity with orbital velocity.

#### Factors that influence v_escape

• Smaller radius (r) → higher escape velocity needed (inverse square root)
• Larger central mass (M) → higher escape velocity needed (direct square root)
• Escape velocity IS INDEPENDENT of the mass of the launched object, i.e. regardless of how heavy the object is, the escape velocity is the same for all objects.

Satellites are typically launched from close to the equator towards the east (the same direction as the Earth's rotation, so the rotation contributes to the kinetic energy of the rocket).
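The formulas above can be sanity-checked numerically. A short Python sketch (standard textbook values assumed for G, the Earth's mass and radius) computing the geostationary radius from Kepler's third law and the escape velocity from the surface:

```python
import math

G = 6.674e-11    # universal gravitational constant, N m^2 kg^-2
M = 5.972e24     # mass of the Earth, kg
r_E = 6.371e6    # radius of the Earth, m

# Geostationary orbit: r = cube root of (G M T^2 / 4 pi^2), with T = 24 h
T = 24 * 3600
r_geo = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = r_geo - r_E           # height above the surface, ~36 000 km

# Escape velocity from the surface: v = sqrt(2GM/r), independent of the launched mass
v_escape = math.sqrt(2 * G * M / r_E)
v_orbit = math.sqrt(G * M / r_E)  # orbital velocity at the same r, smaller by a factor sqrt(2)

print(r_geo, altitude, v_escape, v_orbit)
```

The computed escape velocity comes out just below the 11 200 m/s figure quoted above, and the geostationary altitude near 36 000 km matches the high-orbit boundary.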
http://www.heldermann.de/JCA/JCA26/JCA261/jca26014.htm
Journal of Convex Analysis 26 (2019), No. 1 [final page numbers not yet available]. Copyright Heldermann Verlag 2019

On Representing and Hedging Claims for Coherent Risk Measures

Saul Jacka, Dept. of Statistics, University of Warwick, Coventry CV4 7AL, United Kingdom, [email protected]

Seb Armstrong, Dept. of Statistics, University of Warwick, Coventry CV4 7AL, United Kingdom, [email protected]

Abdelkarem Berkaoui, College of Sciences, Al-Imam Mohammed Ibn Saud Islamic University, P. O. Box 84880, Riyadh 11681, Saudi Arabia, [email protected]

[Abstract] \def\cF{\mathcal{F}} We provide a dual characterisation of the weak$^*$-closure of a finite sum of cones in $L^\infty$ adapted to a discrete time filtration $\cF_t$: the $t^{th}$ cone in the sum contains bounded random variables that are $\cF_t$-measurable. Hence we obtain a generalisation of F. Delbaen's m-stability condition [{\it The structure of m-stable sets and in particular of the set of risk neutral measures}, in: In Memoriam Paul-Andr{\'e} Meyer, Springer, Berlin et al. (2006) 215--258] for the problem of reserving in a collection of num\'eraires {\bf V}, called {\bf V}-m-stability, provided these cones arise from acceptance sets of a dynamic coherent measure of risk [see P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath: {\it Thinking coherently}, Risk 10 (1997) 68--71; {\it Coherent measures of risk}, Math. Finance 9(3) (1999) 203--228]. We also prove that {\bf V}-m-stability is equivalent to time-consistency when reserving in portfolios of {\bf V}, which is of particular interest to insurers.

Keywords: Coherent risk measures, m-stability, time-consistency, Fatou property, reserving, hedging, representation, pricing mechanism, average value at risk.

MSC: 91B24, 46N10, 91B30, 46E30, 91G80, 60E05, 60G99, 90C48.
https://apboardsolutions.in/ap-board-9th-class-maths-solutions-chapter-2-ex-2-4/
# AP Board 9th Class Maths Solutions Chapter 2 Polynomials and Factorisation Ex 2.4

AP State Syllabus AP Board 9th Class Maths Solutions Chapter 2 Polynomials and Factorisation Ex 2.4 Textbook Questions and Answers.

Question 1. Determine which of the following polynomials has (x + 1) as a factor.

i) x³ – x² – x + 1
Solution:
f(–1) = (–1)³ – (–1)² – (–1) + 1 = –1 – 1 + 1 + 1 = 0
∴ (x + 1) is a factor.

ii) x⁴ – x³ + x² – x + 1
Solution:
f(–1) = (–1)⁴ – (–1)³ + (–1)² – (–1) + 1 = 1 + 1 + 1 + 1 + 1 = 5
∴ (x + 1) is not a factor.

iii) x⁴ + 2x³ + 2x² + x + 1
Solution:
f(–1) = (–1)⁴ + 2(–1)³ + 2(–1)² + (–1) + 1 = 1 – 2 + 2 – 1 + 1 = 1
∴ (x + 1) is not a factor.

iv) x³ – x² – (3 – √3)x + √3
Solution:
f(–1) = (–1)³ – (–1)² – (3 – √3)(–1) + √3 = –1 – 1 + 3 – √3 + √3 = 1
∴ (x + 1) is not a factor.

Question 2. Use the factor theorem to determine whether g(x) is a factor of f(x) in each of the following cases:
[Factor theorem: if f(x) is a polynomial and f(a) = 0, then (x – a) is a factor of f(x), a ∈ R]

i) f(x) = 5x³ + x² – 5x – 1; g(x) = x + 1
Solution:
g(x) = x + 1 = x – a, say, so a = –1
f(a) = f(–1) = 5(–1)³ + (–1)² – 5(–1) – 1 = –5 + 1 + 5 – 1 = 0
∴ (x + 1) is a factor of f(x).

ii) f(x) = x³ + 3x² + 3x + 1; g(x) = x + 1
Solution:
g(x) = x + 1 = x – a, so a = –1
f(a) = f(–1) = (–1)³ + 3(–1)² + 3(–1) + 1 = –1 + 3 – 3 + 1 = 0
∴ g(x) is a factor of f(x).

iii) f(x) = x³ – 4x² + x + 6; g(x) = x – 2
Solution:
g(x) = x – 2 = x – a, so a = 2
f(a) = f(2) = 2³ – 4(2)² + 2 + 6 = 8 – 16 + 2 + 6 = 0
∴ g(x) is a factor of f(x).

iv) f(x) = 3x³ + x² – 20x + 12; g(x) = 3x – 2
Solution:
g(x) = 3x – 2 is zero at x = 2/3, so a = 2/3
f(2/3) = 3(2/3)³ + (2/3)² – 20(2/3) + 12 = 8/9 + 4/9 – 40/3 + 12 = 0
∴ g(x) is a factor of f(x).

v) f(x) = 4x³ + 20x² + 33x + 18; g(x) = 2x + 3
Solution:
g(x) = 2x + 3 is zero at x = –3/2, so a = –3/2
f(–3/2) = 4(–3/2)³ + 20(–3/2)² + 33(–3/2) + 18 = –27/2 + 45 – 99/2 + 18 = 0
∴ g(x) is a factor of f(x).

Question 3. Show that (x – 2), (x + 3) and (x – 4) are factors of x³ – 3x² – 10x + 24.
Solution:
Given f(x) = x³ – 3x² – 10x + 24. To check whether (x – 2), (x + 3) and (x – 4) are factors of f(x), evaluate f(2), f(–3) and f(4).
f(2) = 2³ – 3(2)² – 10(2) + 24 = 8 – 12 – 20 + 24 = 0
∴ (x – 2) is a factor of f(x).
f(–3) = (–3)³ – 3(–3)² – 10(–3) + 24 = –27 – 27 + 30 + 24 = 0
∴ (x + 3) is a factor of f(x).
f(4) = 4³ – 3(4)² – 10(4) + 24 = 64 – 48 – 40 + 24 = 88 – 88 = 0
∴ (x – 4) is a factor of f(x).

Question 4. Show that (x + 4), (x – 3) and (x – 7) are factors of x³ – 6x² – 19x + 84.
Solution:
Let f(x) = x³ – 6x² – 19x + 84. To verify whether (x + 4), (x – 3) and (x – 7) are factors of f(x), we use the factor theorem: evaluate f(–4), f(3) and f(7).
f(–4) = (–4)³ – 6(–4)² – 19(–4) + 84 = –64 – 96 + 76 + 84 = 0
∴ (x + 4) is a factor of f(x).
f(3) = 3³ – 6(3)² – 19(3) + 84 = 27 – 54 – 57 + 84 = 0
∴ (x – 3) is a factor of f(x).
f(7) = 7³ – 6(7)² – 19(7) + 84 = 343 – 294 – 133 + 84 = 427 – 427 = 0
∴ (x – 7) is a factor of f(x).

Question 5. If both (x – 2) and (x – 1/2) are factors of px² + 5x + r, show that p = r.
Solution:
Let f(x) = px² + 5x + r. As (x – 2) and (x – 1/2) are factors of f(x), we have f(2) = 0 and f(1/2) = 0.
f(2) = p(2)² + 5(2) + r = 4p + 10 + r = 0
⇒ 4p + r = –10 ……(1)
f(1/2) = p(1/2)² + 5(1/2) + r = p/4 + 5/2 + r = 0; multiplying by 4,
⇒ p + 10 + 4r = 0
⇒ p + 4r = –10 ……(2)
From (1) and (2): 4p + r = p + 4r
4p – p = 4r – r
3p = 3r
∴ p = r

Question 6. If (x² – 1) is a factor of ax⁴ + bx³ + cx² + dx + e, show that a + c + e = b + d = 0.
Solution:
Let f(x) = ax⁴ + bx³ + cx² + dx + e. As (x² – 1) is a factor of f(x) and x² – 1 = (x + 1)(x – 1), we have f(1) = 0 and f(–1) = 0.
f(1) = a + b + c + d + e = 0 ……(1)
f(–1) = a – b + c – d + e = 0
⇒ a + c + e = b + d
Substituting this value in equation (1):
a + c + e + b + d = 0
b + d + b + d = 0
2(b + d) = 0 ⇒ b + d = 0
∴ a + c + e = b + d = 0

Question 7. Factorise
i) x³ – 2x² – x + 2
Solution:
Let f(x) = x³ – 2x² – x + 2. By trial, we find
f(1) = 1³ – 2(1)² – 1 + 2 = 1 – 2 – 1 + 2 = 0
∴ (x – 1) is a factor of f(x).
[by the factor theorem] Now dividing f(x) by (x – 1):
f(x) = (x – 1)(x² – x – 2)
= (x – 1)[x² – 2x + x – 2]
= (x – 1)[x(x – 2) + 1(x – 2)]
= (x – 1)(x – 2)(x + 1)

ii) x³ – 3x² – 9x – 5
Solution:
Let f(x) = x³ – 3x² – 9x – 5. By trial,
f(–1) = (–1)³ – 3(–1)² – 9(–1) – 5 = –1 – 3 + 9 – 5 = 0
∴ (x + 1) is a factor of f(x). [∵ by the factor theorem]
Now dividing f(x) by (x + 1):
f(x) = (x + 1)(x² – 4x – 5)
But x² – 4x – 5 = x² – 5x + x – 5 = x(x – 5) + 1(x – 5) = (x – 5)(x + 1)
∴ f(x) = (x + 1)(x + 1)(x – 5)

iii) x³ + 13x² + 32x + 20
Solution:
Let f(x) = x³ + 13x² + 32x + 20.
f(–1) = (–1)³ + 13(–1)² + 32(–1) + 20 = –1 + 13 – 32 + 20 = 33 – 33 = 0
∴ (x + 1) is a factor of f(x). [∵ by the factor theorem]
Now dividing f(x) by (x + 1):
f(x) = (x + 1)(x² + 12x + 20)
But x² + 12x + 20 = x² + 10x + 2x + 20 = x(x + 10) + 2(x + 10) = (x + 10)(x + 2)
∴ f(x) = (x + 1)(x + 2)(x + 10)

iv) y³ + y² – y – 1
Solution:
Let f(y) = y³ + y² – y – 1.
f(1) = 1³ + 1² – 1 – 1 = 0
∴ (y – 1) is a factor of f(y).
Now dividing f(y) by (y – 1):
f(y) = (y – 1)(y² + 2y + 1) = (y – 1)(y + 1)²

Question 8. If ax² + bx + c and bx² + ax + c have a common factor x + 1 then show that c = 0 and a = b.
Solution:
Let f(x) = ax² + bx + c and g(x) = bx² + ax + c. Given that (x + 1) is a common factor of both f(x) and g(x),
f(–1) = g(–1)
⇒ a(–1)² + b(–1) + c = b(–1)² + a(–1) + c
⇒ a – b + c = b – a + c
⇒ a + a = b + b
⇒ 2a = 2b ⇒ a = b
Also f(–1) = a – b + c = 0
⇒ b – b + c = 0 ⇒ c = 0

Question 9. If x² – x – 6 and x² + 3x – 18 have a common factor x – a then find the value of a.
Solution:
Let f(x) = x² – x – 6 and g(x) = x² + 3x – 18. Given that (x – a) is a factor of both f(x) and g(x),
f(a) = g(a) = 0
⇒ a² – a – 6 = a² + 3a – 18
⇒ –4a = –18 + 6
⇒ –4a = –12
∴ a = 3

Question 10. If (y – 3) is a factor of y³ – 2y² – 9y + 18, then find the other two factors.
Solution:
Let f(y) = y³ – 2y² – 9y + 18. Given that (y – 3) is a factor of f(y).
Dividing f(y) by (y – 3):
f(y) = (y – 3)(y² + y – 6)
But y² + y – 6 = y² + 3y – 2y – 6 = y(y + 3) – 2(y + 3) = (y + 3)(y – 2)
∴ f(y) = (y – 2)(y – 3)(y + 3)
The other two factors are (y – 2) and (y + 3).
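All of these factor-theorem checks follow one pattern: evaluate f at the candidate root and test for zero. A small Python sketch (the helper names are my own) that mechanises the check, shown on Question 3:

```python
def evaluate(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] at x, via Horner's method."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def is_factor(coeffs, a):
    """By the factor theorem, (x - a) is a factor of f iff f(a) = 0."""
    return evaluate(coeffs, a) == 0

# Question 3: f(x) = x^3 - 3x^2 - 10x + 24, claimed factors (x - 2), (x + 3), (x - 4)
f = [1, -3, -10, 24]
print([is_factor(f, a) for a in (2, -3, 4)])  # [True, True, True]
```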
https://cs.stackexchange.com/questions/83903/the-directionality-of-reductions
# The 'directionality' of reductions? I've been finding myself a bit confused with the direction of reductions used to show that certain languages are not recursive. For example, let us say we want to determine if the Halting Problem ($HALT_{TM}$)is undecidable. I know we can assume that it is decidable and then try to build a decider for the acceptance problem, which is impossible. But though we are using the Acceptance Problem ($A_{TM}$) to help solve the decidability of the Halting Problem, we have reduced the Acceptance Problem to the Halting Problem, not the other way around, right? I sometimes get a little bit confused when I encounter questions that ask me to deploy a reduction; I will be asked to reduce language $x$ to $y$, but what that means is that $y$ is a simpler instance of a problem of $x$, right (or at least should be)? I'm assuming it's impossible to reduce a simpler version of a problem to a more complex version of a problem, am I right in believing that? • If we believe that it's hard to build a ladder to the moon, then we must also believe that it's at least as hard to build a device that duplicates any object -- since if such a device was easy to build, then we could put a regular ladder in it, attach the resulting two ladders together to produce a double-height ladder, and then repeat this process a few times, and thus easily solve the problem we assumed to be hard. Here I've reduced the problem of building a moon-ladder to the problem of building an object duplicator to show that the latter is at least as hard as the former. Nov 14 '17 at 14:38 • @j_random_hacker That analogy seems rather convoluted and it quickly seems to break down. I could just use a regular ladder factory to get my ladders (so a duplicator is not necessary) and, after I've stuck a few ladders together, the whole construction will be so flimsy that it will collapse under its own weight (so a duplicator is not sufficient). 
Nov 14 '17 at 15:57 • @DavidRicherby: Granted the structural considerations are a big problem, no argument there :) Ignoring those, though, the fact that a ladder factory suffices is not an issue: to show problem A is at least as hard as problem B it's enough to show all instances of B can be solved by transforming to instances of A, and thus solving A solves B too; the existence of other solutions for B is irrelevant. (Indeed the fact an object duplicator can be used to implement a ladder factory means the "duplicator" problem is at least as hard as the "ladder factory" problem, which seems a valid conclusion!) Nov 14 '17 at 16:31 • Possible duplicate of When problem A reduces to problem B, which problem is more complex? Nov 14 '17 at 16:33 • @DavidRicherby: By writing "to the moon", I guess I drew attention to the flimsiest part of the analogy (namely, the flimsiness of the ladders). And without room to define "hard", I wanted a technique needing "a linear amount of work" or worse (like your ladder factory) to qualify as "hard", and "a logarithmic amount of work" (like my duplicator) to qualify as "easy". Perhaps this analogy doesn't work, but I think some kind of somewhat accurate, somewhat realistic analogy should be possible and helpful. Nov 14 '17 at 17:50 Don't worry – everybody gets confused by the direction of reductions. Even people who've been working in algorithms and complexity for decades occasionally have a, "Wait, were we supposed to be reducing $A$ to $B$ or $B$ to $A$?" moment. Reducing $A$ to $B$ produces a statement of the form "If I could solve $B$, then I'd also know how to solve $A$". "Solve" in this sense could mean "compute using any Turing machine", or "compute in polynomial time" or whatever other notion of solution your context requires. This may seem counterintuitive, since "$A$ reduces to $B$" implies that solving $B$ is at least as hard as solving $A$, so you haven't reduced the difficulty. 
However, you can think of it as reducing the number of problems you need to solve. Imagine that, at the start of the day, your goals were to find an algorithm for $A$ and an algorithm for $B$. Well, now that you've found a reduction from $A$ to $B$, you've reduced your goals to just finding an algorithm for $B$. • ""If I could solve B, then I'd also know how to solve A"" - Also, if you know solving A is hard, then you also know solving B is at least as hard. Nov 14 '17 at 18:16 • A physicist and a mathematician were each asked to remove two nails, one of them punched all the way into the wall, the other just halfway. The physicist first pulled out the one that was halfway in and then, after toiling some time, managed to pull out the second one. The mathematician started with the one that was all the way in the wall, since it was more interesting. After some considerable time and effort, he managed to get it out. Then he looked at the other one, and uttering the words, "This can be simplified to an already solved case," he punched it all the way into the wall. Nov 15 '17 at 2:44 No. When $A$ reduces to $B$, it intuitively means that $A$ is simpler than $B$, not the other way around. More practically: assume you want to check $x \in A$. Instead of doing that in some way, you transform $x$ according to some (reduction) function $y=f(x)$. You now have to check $y \in B$. You haven't yet solved the initial problem $x \in A$, since now you are left with another problem to solve: $y \in B$. This means that you have reduced the initial problem to another one. It might seem counter-intuitive at first that we reduce easy things to more complex ones. Indeed, pragmatically, why should one wanting to solve an easy task reducing themselves to an harder task? Wouldn't we like to reduce our task to an even easier task, instead? However, the truth is: if we could solve a hard task $A$ reducing it to an easy task $B$, this means that $A$ was not really harder than $B$. 
Indeed, if solving $A$ can be achieved by applying the reduction function, and then solving the "easier" $B$, it means that $A$ was easier than $B$ in the first place. A reduction from $$A$$ to $$B$$ is something that does part of the work needed to do $$A$$ and what remains to do is $$B$$. For example, "getting food" can be reduced to "cooking" by "getting ingredients". This means that "cooking" is harder than "getting food": Anyone that can "cook" can "get food" (assuming that the reduction "getting ingredients" always works, for example in the presence of a place giving ingredients for free). Drawing things tend to make them easier to understand: You want to build the blue box (which represents an algorithm that takes an input $$x$$ and decides if $$x\in A$$). The reduction is then the red box, and once you have it, the only thing that remains to do in order to build the blue box is building the green box. So if you have a red box (that reduces $$A$$ to $$B$$), building the green box (that decides $$B$$) is at least as hard as building the blue one (that decides $$A$$): If you have a green box it is very easy to build a blue one, while if you build a blue box you may not be able to build a green box. Note that the reasons for the two conditions defining a reduction appear on this drawing: the fact that $$f$$ is computable means that you can build the red box, while the fact that $$f(x)\in B\iff x \in A$$ means that you can connect the two rightmost wires. • I find the first paragraph rather hard to follow (again, I'm not sure the analogy is very enlightening) but a definite +1 for the rest of the answer. Nov 14 '17 at 18:40
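The directionality discussed in the answers above can be made concrete in a toy program: a many-one reduction from A to B is just a computable transformation f with x ∈ A iff f(x) ∈ B, so any decider for B immediately yields a decider for A. A Python sketch (the languages here are deliberately trivial, chosen purely to illustrate the shape):

```python
def decide_B(n):
    """Decider for B = {even natural numbers}."""
    return n % 2 == 0

def f(n):
    """Reduction function: n is odd iff n + 1 is even, so f reduces A to B."""
    return n + 1

def decide_A(n):
    """Decider for A = {odd natural numbers}, built only from f and decide_B."""
    return decide_B(f(n))

print([decide_A(n) for n in range(5)])  # [False, True, False, True, False]
```

Note which way the arrow points: we reduced A to B, and it is the solver for B that gets called; having decide_A would tell us nothing about B.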
http://www.physicsforums.com/showthread.php?p=3871632
## forces of static friction question

Then from what you stated, wouldn't the force of friction be causing it to move forward, but there would have to be some other friction opposing that motion, or is that taken into account for Ffs? And isn't static friction an amount ranging up to some maximum until the object starts to move? So since there is an acceleration, isn't there only kinetic friction rather than static?

The friction opposing motion is actually the air resistance. As for your other question, it is static friction. There is no kinetic friction involved unless the wheels start slipping (you're peeling out). If your wheels are not slipping, the wheels are not sliding across the road. In fact, if there were no acceleration, there is no friction at all, static or kinetic.

K yes, I think it just clicked that the air was opposing the motion and the static friction was actually opposing this in the other direction, making it the "applied" force. K I have one last question I think. Even though the Ffs is acting in the opposite direction, wouldn't there be a little bit of friction acting in the same direction as the air because the tires are touching the ground? Or is it that, because of the Ffs, the car is able to move forward, and there would only be friction in the same direction as Fair if the brakes were on and the tires were not rotating?

Tire friction opposes the tire's direction of motion. There is no tire friction that goes in the same direction of motion of the tires. Friction never does that.

Kk I'm good now, thank you so much for the help, I really appreciate it. I wasn't thinking about the tires' direction, I was thinking about the car body itself.

Recognitions: Homework Help

Quote by chiuda
1. The problem statement, all variables and given/known data
A 1450 kg car is towing a trailer of mass 454 kg. The force of air resistance is 7471 N [backward]. If the acceleration of both vehicles is 0.225 m/s² [forward], what is the force of static friction on the wheels from the ground?

2.
Relevant equations
I am able to find the normal force (Fn) by multiplying the total mass by the acceleration due to gravity, because the normal force equals the force of gravity (Fg): the car is neither hovering nor sinking into the surface it moves across. I can also find the horizontal net force (Fnet,h) because I have mass and acceleration. But even with these values I am still missing the coefficient of static friction, the applied force (Fapp) and the force of static friction (Ffs).

3. The attempt at a solution
Fnet,vert = Fn + (–Fg)
0 = Fn – Fg
Fn = M × g
Fn = 1904 kg × 9.81 m/s²
Fn = 18678 N
Fnet,horiz = M × A = 1904 kg × 9.81 m/s² = 4.4 N
I then do not really know what to do other than:
U (coefficient of static friction)
Ffs = U × Fn = U × 18678 N
Fnet,horiz = (–F,air) + (–Ffs) + Fapp
4.4 N = –7471 – Ffs + Fapp
7475.4 = –Ffs + Fapp
I am now stuck, as you can see, and have made many other attempts in different ways, but they all ended up fairly similar. ANY HELP AT ALL WILL BE VERY MUCH APPRECIATED!!!! :)

Back to square 1. I think you have forgotten the original question. The two bits I have highlighted red show you the net force.
Fnet = ma
m = 1904 kg
a = 0.225 m/s²
From that you can calculate the net force.
Let's suppose the answer to that is 400 N (you can calculate the real value).
What makes up the net force? It is usually the sum of a forward force and a backwards force. The backwards force is the air resistance – given as 7471 N.
If the net force is indeed 400 N (forward), there must be a forward force of 7871 N (you can again calculate what the real value has to be) acting on the car.
What could be supplying that force? Hint: unless it is due to a gravitational field [that is down, not forward], a magnetic field (no) or an electric field (no), it must be a CONTACT force. What is touching the car – or some part of the car?
Hint 2: It is not the air touching the car – that has already been taken care of in the air-resistance figure of 7471 N [backward].
EDIT: Whoops, while I was composing I think you were finding the solution.
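For reference, the arithmetic the thread is steering toward, as a short Python sketch of the helper's outline:

```python
m = 1450 + 454   # car plus trailer, kg
a = 0.225        # forward acceleration, m/s^2
F_air = 7471     # air resistance, N (backward)

F_net = m * a                # net forward force, from F_net = ma
F_friction = F_net + F_air   # forward static friction from the road on the tyres

print(F_net, F_friction)  # 428.4 N and 7899.4 N
```

The static friction must supply both the net force and enough to cancel the air resistance, which is why it comes out much larger than ma alone.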
https://math.stackexchange.com/questions/4028018/a-puzzling-kkt-for-lmi-vs-scalar-constraint
# A puzzling KKT for LMI vs. scalar constraint

I am trying to understand the KKT conditions for LMI constraints in order to solve my original question in KKT conditions for $\max \log \det(X)$ with LMI constraints. In the meantime, I found a much simpler problem that does not go through when extending the KKT conditions from the scalar case to the vector case. The problem is \begin{align} & \max_{X\succeq0} \log \det(I + B XB^T)\\ \\ & s.t. \begin{pmatrix} AXA^T - X + Q & AXB^T \\ BXA^T& I + BXB^T \end{pmatrix}\succeq0, \end{align} and the goal is to show $$(A - K(X^\ast)B)(A-K(X^\ast)B )^T\prec I$$ where $$K(X)\triangleq AXB^T(I + BXB^T)^{-1}$$. This is an important consequence in control theory since it implies that the optimal solution $$X^*$$ is a stabilizing solution for the corresponding system. From here, I elaborate on my modest progress. The constraints can be written either as one "big LMI": \begin{align} R(X)&=\begin{pmatrix} AXA^T - X + Q & AXB^T &0\\ BXA^T& I + BXB^T &0 \\ 0&0&X \end{pmatrix}\succeq 0 \end{align} or as \begin{align} AXA^T-X+Q - AXB^T(I + BXB^T)^{-1}BXA^T\succeq0\\ X\succeq0, \end{align} where we used the Schur complement along with $$I+BXB^T\succ0$$. The scalar case: By the KKT stationarity condition \begin{align}\label{eq:lagra_1} 0&= -B^2 -\lambda_1+ \lambda_2\{ 1 - A^2 + 2 A^2B^2X(I + BXB^T)^{-1} - A^2B^4X^2(I + BXB^T)^{-2}\} \end{align} where $\lambda_1,\lambda_2\ge0$ are the multipliers corresponding to the two constraints. If $$B\neq0$$, it follows that $$\lambda_2>0$$ and \begin{align} 0&<1 - A^2 + 2 A^2B^2X^\ast(I + BX^\ast B^T)^{-1} - A^2B^4X^{2\ast}(I + BX^\ast B^T)^{-2}\\ &= 1 - (A - K(X^\ast)B)^2 \end{align} where $$K(X)$$ is defined above. If $$B=0$$, the stability condition holds only if $$A<1$$. The combination of these conditions leads to the following known result: there exists a stabilizing solution $$X$$ iff $$(A,B)$$ is detectable.
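As a numerical sanity check of the scalar case (illustrative values only; this is the scalar Riccati fixed-point iteration, not the SDP itself), one can iterate $X \leftarrow AXA + Q - (AXB)^2/(1+BXB)$ and confirm $|A - K(X)B| < 1$ at the limit:

```python
# Hypothetical scalar data: A unstable, B nonzero, so (A, B) is trivially detectable
A, B, Q = 1.2, 1.0, 1.0

X = Q
for _ in range(500):                      # fixed-point iteration of the Riccati map
    K = A * X * B / (1 + B * X * B)      # K(X) = AXB (1 + BXB)^{-1}
    X = A * X * A + Q - K * (B * X * A)  # = AXA + Q - (AXB)^2 / (1 + BXB)

K = A * X * B / (1 + B * X * B)
closed_loop = A - K * B                   # stabilizing iff |A - K(X)B| < 1
print(X, closed_loop)
```

With these values the iteration settles near $X \approx 1.95$ and $|A - KB| \approx 0.41 < 1$, consistent with the scalar conclusion above.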
The vector case using the Schur complement constraint: Following great advice from @LinAlg in the comments, I could make the following progress. The Lagrangian is: \begin{align} L(X,\Lambda_1,\Lambda_2)= - \log \det(I + B XB^T) - \text{Tr}(X\Lambda_1) - \text{Tr}((AXA^T-X+Q - AXB^T(I + BXB^T)^{-1}BXA^T) \Lambda_2). \end{align} From the stationarity condition (is the derivative correct?), \begin{align} 0&=\frac{\partial L(X,\Lambda_1,\Lambda_2)}{\partial X}\\ &= - \text{det}(I+BX B^T)\text{Tr}((I+BX B^T)^{-1}BB^T) - \Lambda_1 \\ &\ \ + (I - A^TA + A^TKB + B^TK^TA - B^TK^TKB) \Lambda_2 \end{align} with $$\Lambda_1,\Lambda_2\succeq0$$. By the primal feasibility constraint $$X\succeq0$$, the first summand is positive so that \begin{align} (I - (A-KB)^T(A-KB))\Lambda_2 \succeq 0 \end{align} The complementary slackness conditions read as: \begin{align} 0&= \Lambda_1X\\ 0&= (AXA^T-X+Q - AXB^T(I + BXB^T)^{-1}BXA^T))\Lambda_2\\ &= [(A-KB)X(A-KB)^T - X + Q + KK^T ]\Lambda_2\\ \end{align} I do not know how to proceed from here, but let me explain where I am aiming to arrive. A necessary condition for the existence of a stabilizing solution is detectability, i.e., if $$Ax = \lambda x$$ for a vector $$x$$ and $$|\lambda|\ge1$$ then $$Bx\neq0$$. Let's try to show this fact by contradiction: assume there exists a vector $$x\neq0$$ such that $$Ax = \lambda x$$ and $$Bx=0$$ with $$|\lambda|\ge1$$. We can now pre- and post-multiply the stationarity condition by $$x^T$$ and $$x$$ to get \begin{align} 0&= - y^Ty -x\Lambda_1 - x^T( )\Lambda_2x \end{align} The vector case using the big LMI: Without loss of generality, the dual variable is \begin{align} Z=\begin{pmatrix} S & U&0\\ U^T & T&0\\ 0&0&W \end{pmatrix}. \end{align} The Lagrangian in this case is: \begin{align} L(X,Z)&= - \log \det(I + B XB^T) - \text{Tr}(R(X)Z).
\end{align} The KKT stationarity condition gives: \begin{align} 0&= - \text{det}(I+BX B^T)\text{Tr}((I+BX B^T)^{-1}BB^T) \\& \ \ -\text{Tr}( ASA^T - S + B^TU^TA + A^TUB + B^TTB + W ) \end{align} and the complementary slackness condition $$R(X)Z=0$$ simplifies to: \begin{align} 0&= (AXA^T - X + Q)S + AXB^T U^T\\ 0&= (AXA^T - X + Q)U + AXB^T T\\ 0&= BXA^T S + (I+BXB^T)U^T\\ 0&= BXA^T U + (I+BXB^T)T\\ 0&= XW \end{align} If $$B=I$$, it follows that $$S\succ0$$ (Assume $$Sx=0$$ for some $$x$$ and conclude that $$x=0$$). For the general case, $$Sx=0$$ implies $$Bx=0 \& \& Wx=0$$. • you just add $-\text{Tr}(\Lambda(AXA^T-X+Q - AXB^T(I + BXB^T)^{-1}BXA^T))$ to the Lagrangian Feb 19, 2021 at 3:33 • That is only true if it holds for all $B \succeq 0$, not just for one specific $B$. You can write $B=UU^T$ and use the cyclic property of the trace to simplify the left hand side to $\sum_{i=1}^p u_i^T (I - (A-BK)(A-BK)^T) u_i$. Feb 19, 2021 at 21:21 • A simpler problem was solved by @LinAlg in <math.stackexchange.com/questions/4034735/…>. I am updating my progress, feel free to contribute. Feb 22, 2021 at 7:00
# Kurt Gödel and computability theory

Logical Approaches to Computational Barriers. Second Conference on Computability in Europe, CiE 2006, Swansea. Proceedings. LNCS 3988 (Springer, Berlin, 2006) 575-583

Although Kurt Gödel does not figure prominently in the history of computability theory, he exerted a significant influence on some of the founders of the field, both through his published work and through personal interaction. In particular, Gödel’s 1931 paper on incompleteness and the methods developed therein were important for the early development of recursive function theory and the lambda calculus at the hands of Church, Kleene, and Rosser. Church and his students studied Gödel 1931, and Gödel taught a seminar at Princeton in 1934. Seen in the historical context, Gödel was an important catalyst for the emergence of computability theory in the mid 1930s.

Preprint

# Hilbert’s program then and now

Dale Jacquette, ed., Philosophy of Logic. Handbook of the Philosophy of Science, vol. 5. (Elsevier, Amsterdam, 2006), 411-447.

Hilbert’s program was an ambitious and wide-ranging project in the philosophy and foundations of mathematics. In order to “dispose of the foundational questions in mathematics once and for all,” Hilbert proposed a two-pronged approach in 1921: first, classical mathematics should be formalized in axiomatic systems; second, using only restricted, “finitary” means, one should give proofs of the consistency of these axiomatic systems. Although Gödel’s incompleteness theorems show that the program as originally conceived cannot be carried out, it had many partial successes, and generated important advances in logical theory and metatheory, both at the time and since. The article discusses the historical background and development of Hilbert’s program, its philosophical underpinnings and consequences, and its subsequent development and influences since the 1930s. 
Preprint

# The epsilon calculus and Herbrand complexity

Studia Logica 82 (2006) 133-155 (with Georg Moser)

Hilbert’s $$\varepsilon$$-calculus is based on an extension of the language of predicate logic by a term-forming operator $$\varepsilon x$$. Two fundamental results about the $$\varepsilon$$-calculus, the first and second epsilon theorem, play a role similar to that which the cut-elimination theorem plays in sequent calculus. In particular, Herbrand’s Theorem is a consequence of the epsilon theorems. The paper investigates the epsilon theorems and the complexity of the elimination procedure underlying their proof, as well as the length of Herbrand disjunctions of existential theorems obtained by this elimination procedure.

Review: Mathematical Reviews 2205042 (2006k:03127) (Mitsuru Yasuhara)

Preprint

# Critical study of Michael Potter’s Reason’s Nearest Kin

## Source

Notre Dame Journal of Formal Logic 46 (2005) 503–513

## Reviewed Work:

Michael Potter, Reason’s Nearest Kin. Philosophies of Arithmetic from Kant to Carnap. Oxford University Press, Oxford, 2000. x + 305 pages

Preprint

# Kurt Gödel, paper on the incompleteness theorems (1931)

Ivor Grattan-Guinness, ed., Landmark Writings in Mathematics (Elsevier, Amsterdam, 2005), 917–925

This entry for the Landmark Writings in Mathematics collection discusses Kurt Gödel’s 1931 paper on the incompleteness theorems, with a special emphasis on the historical and philosophical context.

Preprint

# Decidability of quantified propositional intuitionistic logic and S4 on trees of height and arity ≤ ω

Journal of Philosophical Logic 33 (2004) 155–164.

Quantified propositional intuitionistic logic is obtained from propositional intuitionistic logic by adding quantifiers $$\forall p$$, $$\exists p$$, where the propositional variables range over upward-closed subsets of the set of worlds in a Kripke structure. 
If the permitted accessibility relations are arbitrary partial orders, the resulting logic is known to be recursively isomorphic to full second-order logic (Kremer, 1997). It is shown that if the Kripke structures are restricted to trees of height and width at most $$\omega$$, the resulting logics are decidable. This provides a partial answer to a question by Kremer. The result also transfers to modal S4 and some Gödel-Dummett logics with quantifiers over propositions.

Review: M. Yasuhara (Zentralblatt 1054.03011)

Preprint

# Hilbert’s “Verunglückter Beweis,” the first epsilon theorem, and consistency proofs

History and Philosophy of Logic 25(2) (2004) 79–94.

In the 1920s, Ackermann and von Neumann, in pursuit of Hilbert’s Programme, were working on consistency proofs for arithmetical systems. One proposed method of giving such proofs is Hilbert’s epsilon-substitution method. There was, however, a second approach which was not reflected in the publications of the Hilbert school in the 1920s, and which is a direct precursor of Hilbert’s first epsilon theorem and a certain ‘general consistency result’ due to Bernays. An analysis of the form of this so-called ‘failed proof’ sheds further light on an interpretation of Hilbert’s Program as an instrumentalist enterprise with the aim of showing that whenever a ‘real’ proposition can be proved by ‘ideal’ means, it can also be proved by ‘real’, finitary means.

Review: Dirk Schlimm (Bulletin of Symbolic Logic 11/2 (2005) 247–248)

Preprint

# The practice of finitism: Epsilon calculus and consistency proofs in Hilbert’s Program

Synthese 137 (2003) 211-259.

After a brief flirtation with logicism in 1917–1920, David Hilbert proposed his own program in the foundations of mathematics in 1920 and developed it, in concert with collaborators such as Paul Bernays and Wilhelm Ackermann, throughout the 1920s. 
The two technical pillars of the project were the development of axiomatic systems for ever stronger and more comprehensive areas of mathematics and finitistic proofs of consistency of these systems. Early advances in these areas were made by Hilbert (and Bernays) in a series of lecture courses at the University of Göttingen between 1917 and 1923, and notably in Ackermann’s dissertation of 1924. The main innovation was the invention of the epsilon-calculus, on which Hilbert’s axiom systems were based, and the development of the epsilon-substitution method as a basis for consistency proofs. The paper traces the development of the “simultaneous development of logic and mathematics” through the epsilon-notation and provides an analysis of Ackermann’s consistency proofs for primitive recursive arithmetic and for the first comprehensive mathematical system, the latter using the substitution method. It is striking that these proofs use transfinite induction not dissimilar to that used in Gentzen’s later consistency proof as well as non-primitive recursive definitions, and that these methods were accepted as finitistic at the time.

Note: This is a revised version of Chapter 3 of my dissertation.

Review: John W. Dawson, Jr (Mathematical Reviews 2038463 (2005b:03008))

DOI: 10.1023/A:1026247421383 (Subscription required)

Preprint

# Characterization of the axiomatizable prenex fragments of first-order Gödel logics

In: 33rd International Symposium on Multiple-valued Logic. Proceedings. Tokyo, May 16-19, 2003 (IEEE Computer Society Press, 2003) 175-180 (with Matthias Baaz and Norbert Preining)

The prenex fragments of first-order infinite-valued Gödel logics are classified. It is shown that the prenex Gödel logics characterized by finite and by uncountable subsets of [0, 1] are axiomatizable, and that the prenex fragments of all countably infinite Gödel logics are not axiomatizable. 
Preprint

# Hilbert’s Finitism: Historical, Philosophical, and Metamathematical Perspectives

Dissertation, University of California, Berkeley, Spring 2001

Logic for Programming, Artificial Intelligence, and Reasoning. 8th International Conference, LPAR 2001. Proceedings, LNAI 2250. (Springer, Berlin, 2001) 639-653 (with Christian G. Fermüller and Georg Moser)

A simple model of dynamic databases is studied from a modal logic perspective. A state $$\alpha$$ of a database is an atomic update of a state $$\beta$$ if at most one atomic statement is evaluated differently in $$\alpha$$ compared to $$\beta$$. The corresponding restriction on Kripke-like structures yields so-called update logics. These logics are studied also in a many-valued context. Adequate tableau calculi are given.

Preprint

# Quantified propositional Gödel logics

Voronkov, Andrei, and Michel Parigot (eds.) Logic for Programming and Automated Reasoning. 7th International Conference, LPAR 2000. Proceedings, LNAI 1955 (Springer, Berlin, 2000) 240-256 (with Matthias Baaz and Agata Ciabattoni)

It is shown that $$\mathbf{G}^\mathrm{qp}_\uparrow$$, the quantified propositional Gödel logic based on the truth-value set $$V_\uparrow = \{1 - 1/n : n = 1, 2, \dots\} \cup \{1\}$$, is decidable. This result is obtained by reduction to Büchi’s theory S1S. An alternative proof based on elimination of quantifiers is also given, which yields both an axiomatization and a characterization of $$\mathbf{G}^\mathrm{qp}_\uparrow$$ as the intersection of all finite-valued quantified propositional Gödel logics.

Preprint

# Hypersequents and the proof theory of intuitionistic fuzzy logic

Clote, Peter G., and Helmut Schwichtenberg (eds.), Computer Science Logic. 14th International Workshop, CSL 2000. Fischbachau, Germany, August 21-26, 2000. Proceedings. (Springer, Berlin, 2000) 187-201 (with Matthias Baaz)

Takeuti and Titani have introduced and investigated a logic they called intuitionistic fuzzy logic. 
This logic is characterized as the first-order Gödel logic based on the truth value set [0, 1]. The logic is known to be axiomatizable, but no deduction system amenable to proof-theoretic, and hence computational, treatment has been known. Such a system is presented here, based on previous work on hypersequent calculi for propositional Gödel logics by Avron. It is shown that the system is sound and complete, and allows cut-elimination. A question by Takano regarding the eliminability of the Takeuti-Titani density rule is answered affirmatively.

Preprint

# Completeness before Post: Bernays, Hilbert, and the development of propositional logic

Bulletin of Symbolic Logic 5 (1999) 331–366.

Some of the most important developments of symbolic logic took place in the 1920s. Foremost among them are the distinction between syntax and semantics and the formulation of questions of completeness and decidability of logical systems. David Hilbert and his students played a very important part in these developments. Their contributions can be traced to unpublished lecture notes and other manuscripts by Hilbert and Bernays dating to the period 1917-1923. The aim of this paper is to describe these results, focussing primarily on propositional logic, and to put them in their historical context. It is argued that truth-value semantics, syntactic (“Post-”) and semantic completeness, decidability, and other results were first obtained by Hilbert and Bernays in 1918, and that Bernays’s role in their discovery and the subsequent development of mathematical logic is much greater than has so far been acknowledged.

Note: A slightly revised version constitutes Chapter 2 of my dissertation. 
### Reviews

Volker Peckhaus (Zentralblatt für Mathematik 942.03003)

Hourya Sinaceur (Mathematical Reviews 1791358)

DOI: 10.2307/421184 JSTOR: 421184

Preprint

# Note on generalizing theorems in algebraically closed fields

Archive for Mathematical Logic 37 (1998) 297–307 (with Matthias Baaz)

The generalization properties of algebraically closed fields $$\mathit{ACF}_p$$ of characteristic $$p > 0$$ and $$\mathit{ACF}_0$$ of characteristic 0 are investigated in the sequent calculus with blocks of quantifiers. It is shown that $$\mathit{ACF}_p$$ admits finite term bases, and $$\mathit{ACF}_0$$ admits term bases with primality constraints. From these results the analogs of Kreisel’s Conjecture for these theories follow: If, for some $$k$$, $$A(1 + \dots + 1)$$ ($$n$$ 1’s) is provable in $$k$$ steps for every $$n$$, then $$(\forall x)A(x)$$ is provable.

Review: Yehuda Rav (Mathematical Reviews 2000a:03057)

Preprint

# Labeled calculi and finite-valued logics

## Source

Studia Logica 61 (1998) 7–33 (with Matthias Baaz, Christian G. Fermüller, and Gernot Salzer)

A general class of labeled sequent calculi is investigated, and necessary and sufficient conditions are given for when such a calculus is sound and complete for a finite-valued logic if the labels are interpreted as sets of truth values (sets-as-signs). Furthermore, it is shown that any finite-valued logic can be given an axiomatization by such a labeled calculus using arbitrary “systems of signs,” i.e., of sets of truth values, as labels. The number of labels needed is logarithmic in the number of truth values, and it is shown that this bound is tight. 
Review: Radim Belohlávek (Mathematical Reviews 99m:03043)

Preprint

# Numbers and functions in Hilbert’s finitism

Taiwanese Journal for Philosophy and History of Science 10 (1998) 33–60 (special issue on philosophy of mathematics edited by Charles Chihara)

David Hilbert’s finitistic standpoint is a conception of elementary number theory designed to answer the intuitionist doubts regarding the security and certainty of mathematics. Hilbert was unfortunately not exact in delineating what that viewpoint was, and Hilbert himself changed his usage of the term through the 1920s and 30s. The purpose of this paper is to outline what the main problems are in understanding Hilbert and Bernays on this issue, based on some publications by them which have so far received little attention, and on a number of philosophical reconstructions and criticisms of the viewpoint (in particular, by Hand, Kitcher, Parsons, and Tait).

Review: Hourya Sinaceur (Mathematical Reviews 2001g:01033)

Preprint

# Compact propositional Gödel logics

## Source

28th International Symposium on Multiple Valued Logic. May 1998, Fukuoka, Japan. Proceedings (IEEE Computer Society Press, Los Alamitos, 1998) 108-113 (with Matthias Baaz)

## Abstract

Entailment in propositional Gödel logics can be defined in a natural way. While all infinite sets of truth values yield the same sets of tautologies, the entailment relations differ. It is shown that there is a rich structure of infinite-valued Gödel logics, only one of which is compact. It is also shown that the compact infinite-valued Gödel logic is the only one which interpolates, and the only one with an r.e. entailment relation.

Preprint

# Incompleteness of an infinite-valued first-order Gödel logic and of some temporal logics of programs

In: Computer Science Logic. 9th Workshop, CSL’95. 
Selected Papers (Springer, Berlin, 1996) 1-15 (with Matthias Baaz and Alexander Leitsch)

It is shown that the infinite-valued first-order Gödel logic G0 based on the set of truth values $$\{0\} \cup \{1/k : k = 1, 2, 3, \dots\}$$ is not r.e. The logic G0 is the same as that obtained from the Kripke semantics for first-order intuitionistic logic with constant domains and where the order structure of the model is linear. From this, the unaxiomatizability of Kröger’s temporal logic of programs (even of the fragment without the nexttime operator) and of the authors’ temporal logic of linear discrete time with gaps follows.

Preprint

# Generalizing theorems in real closed fields

Annals of Pure and Applied Logic 75 (1995) 3–23 (with Matthias Baaz)

Jan Krajicek posed the following problem: is there a generalization result in the theory of real closed fields of the form: If $$A(1 + \dots + 1)$$ ($$n$$ occurrences of 1) is provable in length $$k$$ for all $$n = 0, 1, 2, \dots$$, then $$(\forall x)A(x)$$ is provable? It is argued that the answer to this question depends on the particular formulation of the “theory of real closed fields.” Four distinct formulations are investigated with respect to their generalization behavior. It is shown that there is a positive answer to Krajicek’s question for (1) the axiom system RCF of Artin-Schreier with Gentzen’s LK as underlying logical calculus, (2) RCF with the variant LKB of LK allowing introduction of several quantifiers of the same type in one step, (3) LKB and the first-order schemata corresponding to Dedekind cuts and the supremum principle. A negative answer is given for (4) any system containing the schema of extensionality.

Preprint
# EBook Problems Infer 2Means Dep

## EBook Problems Set - Inferences About Two Means: Dependent Samples

Problems

### Problem 1

If we want to estimate the mean difference in scores on a pre-test and post-test for a sample of students, how should we proceed? (a) We should calculate a z or a t statistic. (b) We should collect one sample, two samples, or conduct a paired data procedure. (c) We should construct a confidence interval or conduct a hypothesis test.
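For concreteness, here is how the paired-data procedure alluded to in the answer choices might look in code (a sketch with made-up scores; NumPy and SciPy assumed):

```python
import numpy as np
from scipy import stats

# hypothetical pre-test/post-test scores for 8 students (made-up numbers)
pre = np.array([62, 70, 55, 68, 74, 60, 66, 71])
post = np.array([68, 75, 56, 74, 79, 63, 70, 78])
d = post - pre                        # paired differences, one per student

n = len(d)
mean_d = d.mean()                     # point estimate of the mean difference
se = d.std(ddof=1) / np.sqrt(n)       # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=n - 1)

# 95% confidence interval for the mean difference (the interval form of (c))
ci = (mean_d - t_crit * se, mean_d + t_crit * se)

# equivalently, a paired t-test of H0: mean difference = 0
t_stat, p_val = stats.ttest_rel(post, pre)
print(mean_d, ci, p_val)
```

The key point of the problem is that pre- and post-test scores come from the same students and are therefore dependent, so the analysis is carried out on the per-student differences.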
1. ## find the basis Let L be the line spanned by v_1=[1, 1, 1]. Find a basis {v_2, v_3} for the plane perpendicular to L, and verify that B = {v_1, v_2, v_3} is a basis for R^3. Let ProjL denote the projection onto the line L. Find the matrix B for ProjL with respect to the basis B. Can you just help me get started with the problem. I'll do the rest on my own. 2. Originally Posted by Taurus3 Let L be the line spanned by v_1=[1, 1, 1]. Find a basis {v_2, v_3} for the plane perpendicular to L, and verify that B = {v_1, v_2, v_3} is a basis for R^3. Let ProjL denote the projection onto the line L. Find the matrix B for ProjL with respect to the basis B. Can you just help me get started with the problem. I'll do the rest on my own. Since $v_1$ is perpendicular to the plane you want to span, it is the plane's normal vector. Since the plane must pass through the origin, its equation has the form $x+y+z=0$ Can you finish from here? 3. ummm......actually no. Sorry. I mean I know how to find the eigenvalues and then the basis. But this question is weird. 4. Originally Posted by Taurus3 ummm......actually no. Sorry. I mean I know how to find the eigenvalues and then the basis. But this question is weird. We don't need eigenvectors or eigenvalues. We need to find two vectors that span the above plane, i.e., we need to find a basis for the null space of this matrix. $\begin{bmatrix} 1 & 1 & 1& 0\end{bmatrix}$ If it helps you can think of it as this matrix with added rows of zeros. $\begin{bmatrix} 1 & 1 & 1& 0 \\ 0 & 0 & 0& 0 \\ 0 & 0 & 0& 0 \\ \end{bmatrix}$ Since this is already in reduced row echelon form, we know that we have two free parameters. 
Let $z=t,y=s$ then $x+t+s=0 \iff x =-t-s$ So the basis of the nullspace is $\begin{pmatrix} x \\ y \\ z\end{pmatrix} = \begin{pmatrix} -t-s \\ s \\ t\end{pmatrix} = \begin{pmatrix} -t \\ 0 \\ t\end{pmatrix}+ \begin{pmatrix} -s \\ s \\ 0\end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ 1\end{pmatrix}t+\begin{pmatrix} -1\\ 1 \\ 0\end{pmatrix}s$ You can verify that the two above vectors are perpendicular to the first. Can you finish from here? 5. So for my 2nd question, would it just be B= [(1, 1, 1), (0, -1, 1), (-1, 0, 1)]? 6. So for my 2nd question, would it just be B= [(1, 1, 1), (0, -1, 1), (-1, 0, 1)]? No, that is the matrix of the linear transformation from the standard basis to your new basis $B$. Hint: In your new basis $T(v_1)=v_1$ and $T(v_2)=T(v_3)=0$ So what is the matrix of this transformation? 7. this is what I don't get. How do you know T(v1)=v1 and that T(v2)=0? 8. Originally Posted by Taurus3 this is what I don't get. How do you know T(v1)=v1 and that T(v2)=0? Since $\displaystyle T(\vec{x})=\text{proj}_{v_1}\vec{x}=\frac{\vec{v_1}\cdot \vec{x}}{||v_1||^2}\vec{v_1}$ This is the definition of projecting one vector onto another. This will verify the above "claim".
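As a numerical sanity check of the thread above (a sketch assuming NumPy; the plane vectors below are one valid choice, since any two independent vectors orthogonal to v_1 work), one can verify that B is a basis and that the matrix of the projection in that basis is diag(1, 0, 0), exactly as the hint suggests:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 1.0])   # spans the line L
v2 = np.array([-1.0, 0.0, 1.0])  # one null-space vector of [1 1 1]
v3 = np.array([-1.0, 1.0, 0.0])  # another null-space vector

# both span the plane x + y + z = 0, i.e. they are orthogonal to v1
assert np.isclose(v1 @ v2, 0.0) and np.isclose(v1 @ v3, 0.0)

# projection onto L in standard coordinates: P = v1 v1^T / (v1^T v1)
P = np.outer(v1, v1) / (v1 @ v1)

# change-of-basis matrix whose columns are v1, v2, v3
C = np.column_stack([v1, v2, v3])
assert abs(np.linalg.det(C)) > 1e-12   # so {v1, v2, v3} is a basis of R^3

# matrix of the projection with respect to the basis B
P_B = np.linalg.inv(C) @ P @ C
print(np.round(P_B, 10))   # equals diag(1, 0, 0) up to rounding
```

This works because P fixes v1 and annihilates v2 and v3, so in the basis B the transformation's matrix has columns e_1, 0, 0.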
# Uniform movement

## S x t diagram

There are several ways to represent displacement as a function of time. One of them is through graphs called displacement-versus-time diagrams (s x t). In the following example, we have a diagram showing a retrograde motion. By analyzing the graph, you can extract data that should help you solve problems:

| t | 0 s | 1 s | 2 s |
|---|-----|-----|------|
| s | 50 m | 20 m | -10 m |

We then know that the starting position is s = 50 m when time equals zero, and that the final position s = -10 m occurs when t = 2 s. From there, it is easy to use the position equation s = s0 + vt to find the velocity of the body: v = Δs/Δt = (-10 m - 50 m)/(2 s) = -30 m/s.

Know more: In uniform rectilinear motion, the velocity is numerically equal to the tangent of the angle that the graph's line makes with the time axis, i.e., the slope of the s x t line.

## V x t diagram

In a uniform motion, the velocity remains the same over time, so its graph is a horizontal line. Given this diagram, one way to determine the displacement of the moving object is to calculate the area under the line over the time interval considered.

## Relative Speed

Relative speed is the speed of one moving object relative to another. For example: consider two trains running at uniform speeds. The relative speed is obtained if we consider that one of the trains (train 1) is stationary and only the other (train 2) is moving. Generally speaking, the relative velocity is the velocity of a moving object with respect to another, reference, object.
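The numbers read off the s x t data can be checked with a few lines of plain Python (a small sketch; units are meters and seconds):

```python
# data from the s x t diagram: s(0) = 50 m, s(1) = 20 m, s(2) = -10 m
t = [0.0, 1.0, 2.0]
s = [50.0, 20.0, -10.0]

# in uniform motion the slope is constant: v = delta_s / delta_t
v = (s[1] - s[0]) / (t[1] - t[0])
assert v == (s[2] - s[0]) / (t[2] - t[0])  # same slope over any interval

# the position equation s(t) = s0 + v*t reproduces the whole table
s0 = s[0]
predicted = [s0 + v * ti for ti in t]
print(v, predicted)  # -30.0 [50.0, 20.0, -10.0]
```

The negative sign of v is exactly what "retrograde motion" means: the position decreases as time advances.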
# Riemann-Stieltjes Integrable

If $f$ and $g$ are both Riemann-Stieltjes integrable with respect to a monotonic function $\alpha$, is it true that $f(g(x))$ is still integrable with respect to $\alpha$? Thanks for your attention.

No. Take $\alpha(x) = 1/x^3$ and $f(x) = g(x) = x^2$. We have $$\int_1^\infty f(x) d\alpha(x) = \int_1^\infty g(x) d\alpha(x) = -3\int_1^\infty \frac 1 {x^2} dx = -3$$ but $$\int_1^\infty f(g(x)) d\alpha(x) = -3\int_1^\infty dx = -\infty$$

Thank you, but when the integral is defined on a closed interval $[a,b]$, does this kind of function still exist? – Golbez Oct 18 '12 at 13:37

@Golbez There are. Take $\alpha(x) = x^{3/2}$, $f(x) = x^2$, $g(x) = 1/x$ on the interval $[0, 1]$. – AlbertH Oct 18 '12 at 13:59
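AlbertH's closed-interval example can be illustrated numerically (an illustration, not a proof; NumPy assumed; the integrals are approximated on $[\varepsilon, 1]$ since $g$ is unbounded at 0):

```python
import numpy as np

alpha = lambda x: x ** 1.5   # increasing integrator on [0, 1]
f = lambda x: x ** 2
g = lambda x: 1.0 / x

def rs_sum(h, eps, n=4000):
    # Riemann-Stieltjes sum of h w.r.t. alpha on [eps, 1], evaluated at
    # cell midpoints, with a geometric partition refined near x = 0
    x = np.geomspace(eps, 1.0, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    return float(np.sum(h(mid) * np.diff(alpha(x))))

for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    print(eps, rs_sum(g, eps), rs_sum(lambda x: f(g(x)), eps))
# int_eps^1 g d(alpha) = 3(1 - sqrt(eps)) stays bounded (-> 3),
# while int_eps^1 f(g) d(alpha) = 3(1/sqrt(eps) - 1) blows up as eps -> 0
```

The closed forms in the comments follow from the substitution $d\alpha = \tfrac{3}{2}\sqrt{x}\,dx$: the sums for $\int g\,d\alpha$ settle near 3, while those for $\int f(g)\,d\alpha$ grow like $3/\sqrt{\varepsilon}$.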
This paper experimentally and numerically investigates the effect of large scale high freestream turbulence intensity and exit Reynolds number on the surface heat transfer distribution of a turbine vane in a 2-D linear cascade at realistic engine Mach numbers. A passive turbulence grid was used to generate a freestream turbulence level of 16% and an integral length scale normalized by the vane pitch of 0.23 at the cascade inlet. The baseline turbulence level and integral length scale normalized by the vane pitch at the cascade inlet were measured to be 2% and 0.05, respectively. Surface heat transfer measurements were made at the midspan of the vane using thin film gauges. Experiments were performed at exit Mach numbers of 0.55, 0.75 and 1.01, which represent flow conditions below, near, and above nominal conditions. The exit Mach numbers tested correspond to exit Reynolds numbers of 9 × 10⁵, 1.05 × 10⁶, and 1.5 × 10⁶, based on true chord. The experimental results showed that the large scale high freestream turbulence augmented the heat transfer on both the pressure and suction sides of the vane as compared to the low freestream turbulence case and promoted slightly earlier boundary layer transition on the suction surface for exit Mach 0.55 and 0.75. At nominal conditions (exit Mach 0.75), average heat transfer augmentations of 52% and 25% were observed on the pressure and suction side of the vane, respectively. An increased Reynolds number was found to induce earlier boundary layer transition on the vane suction surface and to increase heat transfer levels on the suction and pressure surfaces. On the suction side, the boundary layer transition length was also found to be affected by changes in Reynolds number. The experimental results also compared well with analytical correlations and CFD predictions.
The unsteady behavior of the wall drag, the velocity, and the turbulence is experimentally investigated in unsteady turbulent pipe flow, under several flow conditions with a rapid to slow time variation and a large distortion of the flow rate imposed stepwise. The instantaneous wall shear stress and three components of the instantaneous velocity are measured simultaneously using the hot-wire anemometry (HWA) technique with a surface hot-wire and a non-orthogonal triple-sensor probe. In the present study, to understand the unsteady behavior of the relation between the wall drag and the velocity distribution, the unsteady response of the turbulence to the imposed unsteady flow is investigated by analyzing the relation between the unsteady Reynolds stress and the instantaneous velocity fluctuation during the accelerating and decelerating phases. It is found that, near the pipe wall, the statistical parameters of the phase-average turbulence structure and the phase-average quadrant contribution rate of instantaneous velocity fluctuations to the Reynolds stress are almost similar to those in steady turbulent flow, although drag reduction and drag promotion occur during the accelerating and decelerating phases, respectively.
https://raweb.inria.fr/rapportsactivite/RA2005/vasy/uid7.html
## Section: Overall Objectives

### Models and Verification Techniques

By verification, we mean comparison, at some abstraction level, of a complex system against a set of properties characterizing the intended functioning of the system (for instance, deadlock freedom, mutual exclusion, fairness, etc.). Most of the verification algorithms we develop are based on the labeled transition system model (or, simply, automata or graphs), which consists of a set of states, an initial state, and a transition relation between states. This model is often generated automatically from high-level descriptions of the system under study, then compared against the system properties using various decision procedures. Depending on the formalism used to express the properties, two approaches are possible:

• Behavioral properties express the intended functioning of the system in the form of automata (or higher-level descriptions, which are then translated into automata). In such a case, the natural approach to verification is equivalence checking, which consists in comparing the system model and its properties (both represented as automata) modulo some equivalence or preorder relation. We develop equivalence checking tools that compare and minimize automata modulo various equivalence and preorder relations; some of these tools also apply to stochastic and probabilistic models (such as Markov chains).

• Logical properties express the intended functioning of the system in the form of temporal logic formulas. In such a case, the natural approach to verification is model checking, which consists in deciding whether or not the system model satisfies the logical properties. We develop model checking tools for a powerful form of temporal logic, the modal μ-calculus, which we extend with typed variables and expressions so as to express predicates over the data contained in the model.
This extension (the practical usefulness of which was highlighted in many examples) provides for properties that could not be expressed in the standard μ-calculus (for instance, the fact that the value of a given variable is always increasing along any execution path). Although these techniques are efficient and automated, their main limitation is the state explosion problem, which occurs when models are too large to fit in computer memory. We provide software technologies (see § 5.1) for handling models in two complementary ways:

• Small models can be represented explicitly, by storing in memory all their states and transitions (exhaustive verification);

• Larger models are represented implicitly, by exploring only the model states and transitions needed for the verification (on-the-fly verification).
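The explicit/on-the-fly distinction can be sketched with a small example. This is a generic illustration, not the project's actual tool chain: the transition relation is given implicitly as a successor function, and a breadth-first search explores only the reachable states while checking a simple property (deadlock freedom). The toy system and all names are assumptions for illustration.

```python
from collections import deque

def successors(state):
    """Toy implicit LTS: a bounded counter with 'inc'/'dec' actions.
    State 5 has no outgoing transitions, i.e. it is a deadlock."""
    succs = []
    if state < 5:
        succs.append(("inc", state + 1))
    if 0 < state < 5:
        succs.append(("dec", state - 1))
    return succs

def find_deadlock(initial, successors):
    """On-the-fly breadth-first exploration: states are generated on
    demand, so only the reachable part of the model is ever stored.
    Returns a deadlocked state (no successors), or None if none exists."""
    visited = {initial}
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        succs = successors(s)
        if not succs:
            return s  # deadlock found
        for _action, target in succs:
            if target not in visited:
                visited.add(target)
                frontier.append(target)
    return None

print(find_deadlock(0, successors))  # -> 5
```

An exhaustive verifier would instead build the full state/transition tables up front; the on-the-fly version above can stop as soon as a violating state is found, which is what makes it viable for models too large to store whole.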