url: string (15–1.13k chars)
text: string (100–1.04M chars)
metadata: string (1.06k–1.1k chars)
http://science.sciencemag.org/content/early/2012/12/19/science.1231440
Report # Photonic Boson Sampling in a Tunable Circuit Science 20 Dec 2012: 1231440 DOI: 10.1126/science.1231440 ## Abstract Quantum computers are unnecessary for exponentially efficient computation or simulation if the Extended Church-Turing thesis is correct. The thesis would be strongly contradicted by physical devices that efficiently perform tasks believed to be intractable for classical computers. Such a task is boson sampling: sampling the output distributions of n bosons scattered by some linear-optical unitary process. Here, we test the central premise of boson sampling, experimentally verifying that 3-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a 6-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Scaling this to large numbers of photons will be a much simpler task than building a universal quantum computer.
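A minimal sketch of the quantity the experiment verifies (an editor's illustration, not from the paper): for single photons entering and leaving distinct modes, the scattering amplitude is proportional to the permanent of the 3x3 submatrix of the circuit's unitary picked out by the occupied input and output modes. The 6-mode unitary and the chosen modes below are stand-ins, not the experimental circuit.

```python
import itertools
import numpy as np

def permanent(m):
    # Naive permanent via the sum over permutations; fine for a 3x3 matrix.
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

# Stand-in for the 6-mode circuit: a Haar-random 6x6 unitary from a QR decomposition.
rng = np.random.default_rng(0)
u, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))

inputs, outputs = [0, 1, 2], [1, 3, 5]     # hypothetical occupied modes
sub = u[np.ix_(outputs, inputs)]           # 3x3 submatrix selected by the modes
amplitude = permanent(sub)                 # 3-photon scattering amplitude (up to normalisation)
print(abs(amplitude) ** 2)                 # corresponding output probability (up to normalisation)
```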
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9310285449028015, "perplexity": 2698.3456413427425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824357.3/warc/CC-MAIN-20171020211313-20171020231313-00747.warc.gz"}
http://www.physicsforums.com/showthread.php?s=e6f3558b378a4180b9281d91951ddf08&p=4775426
# Concerning the Classical Electromagnetism and Gravitation Constants by FysixFox Tags: constants, electromagnetism, gravity PF Gold P: 19 In classical electromagnetism, Coulomb's constant is derived from Gauss's law. The result is: ke = 1/4πε = μc²/4π = 8987551787.3681764 N·m²/C² Where ε is the electric permittivity of free space, μ is the magnetic permeability of free space, c is the speed of light in a vacuum, and 4π is because of how Coulomb's constant is calculated (εEA = Q and F = qE, ergo F = qQ/εA, and since A = 4πr² then F = (1/4πε)·Qq/r²). (on an unrelated side note, the symbol for pi looks funny and I'm not sure if I like it, it's not majestic enough and it looks awkward if I use itex tags to do it... okay, back to topic at hand) The equations for electrostatics and gravity have been compared with almost no end. Even Tesla went so far as to attempt to attribute gravity to electromagnetism (though I don't think he ever published his theory, he just mentioned it). What I'm most interested in is the gravitational constant, G. Working backwards from G, could we perchance find the permeability/permittivity of a vacuum with respect to gravitation just as ke does for electromagnetism? Starting with G and assuming that G = 1/4πε, we find that 1/4πG = ε. The result is 1.1924×10⁹ kg·s²/m³. Now since ε = 1/μc², that means 1/εc² = μ. The result for this is 9.3314×10⁻²⁷ m/kg. What interests me the most is if this could split classical gravitation into two forces that act as one, just as magnetism and electricity do to form electromagnetism. Thoughts? Or am I barking up the wrong tree? This thought just occurred to me and I ran straight here to ask about it. :) Sci Advisor Thanks PF Gold P: 1,908 Maxwell's equations unified two seemingly different forces: electricity and magnetism. This union is usually presented as four linear partial differential equations in terms of the electric and magnetic field vectors, and coupled by the fields. There are source terms for the electric field - the electric charge, which comes in two forms, positive and negative, but no sources for the magnetic field: there are no magnetic charges in Maxwell's theory. From Maxwell's theory the Lorentz transforms of Special Relativity can be derived; the equations are invariant under this transform, but not under the Galilean transform of Newtonian mechanics. This insight leads to additional formulations of Maxwell's laws which perhaps look simpler at first glance, but all of the same physics is encoded. See http://hyperphysics.phy-astr.gsu.edu...ric/maxeq.html And for a nice introduction: http://www.maxwells-equations.com/ Newton's Universal Law of Gravitation has a single "charge": mass; thus it lacks some of the intricate structure of the electromagnetic equations. Of course the Newtonian system is also incomplete: it needs some changes to make it compatible with Special Relativity. When Einstein was done with this work he had created General Relativity, our modern theory of gravitation. This consists of ten non-linear partial differential equations, coupled in the metric. Space and time are again coupled, as in Special Relativity, but now the coupling is more complex. While there is no direct analog to magnetism in Newton's theory, there are many obvious similarities to static electricity: the law for gravitation and Coulomb's law for electric charges are identical in form except for mass only coming in one form of charge: always attractive, never repulsive. 
However, there are some magnetic analogs with Newton's theory, and more with General Relativity; this can be called gravitomagnetism: https://en.wikipedia.org/wiki/Gravitoelectromagnetism PF Gold P: 19 Interesting... I wasn't taught anything about Maxwell in school, though I have heard of him somewhere before. So basically, gravity DOES have two components that behave similarly to the electric and magnetic forces, but they're so unnoticeable that it's rarely mentioned to us physics noobs. Interesting! :D But since the GEM equations in the article were only "in a particular limiting case," doesn't that mean you'd have to apply General Relativity to Maxwell's Equations to apply the equations to gravity properly? Thanks PF Gold P: 1,908 Concerning the Classical Electromagnetism and Gravitation Constants Quote by FysixFox But since the GEM equations in the article were only "in a particular limiting case," doesn't that mean you'd have to apply General Relativity to Maxwell's Equations to apply the equations to gravity properly? Maxwell's equations don't need to be changed; they are OK as is. They already obey Special Relativity. The analogy isn't perfect, and it appears when the gravitating body is rotating. In General Relativity this effect shows up directly as "frame dragging". PF Gold P: 19 Quote by UltrafastPED Maxwell's equations don't need to be changed; they are OK as is. They already obey Special Relativity. The analogy isn't perfect, and it appears when the gravitating body is rotating. In General Relativity this effect shows up directly as "frame dragging". Ah, I see. Thank you! :)
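A quick numerical check of the constants quoted in the opening post (an editor's sketch, not part of the thread; eps_g and mu_g are just the poster's hypothetical gravitational analogues of ε and μ):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

eps_g = 1 / (4 * math.pi * G)   # analogue of the electric permittivity, from G = 1/(4*pi*eps_g)
mu_g = 1 / (eps_g * c**2)       # analogue of the magnetic permeability, from eps = 1/(mu*c^2)

print(f"eps_g = {eps_g:.4e} kg s^2 / m^3")  # ~1.192e9, as quoted
print(f"mu_g  = {mu_g:.4e} m / kg")         # ~9.33e-27, as quoted
```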
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021292924880981, "perplexity": 881.1165425785213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815991.16/warc/CC-MAIN-20140820021335-00162-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/hesss-law.14533/
# Hess's Law 1. Feb 15, 2004 ### Integral0 The enthalpy of combustion of solid carbon to form carbon dioxide is -393.7 kJ/mol carbon, and the enthalpy of combustion of carbon monoxide to form carbon dioxide is -283.3 kJ/mol CO. Use these data to calculate the change of enthalpy for the reaction 2C(s) + O2(g) -> 2CO(g) ---------------------- I'm quite lost . . . after trying this problem for about 20 mins now . . . I still don't see how to use Hess's Law to formulate this answer. What I tried to do was to "form" the equations from the words in the problem. Here are my formulations below; however, I can't seem to get the equation mentioned above. 2CO + O2 -> 2CO2 C + O2 -> CO2 when I rearrange these . . . I can't seem to get 2C(s) + O2(g) -> 2CO(g) the answer is -220 kJ/mol thanks Last edited: Feb 15, 2004 2. Feb 15, 2004 ### Integral0 RE: EUREKA!!! EUREKA!!! I GOT IT :) :) 2 (C + O2 -> CO2) change in Enthalpy = 2(-394) kJ/mol 2CO2 -> O2 + 2CO change in Enthalpy = 2(+283.3) kJ/mol = 2C + O2 -> 2CO change in Enthalpy = -221 kJ/mol + or - 1 kJ/mol :D ;) :)
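A quick arithmetic check of the combination above (an editor's sketch, not part of the thread):

```python
# Enthalpies of combustion given in the problem (kJ per mol of C or CO burned)
dH_C_to_CO2 = -393.7    # C(s) + O2 -> CO2
dH_CO_to_CO2 = -283.3   # CO + 1/2 O2 -> CO2

# Target reaction: 2C(s) + O2 -> 2CO
# = 2 x [C + O2 -> CO2]  +  2 x reverse of [CO + 1/2 O2 -> CO2]
dH_target = 2 * dH_C_to_CO2 + 2 * (-dH_CO_to_CO2)
print(dH_target)   # -220.8 kJ, i.e. about -220 kJ for the reaction as written
```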
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838408350944519, "perplexity": 4139.997500461798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725470.56/warc/CC-MAIN-20161020183845-00244-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.cuemath.com/jee/examples-on-inverse-set-2-trigonometry/
# Examples on Inverse Trigonometry Set 2

Example-79  If $${\cos ^{ - 1}}x + {\cos ^{ - 1}}y + {\cos ^{ - 1}}z = \pi ,$$ find the value of $${x^2} + {y^2} + {z^2} + 2xyz$$

Solution:

\begin{align} {\cos ^{ - 1}}x + {\cos ^{ - 1}}y &= \pi - {\cos ^{ - 1}}z \\ &= {\cos ^{ - 1}}( - z) \qquad \left( {{\text{How?}}} \right) \end{align}

\begin{align}&\Rightarrow \quad {\cos ^{ - 1}}\left( {xy - \sqrt {1 - {x^2}} \sqrt {1 - {y^2}} } \right) = {\cos ^{ - 1}}( - z) \\ &\Rightarrow \quad xy - \sqrt {1 - {x^2}} \sqrt {1 - {y^2}} = - z \\ &\Rightarrow\quad {(xy + z)^2} = (1 - {x^2})(1 - {y^2}) \end{align}

Rearranging this yields the required value: ${x^2} + {y^2} + {z^2} + 2xyz = 1$

Example-80  Evaluate the sum of the following series: $S = {\tan ^{ - 1}}\frac{1}{{2 \cdot {1^2}}} + {\tan ^{ - 1}}\frac{1}{{2 \cdot {2^2}}} + {\tan ^{ - 1}}\frac{1}{{2 \cdot {3^2}}} + ....\infty$

Solution: The approach in such questions is what we mentioned earlier: write each term in the series as a difference, so that successive terms cancel out. The general $${r^{{\text{th}}}}$$ term in S is ${T_r} = {\tan ^{ - 1}}\frac{1}{{2 \cdot {r^2}}}$

We know that \begin{align}{\tan ^{ - 1}}\left( {\frac{{x - y}}{{1 + xy}}} \right) = {\tan ^{ - 1}}x - {\tan ^{ - 1}}y\end{align}. Somehow, we have to use this identity to express $${T_r}$$ as a difference. To do that we express the denominator as $$1 + xy:$$

\begin{align} \frac{1}{{2 \cdot {r^2}}} = \frac{2}{{4 \cdot {r^2}}} = \frac{2}{{1 + (4{r^2} - 1)}} &= \frac{2}{{1 + (2r + 1)(2r - 1)}} \\ &= \frac{{(2r + 1) - (2r - 1)}}{{1 + (2r + 1)(2r - 1)}} \end{align}

This means that ${T_r} = {\tan ^{ - 1}}(2r + 1) - {\tan ^{ - 1}}(2r - 1)$

Expressing $${T_r}$$ this way solves our problem, since S now becomes $S = ({\tan ^{ - 1}}3 - {\tan ^{ - 1}}1) + ({\tan ^{ - 1}}5 - {\tan ^{ - 1}}3) + ({\tan ^{ - 1}}7 - {\tan ^{ - 1}}5) + ...({\tan ^{ - 1}}(2n + 1) - {\tan ^{ - 1}}(2n - 1))$ $${\text{where}}\;n \to \infty$$. Thus,

\begin{align}S &= \mathop {\lim }\limits_{n \to \infty } \;{\tan ^{ - 1}}(2n + 1) - {\tan ^{ - 1}}1 \\ &= \frac{\pi }{2} - \frac{\pi }{4} = \frac{\pi }{4} \end{align}

Example-81  Find the sum \begin{align}S = \sum\limits_{r = 1}^\infty {{{\tan }^{ - 1}}} \left( {\frac{{2r}}{{{r^4} + {r^2} + 2}}} \right)\end{align}

Solution: The $${r^{th}}$$ term is

\begin{align}{T_r} &= {\tan ^{ - 1}}\left( {\frac{{2r}}{{{r^4} + {r^2} + 2}}} \right) \\ &= {\tan ^{ - 1}}\left( {\frac{{({r^2} + r + 1) - ({r^2} - r + 1)}}{{1 + ({r^2} + r + 1)({r^2} - r + 1)}}} \right) \\ &= {\tan ^{ - 1}}({r^2} + r + 1) - {\tan ^{ - 1}}({r^2} - r + 1) \end{align}

Therefore, $S = \left( {{{\tan }^{ - 1}}3 - {{\tan }^{ - 1}}1} \right) + \left( {{{\tan }^{ - 1}}7 - {{\tan }^{ - 1}}3} \right) + ... + \left( {{{\tan }^{ - 1}}({n^2} + n + 1) - {{\tan }^{ - 1}}({n^2} - n + 1)} \right)$ where $$n \to \infty$$

\begin{align}\Rightarrow \quad S &= \mathop {\lim }\limits_{n \to \infty } \;{\tan ^{ - 1}}({n^2} + n + 1) - {\tan ^{ - 1}}1 \\ &= \frac{\pi }{2} - \frac{\pi }{4} = \frac{\pi }{4}\end{align}
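A numerical sanity check of Examples 80 and 81 (an editor's addition, not part of the original page): both partial sums should approach $\frac{\pi}{4} \approx 0.7854$.

```python
import math

s80 = sum(math.atan(1 / (2 * r**2)) for r in range(1, 100001))
s81 = sum(math.atan(2 * r / (r**4 + r**2 + 2)) for r in range(1, 100001))
print(s80, s81, math.pi / 4)   # both partial sums land within ~1e-5 of pi/4
```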
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 16, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000030994415283, "perplexity": 4586.876032320438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00566.warc.gz"}
https://computergraphics.stackexchange.com/questions/5955/where-does-the-cosine-factor-comes-from-in-the-ggx-pdf/5958
# Where does the cosine factor come from in the GGX PDF? The GGX NDF, as it appears in the paper where it is presented, is: $$D(m)=\frac{\alpha_g^2\space\chi^+(m\cdot n)}{\pi\cos^4(\theta_m)(\alpha_g^2+\tan^2(\theta_m))^2}$$ It is equivalent, in the range $[0,\frac{\pi}{2}]$, to the following formulation (notation simplified by me): $$D(m)=\frac{\alpha^2}{\pi(\cos^2(\theta)(\alpha^2-1)+1)^2}$$ By finding the derivative of the inverse of the expressions used for importance sampling ($\theta_m=\arctan(\frac{\alpha\sqrt{\xi_1}}{\sqrt{1-\xi_1}})$ and $\phi_m=2\pi\xi_2$) presented in the same paper, we can derive the following PDF: $$p(m)=\frac{\alpha^2\cos(\theta)\sin(\theta)}{\pi(\cos^2(\theta)(\alpha^2-1)+1)^2}$$ When finding the corresponding PDF for, for instance, the Blinn NDF, the only difference is a $\sin(\theta)$ in the numerator that comes from the fact that the NDF is a distribution over a hemisphere. So where does the $\cos(\theta)$ factor in the GGX PDF come from? I think it is explained after formula (37) in the paper but I don't understand the reasoning behind it. PS: here are the Blinn NDF and PDF as I understand them, for reference NDF: $\frac{n+1}{2\pi}\cos^n(\theta)$ --- PDF: $\frac{n+1}{2\pi}\cos^n(\theta)\sin(\theta)$ EDIT: here is a link to the paper in case it makes the question easier to answer • BTW, the Blinn–Phong NDF should have $n + 2$, not $n + 1$, in the numerator (see equation 30 in the paper). – Nathan Reed Dec 5 '17 at 21:49 • Yup, thank you. Now I also understand that the exponent in the PDF should be n+1 – Sebastián Mestre Dec 6 '17 at 6:37 Normal distribution functions are defined a bit differently than you might expect. They're not strictly a probability distribution over solid angle; they have to do with the density of microfacets with respect to macro-surface area. The upshot is that they're normalized with an extra cosine factor: $$\int_\Omega D(m) \cos(\theta_m)\, d\omega_m = 1$$ This cosine factor accounts for the projected area of microfacets onto the macrosurface. When importance-sampling the NDF this cosine factor must be accounted for as well. The equations for sampling $D(m) | m \cdot n |$ are given just above equations 35–36 in the paper. The $|m \cdot n|$ factor there is equal to $|\cos(\theta_m)|$. So the quantity being sampled is not just the NDF itself, but includes this extra factor. Then, as you note, the sine shows up as part of the area element for spherical coordinates.
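A numerical check of the normalization property quoted in the answer (an editor's sketch, not part of the original Q&A): integrating $D(m)\cos\theta$ over the hemisphere should give 1 for any roughness $\alpha$, using the simplified form of the GGX NDF above.

```python
import math

def ggx_D(cos_t, alpha):
    # Simplified GGX NDF: alpha^2 / (pi * (cos^2(theta) * (alpha^2 - 1) + 1)^2)
    denom = cos_t**2 * (alpha**2 - 1.0) + 1.0
    return alpha**2 / (math.pi * denom**2)

def integrate_D_cos(alpha, n=200000):
    # Hemisphere integral of D * cos(theta), with d(omega) = sin(theta) dtheta dphi;
    # the phi integral contributes a factor of 2*pi.
    total, dt = 0.0, (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += ggx_D(math.cos(t), alpha) * math.cos(t) * math.sin(t) * dt
    return 2 * math.pi * total

for alpha in (0.1, 0.3, 0.8):
    print(alpha, integrate_D_cos(alpha))   # each result is close to 1.0
```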
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781273603439331, "perplexity": 528.1509895439403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402124756.81/warc/CC-MAIN-20201001062039-20201001092039-00652.warc.gz"}
http://mathoverflow.net/questions/29285/in-bayesian-statistics-must-i-use-a-marginalized-prior-in-conjunction-with-a-ma
In Bayesian statistics, must I use a marginalized prior in conjunction with a marginalized distribution? Suppose I have some sampling distribution g(x,y,z) which has been marginalized over some variables (say y and z), giving us the marginal distribution which we'll call $g_x(x)$. Suppose I now wish to use Bayes' theorem but on the marginalized distribution to obtain the posterior marginal distribution. Suppose I also know a good prior for all the variables, call it k(x,y,z). To use Bayes' theorem on the marginalized distribution, must I also marginalize the prior? Or does it make sense to use the full prior, which of course makes the answer depend on y and z as well? No, you cannot marginalize the prior and then multiply the marginal with g(x). Here is why: The correct way to use Bayes' theorem is to do the following (also suggested by John): $g_{p1}(x,y,z) \propto g(x,y,z)\, k(x,y,z)$ Thus, $g_{p1}(x) \propto \int_{y,z} g(x,y,z)\, k(x,y,z) \,dy\,dz$ You want to do the following: $g_{p2}(x) \propto \Bigl(\int_{y,z} g(x,y,z) \,dy\,dz\Bigr) \Bigl(\int_{y,z} k(x,y,z) \,dy\,dz\Bigr)$ In general, $g_{p1}(x)$ and $g_{p2}(x)$ will not be identical.
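A tiny discrete illustration of why the two quantities differ (an editor's sketch, not part of the original thread; the numbers are arbitrary):

```python
import numpy as np

# Joint sampling distribution g(x,y) and joint prior k(x,y) on a 2x2 grid
# (x indexes rows, y indexes columns; y plays the role of the nuisance variables).
g = np.array([[0.4, 0.1],
              [0.2, 0.3]])
k = np.array([[0.1, 0.6],
              [0.2, 0.1]])

# Correct: multiply the joints, then marginalize out y.
p1 = (g * k).sum(axis=1)
p1 /= p1.sum()

# Incorrect: marginalize each over y first, then multiply.
p2 = g.sum(axis=1) * k.sum(axis=1)
p2 /= p2.sum()

print(p1)   # [0.588... 0.411...]
print(p2)   # [0.7 0.3]  -- a different posterior over x
```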
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342414736747742, "perplexity": 650.2225321138758}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164997874/warc/CC-MAIN-20131204134957-00010-ip-10-33-133-15.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/120729-area-isosceles-right-triangle-print.html
# Area of isosceles right triangle • December 15th 2009, 08:32 PM rn5a Area of isosceles right triangle How do I find the area of an isosceles right triangle whose equal sides are 15cm each? Thanks, Ron • December 15th 2009, 08:57 PM bigwave Quote: Originally Posted by rn5a How do I find the area of an isosceles right triangle whose equal sides are 15cm each? Thanks, Ron Area = $\frac{1}{2}(base)(height)$ since base = height and let $s=15cm$ we have Area = $\frac{1}{2}s^2$
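Completing the arithmetic (editor's note, not part of the original thread): with $s = 15$ cm, Area $= \frac{1}{2}(15)^2 = 112.5$ cm$^2$.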
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057186603546143, "perplexity": 878.3756037491722}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829320.91/warc/CC-MAIN-20160723071029-00074-ip-10-185-27-174.ec2.internal.warc.gz"}
https://socratic.org/questions/how-many-grammes-of-zinc-metal-are-required-to-produce-10-0-litres-of-hydrogen-g
# How many grammes of zinc metal are required to produce 10.0 litres of hydrogen gas, at 25 Celsius and 1.00 atm pressure, by reaction with excess hydrochloric acid? Zn(s) + 2HCl(aq) → ZnCl2(aq) + H2(g) Feb 28, 2015 You'd need $\text{26.7 g}$ of zinc to produce that much hydrogen gas at those specific conditions. So, you have your balanced chemical equation $Z {n}_{\left(s\right)} + 2 H C {l}_{\left(a q\right)} \to Z n C {l}_{2 \left(a q\right)} + {H}_{2 \left(g\right)}$ Notice the $\text{1:1}$ mole ratio you have between zinc and hydrogen gas - this means that the moles of zinc that reacted will be equal to the moles of hydrogen gas produced. Use the ideal gas law equation, $P V = n R T$, to solve for the number of moles of hydrogen gas produced $P V = n R T \implies n = \frac{P V}{R T} = \frac{1.00 \text{ atm} \times 10.0 \text{ L}}{0.082 \frac{\text{L} \cdot \text{atm}}{\text{mol} \cdot \text{K}} \times (273.15 + 25) \text{ K}}$ ${n}_{{H}_{2}} = \text{0.409 moles}$ ${H}_{2}$ Automatically, this will also be the number of moles of zinc that reacted $\text{0.409 moles hydrogen} \times \frac{\text{1 mole zinc}}{\text{1 mole hydrogen}} = \text{0.409 moles zinc}$ Now just use zinc's molar mass to determine the exact mass $\text{0.409 moles} \times \frac{\text{65.4 g}}{\text{1 mole}} = \text{26.7 g zinc}$
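The same calculation as a short script (an editor's sketch, not part of the original answer):

```python
P, V = 1.00, 10.0       # pressure in atm, volume in L
R = 0.082               # L*atm/(mol*K)
T = 273.15 + 25         # temperature in K
M_Zn = 65.4             # molar mass of zinc, g/mol

n_H2 = P * V / (R * T)  # ideal gas law; the 1:1 mole ratio makes n_Zn = n_H2
mass_Zn = n_H2 * M_Zn
print(n_H2, mass_Zn)    # ~0.409 mol, ~26.7 g
```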
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8694581985473633, "perplexity": 3041.215257923156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423140901-00205.warc.gz"}
https://scholarship.rice.edu/handle/1911/21661/browse?value=scale+limited+signals&type=subject
• #### Optimal wavelets for signal decomposition and the existence of scale limited signals  (1992-05-20) Wavelet methods give a flexible alternative to Fourier methods in non-stationary signal analysis. The concept of *band-limitedness* plays a fundamental role in Fourier analysis. Since wavelet theory replaces ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104445934295654, "perplexity": 2525.270051666131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982932803.39/warc/CC-MAIN-20160823200852-00270-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/did-i-do-this-question-right.274748/
Did I do this question right? 1. Nov 24, 2008 lanvin Mars travels around the sun in 1.88 {Earth} yrs, in an approximately circular orbit with a radius of 2.28 * 10^8 km. Determine {a} the orbital speed of Mars {relative to the sun} period {T} = 1.88 * 365 * 24 * 3600 sec , R = 2.28 * 10^8 km * 10^3 m/km = 2.28 * 10^11 m d = 2 pi R ------> 2 * 3.14 * 2.28 * 10^11 V = d / T -------> { 2 * 3.14 * 2.28 * 10^11 } / { 1.88 * 365 * 24 * 3600 } = 2.4 * 10^4 m/s Is that correct? or do I use acceleration of centripetal motion = (4)(pi^2)(r)(f^2) then use the acceleration and find velocity in V = √(acceleration x radius) which method is correct? if any... Last edited: Nov 24, 2008 2. Nov 25, 2008 Chi Meson You did this correctly, though longishly. And the second way is the same thing just with extra steps. Notice that your initial solution was to find v = d/t, where d is the circumference, 2 pi r, and t is the period, T. So v = (2 pi r)/T (all done right there) and since f = 1/T, v = 2 pi r f Your second solution takes the square of this, divides by radius, then multiplies by radius, then square roots it.
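The same computation as a short script (an editor's sketch, not part of the thread):

```python
import math

T = 1.88 * 365 * 24 * 3600   # orbital period in seconds
R = 2.28e8 * 1e3             # orbital radius: 2.28e8 km converted to metres

v = 2 * math.pi * R / T      # orbital speed = circumference / period
print(f"{v:.2e} m/s")        # ~2.4e4 m/s
```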
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392129778862, "perplexity": 3056.7350607533976}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00452-ip-10-233-31-227.ec2.internal.warc.gz"}
https://msri-prod-alb-1989977079.us-west-1.elb.amazonaws.com/seminars/20020
# Mathematical Sciences Research Institute # Seminar: Modules for elementary abelian groups and vector bundles on projective space March 11, 2013 (04:10 PM PDT - 05:00 PM PDT) Parent Program: -- UC Berkeley, 60 Evans Hall Speaker(s): David Benson (University of Aberdeen)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789231777191162, "perplexity": 2456.0824587033453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00389.warc.gz"}
https://socratic.org/questions/how-do-you-write-an-equation-in-standard-form-given-point-5-4-and-slope-4
# How do you write an equation in standard form given point (-5,4) and slope -4? Jul 21, 2016 $y = - 4 x - 16$ is the equation of the line. #### Explanation: There is a nifty formula which you can use in a case exactly like this, where you have a point $\left({x}_{1} , {y}_{1}\right)$ and the slope $m$: $y - {y}_{1} = m \left(x - {x}_{1}\right)$, with $m = - 4$ and $\left({x}_{1} , {y}_{1}\right) = \left(- 5 , 4\right)$. Substitute the values... $y - 4 = - 4 \left(x - \left(- 5\right)\right)$ $y - 4 = - 4 \left(x + 5\right) \text{ (multiply out and simplify)}$ $y = - 4 x - 20 + 4$ $y = - 4 x - 16$ is the equation of the line.
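Editor's note (not part of the original answer): since the question asks for standard form, the slope-intercept result can be rearranged into $A x + B y = C$ form: $y = - 4 x - 16 \quad \Rightarrow \quad 4 x + y = - 16$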
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140007495880127, "perplexity": 1041.5758633688476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00508.warc.gz"}
http://mathhelpforum.com/discrete-math/8210-sos-bipartite-graphs.html
## SOS - bipartite graphs How do I prove: Given a connected d-regular graph G, if (-d) is an eigenvalue of G's adjacency matrix, then G is bipartite? (the reverse is pretty easy to see)
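A small numerical illustration of the statement (an editor's addition, not part of the thread): the 4-cycle is connected, 2-regular and bipartite, and $-2$ appears in its adjacency spectrum; the triangle is connected and 2-regular but not bipartite, and $-2$ does not appear.

```python
import numpy as np

C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])        # 4-cycle: bipartite, 2-regular
K3 = np.ones((3, 3)) - np.eye(3)     # triangle: non-bipartite, 2-regular

print(np.linalg.eigvalsh(C4))        # [-2.  0.  0.  2.]
print(np.linalg.eigvalsh(K3))        # [-1. -1.  2.]
```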
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935084342956543, "perplexity": 1175.790043604064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663739.33/warc/CC-MAIN-20140930004103-00295-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/none-zero-vector.275544/
# Non-zero vector 1. Nov 28, 2008 ### thomas49th 1. The problem statement, all variables and given/known data Hi, this is a really stupid question but what exactly is a non-zero vector? What does it imply? Thanks :) 2. Nov 28, 2008 ### Staff: Mentor Any vector with a non-zero magnitude. As opposed to a zero vector.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593788146972656, "perplexity": 1549.5718019881897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718034.77/warc/CC-MAIN-20161020183838-00337-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.freemathhelp.com/forum/threads/109260-Global-minimum-if-plane-A-is-flying-950km-h-and-plane-is-flying-850km-h
# Thread: Global minimum if plane A is flying 950 km/h and plane B is flying 850 km/h 1. ## Global minimum if plane A is flying 950 km/h and plane B is flying 850 km/h Hello, I would need some help with this please: Airplane A and airplane B move along vertical paths. At some point, from the penetration of their pathways, airplane A is 2300 km away and aircraft B 2000 km. Specify the minimum distance of both airplanes if airplane A is flying 950 km/h and airplane B is flying 850 km/h. So, I get that plane A is at A[0,-2300] and plane B is at B[-2000,0] at some point. I know how to find a global minimum but I don't know how to make the quadratic equation from these parameters. 2. Originally Posted by martinekk Airplane A and airplane B move along vertical paths. Huh? How can planes fly "vertically"? 3. Couple of things: 1) "Penetration of their pathways" - Is that a point of intersection? If it is, then the minimum distance is 0 km. If it isn't, then this is difficult to interpret. Better translation? 2) If they are both on vertical paths, and you have them both at y = 0, the minimum distance is constant at 300 km. 3) If, as you actually have them, x = 0 and A is chasing B, eventually the distance will be zero. Obviously, that is the minimum distance. Can you provide the EXACT text of the problem (maybe an accompanying drawing) and show YOUR best work? That would be very beneficial. Note: I assumed "vertical" flight meant the positive y-direction from a bird's-eye view. 4. Uh, sorry for my bad English. Yeah, I mean point of intersection. Imagine that plane A is copying axis Y and plane B is copying axis X. At some point from intersection plane A is 2300 km before intersection and plane B is 2000 km before it, but planes have different speed (A=950 km/h, B=850 km/h). From my calculations B will be there faster (2000/850 is lower than 2300/950). I know that the smallest distance between A and B is about 43.15 km but I need the equation to calculate it step by step. Thanks for reply and hope you get me now. 5. I think you mean something like this: Originally Posted by martinekk imagine that plane A is following axis Y and plane B is following axis X. At some time plane A is 2300 km before intersection and plane B is 2000 km before it, but planes have different speed (A=950 km/h, B=850 km/h) From my calculations B will be there faster (2000/850 is lower than 2300/950) I know that smallest distance between A and B is about 43.15 km but I need the equation to calculate it step by step. I would use the Pythagorean Theorem (distance formula) to express the SQUARE of the distance between the planes at time t, and try to minimize that. It will be a quadratic equation in t. If you need more help, please show whatever work you have done, so we can guide you further. 6. Yeah, thank you. I need to know how to make the square equation from parameters 2000, 2300, 950 and 850 to calculate the minimum. Thank you. 7. You need an equation that represents the location. The speed is constant. The equation will be linear. Let's see what you get. 8. To use the distance formula, you need expressions in terms of time (t) for the x-coordinate of plane B and the y-coordinate of plane A. Here's a hint: Let time be measured in seconds. A bug crawls along the x-axis (from right to left). The bug starts at (16,0) and travels 0.35 cm each second. An expression for the bug's x-coordinate in terms of t is: 16 - 0.35 t 9. Yeah I think that's enough. Thank you very much.
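A sketch of the calculation the thread builds toward (an editor's addition, not part of the thread), using the distance-squared approach suggested above:

```python
import math

# Positions after t hours, measured from the intersection point:
# plane A moves along the y-axis, plane B along the x-axis.
def dist_sq(t):
    x_B = 2000 - 850 * t
    y_A = 2300 - 950 * t
    return x_B**2 + y_A**2

# dist_sq(t) is quadratic in t; setting its derivative to zero gives the minimum.
t_min = (2000 * 850 + 2300 * 950) / (850**2 + 950**2)
print(t_min, math.sqrt(dist_sq(t_min)))   # ~2.39 h, ~43.1 km
```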
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050763607025146, "perplexity": 1116.0096862523774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00204.warc.gz"}
https://arxiv.org/abs/math-ph/0104022
math-ph # Title: Number operators for Riemannian manifolds Authors: Ed Bueler (U. of Alaska, Fairbanks) Abstract: The Dirac operator d+delta on the Hodge complex of a Riemannian manifold is regarded as an annihilation operator A. On a weighted space L_mu^2 Omega, [A,A*] acts as multiplication by a positive constant on excited states if and only if the logarithm of the measure density of mu satisfies a pair of equations. The equations are equivalent to the existence of a harmonic distance function on M. Under these conditions N=A*A has spectrum containing the nonnegative integers. Nonflat, nonproduct examples are given. The results are summarized as a quantum version of the Cheeger--Gromoll splitting theorem. Comments: 17 pages Subjects: Mathematical Physics (math-ph); Differential Geometry (math.DG) MSC classes: 58J50; 81S10 Cite as: arXiv:math-ph/0104022 (or arXiv:math-ph/0104022v1 for this version) ## Submission history From: Ed Bueler [v1] Tue, 17 Apr 2001 20:42:22 GMT (15kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822538435459137, "perplexity": 3190.8830002130153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886815.20/warc/CC-MAIN-20180117043259-20180117063259-00327.warc.gz"}
http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html
## What my deep model doesn't know... July 3rd, 2015 I come from the Cambridge machine learning group. More than once I heard people referring to us as "the most Bayesian machine learning group in the world". I mean, we do work with probabilistic models and uncertainty on a daily basis. Maybe that's why it felt so weird playing with those deep learning models (I know, joining the party very late). You see, I spent the last several years working mostly with Gaussian processes, modelling probability distributions over functions. I'm used to uncertainty bounds for decision making, in a similar way many biologists rely on model uncertainty to analyse their data. Working with point estimates alone felt weird to me. I couldn't tell whether the new model I was playing with was making sensible predictions or just guessing at random. I'm certain you've come across this problem yourself, either analysing data or solving some tasks, where you wished you could tell whether your model is certain about its output, asking yourself "maybe I need to use more diverse data? or perhaps change the model?". Most deep learning tools operate in a very different setting to the probabilistic models which possess this invaluable uncertainty information, as one would believe. Myself? I recently spent some time trying to understand why these deep learning models work so well – trying to relate them to new research from the last couple of years. I was quite surprised to see how close these were to my beloved Gaussian processes. I was even more surprised to see that we can get uncertainty information from these deep learning models for free – without changing a thing. ##### This is what the uncertainty we'll get looks like. We'll play with this interactive demo below. I've put the models for the examples below here so you can play with them yourself. The models use Caffe for both the neural networks and convolutional neural networks, so you'll have to compile some c++ code to use them. This might or might not be a pain, depending on your experience with this sort of stuff. You can also find here the JavaScript code for the interactive demos below – uncertainty in regression and deep reinforcement learning extending on Karpathy's framework. Update (29/09/2015): I spotted a typo in the calculation of $\tau$; this has been fixed below. If you use the technique with small datasets you should get much better uncertainty estimates now! Also, I added new results to the arXiv paper (Gal and Ghahramani) comparing dropout's uncertainty to the uncertainty of other popular techniques. ## Why should I care about uncertainty? If you give me several pictures of cats and dogs – and then you ask me to classify a new cat photo – I should return a prediction with rather high confidence. But if you give me a photo of an ostrich and force my hand to decide if it's a cat or a dog – I better return a prediction with very low confidence. Researchers use this sort of information all the time in the life sciences (for example have a look at the Nature papers published by Krzywinski and Altman, Herzog and Ostwald, Nuzzo, the entertaining case in Woolston, and the very recent survey by my supervisor Ghahramani). In such fields it is quite important to say whether you are confident about your estimate or not. For example, if your physician advises you to use certain drugs based on some analysis of your medical record, they better know if the person analysing it is certain about the analysis result or not. 
##### Deep reinforcement learning example: a Roomba navigating within a room; we'll play with this interactive demo below as well. This sort of information lies at the foundations of artificial intelligence as well. Imagine an agent (a Roomba vacuum for example) that needs to learn about its environment (your living room) based on its actions (rolling around in different directions). It can decide to go forward and might bump into a wall. Encouraging the Roomba not to crash into walls with positive rewards, over time it will learn to avoid them in order to maximise its rewards. This setting is known as reinforcement learning. The Roomba is required to explore its environment looking for these rewards, and that's where uncertainty comes into play. The Roomba will try to minimise its uncertainty about different actions – and trade-off between this exploration, and exploitation of what it already knows. Below we'll go step-by-step through a concrete example following the recent (and now famous) DeepMind paper on deep reinforcement learning. Relying on Karpathy's brilliant JavaScript implementation we'll play with some code demonstrating the point above and see that the model exploiting uncertainty information converges much faster. Uncertainty information is also important for the practitioner. You might use deep learning models on a daily basis to solve different tasks in vision or linguistics. Understanding if your model is under-confident or falsely over-confident can help you get better performance out of it. We'll see below some simplified scenarios where model uncertainty increases far from the data. Recognising that your test data is far from your training data you could easily augment the training data accordingly. ## Ok, how can I get this uncertainty information? For some of these problems above Bayesian techniques would have been used traditionally. And for these problems researchers have recently shifted to deep learning techniques. But this had to come with a cost. I mean, how can you get model uncertainty out of these deep learning tools? (by the way, using softmax to get probabilities is actually not enough to obtain model uncertainty – we'll go over an example below). Well, it's long been known that these deep tools can be related to Gaussian processes, the ones I mentioned above. Take a neural network (a recursive application of weighted linear functions followed by non-linear functions), put a probability distribution over each weight (a normal distribution for example), and with infinitely many weights you recover a Gaussian process (see Neal or Williams for more details). For a finite number of weights you can still get uncertainty by placing distributions over the weights. This was originally proposed by MacKay in 1992, and extended by Neal in 1995 (update: following Yann LeCun's comment below I did some more digging into the history of these ideas; Denker and LeCun have already worked on such ideas in 1991, and cite Tishby, Levin, and Solla from 1989 which place distributions over networks' weights, in which they mention they extend on the ideas suggested in Denker, Schwartz, Wittner, Solla, Howard, Jackel, and Hopfield from 1987, but I didn't go over the book in depth). More recently these ideas have been resurrected under different names with variational techniques by Graves in 2011, Kingma et al. and Blundell et al. 
from DeepMind in 2015 (although these techniques used with Bayesian neural networks can be traced back as far as Hinton and van Camp dating 1993 and Barber and Bishop from 1998). But these models are very difficult to work with – often requiring many more parameters to optimise – and haven't really caught-on. ##### A network with infinitely many weights with a distribution on each weight is a Gaussian process. The same network with finitely many weights is known as a Bayesian neural network. Using these is quite difficult though, and they haven't really caught-on. Or so you'd think. I think that's why I was so surprised that dropout – a ubiquitous technique that's been in use in deep learning for several years now – can give us principled uncertainty estimates. Principled in the sense that the uncertainty estimates basically approximate those of our Gaussian process. Take your deep learning model in which you used dropout to avoid over-fitting – and you can extract model uncertainty without changing a single thing. Intuitively, you can think about your finite model as an approximation to a Gaussian process. When you optimise your objective, you minimise some "distance" (KL divergence to be more exact) between your model and the Gaussian process. I'll explain this in more detail below. But before this, let's recall what dropout is and introduce the Gaussian process quickly, and look at some examples of what this uncertainty obtained from dropout networks looks like. ## Deep models, Dropout, and Gaussian processes If you feel confident with dropout and Gaussian processes you can safely skip to the next section in which we will learn how to extract the uncertainty information out of dropout networks, and play with the uncertainty we get from neural networks, convolutional neural networks, and deep reinforcement learning. But to get everyone on equal ground let's go over each concept quickly. ### Dropout and Deep models Let's go over the dropout network model quickly for a single hidden layer and the task of regression. We denote by $\W_1, \W_2$ the weight matrices connecting the first layer to the hidden layer and connecting the hidden layer to the output layer respectively. These linearly transform the layers' inputs before applying some element-wise non-linearity $\sigma(\cdot)$. We denote by $\Bb$ the biases by which we shift the input of the non-linearity. We assume the model outputs $D$ dimensional vectors while its input is $Q$ dimensional vectors, with $K$ hidden units. We thus have $\W_1$ is a $Q \times K$ matrix, $\W_2$ is a $K \times D$ matrix, and $\Bb$ is a $K$ dimensional vector. A standard network would output $\widehat{\y} = \sigma(\x \W_1 + \Bb) \W_2$ given some input $\x$. Dropout is a technique used to avoid over-fitting in these simple networks – a situation where the model can't generalise well from its training data to the test data. It was introduced several years ago by Hinton et al. and studied more extensively in (Srivastava et al.). To use dropout we sample two binary vectors $\sBb_1, \sBb_2$ of dimensions $Q$ and $K$ respectively. The elements of vector $\sBb_i$ take value 1 with probability $0 \le p_i \le 1$ for $i = 1,2$. Given an input $\x$, we set $1 - p_1$ proportion of the elements of the input to zero: $\x \sBb_1$ (to keep notation clean we will write $\sBb_1$ when we mean $\diag(\sBb_1)$ with the $\diag(\cdot)$ operator mapping a vector to a diagonal matrix whose diagonal is the elements of the vector). 
The output of the first layer is given by $\sigma(\x \sBb_1 \W_1 + \Bb)$, in which we randomly set $1 - p_2$ proportion of the elements to zero, and linearly transform to give the dropout model's output $\widehat{\y} = \sigma(\x \sBb_1 \W_1 + \Bb) \sBb_2 \W_2$. We repeat this for multiple layers. ##### Dropout is applied by simply dropping-out units at random with a certain probability during training. To use the network for regression we might use the Euclidean loss, $E = \frac{1}{2N} \sum_{n=1}^N ||\y_n - \widehat{\y}_n||^2_2$ where $\{\y_1, ..., \y_N\}$ are $N$ observed outputs, and $\{\widehat{\y}_1, ..., \widehat{\y}_N\}$ are the outputs of the model with corresponding observed inputs $\{ \x_1, ..., \x_N \}$. During optimisation a regularisation term is often added. We often use $L_2$ regularisation weighted by some weight decay $\wd$ (alternatively, the derivatives might be scaled), resulting in a minimisation objective (often referred to as cost), \begin{align*} \label{eq:L:dropout} \cL_{\text{dropout}} := E + \wd \big( &||\W_1||^2_2 + ||\W_2||^2_2 \notag\\ &+ ||\Bb||^2_2 \big). \end{align*} Note that optimising this objective is equivalent to scaling the derivatives of the cost by the learning rate and the derivatives of the regularisation by the weight decay after back-propagation, and this is how this optimisation is often implemented. We sample new realisations for the binary vectors $\sBb_i$ for every input point and every forward pass through the model (evaluating the model's output), and use the same values in the backward pass (propagating the derivatives to the parameters to be optimised $\W_1,\W_2,\Bb$). The dropped weights $\sBb_1\W_1$ and $\sBb_2\W_2$ are often scaled by $\frac{1}{p_i}$ to maintain constant output magnitude. At test time we do not sample any variables and simply use the full weight matrices $\W_1,\W_2,\Bb$. This model can easily be generalised to multiple layers and classification. There are many open source packages implementing this model (such as Pylearn2 and Caffe). ### Gaussian processes The Gaussian process is a powerful tool in statistics that allows us to model distributions over functions. It has been applied in both the supervised and unsupervised domains, for both regression and classification tasks (Rasmussen and Williams, Titsias and Lawrence). The Gaussian process offers nice properties such as uncertainty estimates over the function values, robustness to over-fitting, and principled ways for hyper-parameter tuning. Given a training dataset consisting of $N$ inputs $\{ \x_1, ..., \x_N \}$ and their corresponding outputs $\{\y_1, ..., \y_N\}$, we would like to estimate a function $\y = \f(\mathbf{x})$ that is likely to have generated our observations. To keep notation clean, we denote the inputs $\X \in \R^{N \times Q}$ and the outputs $\Y \in \R^{N \times D}$. ##### This is what a Gaussian process posterior looks like with 4 data points and a squared exponential covariance function. The bold blue line is the predictive mean, while the light blue shade is the predictive uncertainty (2 standard deviations). The model uncertainty is small near the data, and increases as we move away from the data points. What is a function that is likely to have generated our data? Following the Bayesian approach we would put some prior distribution over the space of functions $p(\f)$. This distribution represents our prior belief as to which functions are more likely and which are less likely to have generated our data. 
We then look for the posterior distribution over the space of functions given our dataset $(\X, \Y)$: $$p(\f | \X, \Y) \propto p(\Y | \X, \f) p(\f).$$ This distribution captures the most likely functions given our observed data. We can then perform a prediction with a test point $\x^*$: \begin{align*} p&(\y^* | \x^*, \X, \Y) \\ & = \int p(\y^* | \f^*) p(\f^* | \x^*, \X, \Y) \td \f^* \end{align*} The expectation of $\y^*$ is called the predictive mean of the model, and its variance is called the predictive uncertainty. By modelling our distribution over the space of functions with a Gaussian process we can analytically evaluate its corresponding posterior in regression tasks, and estimate the posterior in classification tasks. In practice what this means is that for regression we place a joint Gaussian distribution over all function values, \begin{align*} \label{eq:generative_model_reg} \F \svert \X &\sim \N(\bz, \K(\X, \X)) \\ \Y \svert \F &\sim \N(\F, \tau^{-1} \I_N) \notag \end{align*} with some precision hyper-parameter $\tau$ and where $\I_N$ is the identity matrix with dimensions $N \times N$. To model the data we have to choose a covariance function $\K(\X, \X)$ for the Gaussian distribution. This function defines the similarity between every pair of input points $\K(\mathbf{x}_i, \mathbf{x}_j)$. Given a finite dataset of size $N$ this function induces an $N \times N$ covariance matrix $\K := \K(\X, \X)$. This covariance represents our prior belief as to what the model uncertainty should look like. For example we may choose a stationary squared exponential covariance function, for which model uncertainty increases far from the data. Certain non-stationary covariance functions correspond to TanH (hyperbolic tangent) or ReLU (rectified linear) non-linearities. We will see in the examples below what uncertainty these capture. If you want to know how these correspond to the Gaussian process, this is explained afterwards. ## Fun with uncertainty Ok. Let's have some fun with our dropout networks' uncertainty. We'll go over some cool examples of regression and image classification, and in the next section we'll play with deep reinforcement learning. But we still didn't talk about how to get the uncertainty information out of our dropout networks... Well, that's extremely simple. Take any network trained with dropout and some input $\x^*$. We're looking for the expected model output given our input — the predictive mean $\mathbb{E} (\y^*)$ — and how much the model is confident in its prediction — the predictive variance $\Var \big( \y^* \big)$. As we'll see below, our dropout network is simply a Gaussian process approximation, so in regression tasks it will have some model precision (the inverse of our assumed observation noise). How do we get this model precision? (update: I spotted a typo in the calculation of $\tau$; this has now been fixed) First, define a prior length-scale $l$. This captures our belief over the function frequency. A short length-scale $l$ corresponds to high frequency data, and a long length-scale corresponds to low frequency data. Take the length-scale squared, and divide it by the weight decay. We then scale the result by half the dropout probability over the number of data points. Mathematically this results in our Gaussian process precision $\tau = \frac{l^2 p}{2 N \wd}$ we talked about above (want to see why? go over the section Why does it even make sense? below). 
Note that $p$ here is the probability of the units not being dropped — in most implementations $p_{\text{code}}$ is defined as the probability of the units to be dropped, thus $p:=1-p_{\text{code}}$ should be used when calculating $\tau$. Next, simulate a network output with input $\x^*$, treating dropout as if we were using it during training time. By that I mean simply drop-out random units in the network at test time. Repeat this several times (say for $T$ repetitions) with different units dropped every time, and collect the results $\{ \widehat{\y}_t^*(\x^*) \}$. These are empirical samples from our approximate predictive posterior. We can get an empirical estimator for the predictive mean of our approximate posterior as well as the predictive variance (our uncertainty) from these samples. We simply follow these two simple equations: \begin{align*} \mathbb{E} (\y^*) &\approx \frac{1}{T} \sum_{t=1}^T \widehat{\y}_t^*(\x^*) \\ \Var \big( \y^* \big) &\approx \tau^{-1} \I_D \\ &\quad+ \frac{1}{T} \sum_{t=1}^T \widehat{\y}_t^*(\x^*)^T \widehat{\y}_t^*(\x^*) \\ &\quad- \mathbb{E} (\y^*)^T \mathbb{E} (\y^*) \end{align*} The first equation was given before in (Srivastava et al.). There it was introduced as model averaging, and it was explained that scaling the weights at test time without dropout gives a reasonable approximation to this equation. This claim was supported by an empirical evaluation. In the next blog post we will see that for some networks (such as convolutional neural networks) this approximation is not sufficient, and can be improved considerably. The second equation is simply the sample variance of $T$ forward passes through the network plus the inverse model precision. Note that the vectors above are row vectors, and that the products are outer-products. As you'd expect, these two equations are just as easy to implement. We can use the following few lines of Python code to get the predictive mean and uncertainty:

```python
import numpy

# T stochastic forward passes through the dropout model
probs = []
for _ in xrange(T):
    probs += [model.output_probs(input_x)]
predictive_mean = numpy.mean(probs, axis=0)
predictive_variance = numpy.var(probs, axis=0)
tau = l**2 * (1 - model.p) / (2 * N * model.weight_decay)
predictive_variance += tau**-1
```

##### Python code to obtain predictive mean and uncertainty from dropout networks

Let's have a look at this uncertainty for a simple regression problem.

### A simple interactive demo

What does our dropout network uncertainty look like? That's an important question, as different network structures and different non-linearities would correspond to different prior beliefs as to what we expect our uncertainty to look like. This property is shared with the Gaussian process as well. Different Gaussian process covariance functions would result in different uncertainty estimates. Below is a simple demo (extending on Karpathy's deep learning JavaScript framework) that performs regression with a tiny dropout network. We use 2 hidden layers with 20 ReLU units and 20 sigmoid units, and dropout with probability $0.05$ (as the network is really small) before every weight layer. After every mini-batch update we evaluate the network on the entire space. We plot the current stochastic forward pass through the dropout network (black line), as well as the average of the last 100 stochastic forward passes (predictive mean, blue line) as we learn. Shades of blue denote half a predictive standard deviation for our estimate. You can find the code for this demo here.
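If you want to see the mechanics of these stochastic forward passes end to end, here is a small self-contained numpy sketch loosely modelled on the demo network described above (2 hidden layers of 20 ReLU and 20 sigmoid units, dropout probability 0.05 before every weight layer). The weights are untrained random values and every detail here is my own illustrative assumption, not the actual demo code linked above.

```python
import numpy as np

rng = np.random.default_rng(1)

H, p_drop, T = 20, 0.05, 100
W1, b1 = rng.normal(scale=0.5, size=(1, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.5, size=(H, H)), np.zeros(H)
W3, b3 = rng.normal(scale=0.5, size=(H, 1)), np.zeros(1)

def stochastic_forward(x):
    """One forward pass with dropout kept *on*, as described above."""
    keep = 1.0 - p_drop
    mask = lambda n: rng.binomial(1, keep, size=n) / keep
    h1 = np.maximum(0.0, (x * mask(x.shape[1])) @ W1 + b1)   # 20 ReLU units
    h2 = 1.0 / (1.0 + np.exp(-((h1 * mask(H)) @ W2 + b2)))   # 20 sigmoid units
    return (h2 * mask(H)) @ W3 + b3

x_star = np.array([[0.5]])                   # an arbitrary test input
samples = np.stack([stochastic_forward(x_star) for _ in range(T)])
predictive_mean = samples.mean(axis=0)
predictive_variance = samples.var(axis=0)    # add tau**-1 for the full estimate
```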
##### Simple regression problem. Current stochastic forward pass through the dropout network is plotted in grey and the mean of the last 100 forward passes is plotted in blue. Shades of blue denote half a standard deviation.

You can click on the plot to add new data points and see how the uncertainty changes! Try adding a group of new points to the far right and see how the uncertainty changes between these and the points at the centre (you might have to restart the network training after adding new points). Now that we have a feeling as to what the uncertainty looks like, we can perform some simple regression experiments to examine the properties of our uncertainty estimate with different networks and different tasks. We'll extend on this to classification and deep reinforcement learning in the following sections.

### Understanding our models

##### This is what the CO$_2$ dataset looks like before pre-processing.

To see what the uncertainty looks like far away from our training data, we'll use a simple one dimensional regression problem with a subset of the atmospheric CO$_2$ concentrations dataset. This dataset was derived from in situ air samples collected at Mauna Loa Observatory, Hawaii, since 1958 (you can get the raw data from the U.S. Department of Commerce National Oceanic and Atmospheric Administration, collected by Keeling et al.). The dataset is rather small, consisting of about 200 data points. We'll centre and normalise it. So what does our data look like? To the right you'll see the raw data, and below you'll see the processed training data (in red, left of the dashed blue line) and a missing section to the right of the dashed blue line. I already fitted a neural network with 5 hidden layers, 1024 units in each layer, and ReLU non-linearities to the data. I used dropout with probability $0.1$ after each weight layer (as the network is fairly small).

##### Standard dropout network without using uncertainty information (5 hidden layers, ReLU non-linearity)

The point marked with a dashed red line is a point far away from the training data. I hope you would agree with me that a good model, when asked about point $x^*$, should probably say "if you force my hand I'll guess X, but actually I have no idea what's going on". Standard dropout confidently predicts a clearly nonsensical value for the point — as the function is periodic. We can't really tell whether we can trust the model's prediction or not. Now let's use the uncertainty information equations we introduced above. Using exactly the same network (we don't have to re-train the model, just perform predictions in a different way) we get the following revealing information:

##### Exactly the same dropout network performing predictions using uncertainty information and predictive mean (5 hidden layers, ReLU non-linearity). Each shade of blue represents half a standard deviation predictive uncertainty.

It seems that the model does capture a large amount of uncertainty information about the far away test point! If we were analysing some data for a physician, now we could advise the physician not to recommend some treatment as the results are inconclusive. If we were trying to solve some task we now know that we need to augment our dataset and collect more data.
This uncertainty information is very similar to the uncertainty that we would get from a Gaussian process with a squared exponential covariance function:

##### Gaussian process with SE covariance function on the same dataset

We got a different uncertainty estimate that still increases far from the data. It is not surprising that the uncertainty looks different — the ReLU non-linearity corresponds to a different Gaussian process covariance function. What would happen if we were to use a different non-linearity (i.e. a different covariance function)? Let's try a TanH non-linearity:

##### Dropout network using uncertainty information and TanH non-linearity (5 hidden layers)

It seems that the uncertainty doesn't increase far from the data... This might be because TanH saturates whereas ReLU does not. This non-linearity will not be appropriate for tasks where you'd expect your uncertainty to increase as you go further away from the data. What about different network architectures or different dropout probabilities? For the TanH model I tested the uncertainty using both dropout probability $0.1$ and dropout probability $0.2$. The models initialised with dropout probability $0.1$ initially do show smaller uncertainty than the ones initialised with dropout probability $0.2$, but towards the end of the optimisation, when the model has converged, the uncertainty is almost indistinguishable. It also seems that using a smaller number of layers doesn't affect the resulting uncertainty — experimenting with 4 hidden layers instead of 5, there was no significant difference in the results. As a side note, it's quite funny to mention that it took me 5 minutes to get the Gaussian process to fit the dataset above, and 3 days for the dropout networks. But that's probably just because I've never worked with these models before, and none of my Gaussian process intuition carried over to the multiple layers case. Also, a single hidden layer network works just as well, sharing many characteristics with the shallow Gaussian process.

##### This is what the solar irradiance dataset looks like before pre-processing (in brown).

Let's look at an interpolation example. We'll use the reconstructed solar irradiance dataset which is described in a nice article on NASA's website (Lean, 2004). This dataset has similar characteristics to the previous one and we process it in a similar way. But this time, instead of extrapolating far from the data, we'll interpolate missing sections in the dataset. Just to get a sense of what the data looks like, to the right is the raw data and below is a Gaussian process with a squared exponential covariance function fitted to the processed data:

##### Gaussian process with SE covariance function

Again, the observed function is given in red, with missing sections given in green. In blue is the model's prediction both on the observed subsets and missing subsets. In light blue is predictive uncertainty with 2 standard deviations. It seems that the model manages to capture the function very well with increased uncertainty over the missing sections. Let's see what a dropout network looks like on this dataset. We'll use the same network structure as before with 5 hidden layers and ReLU non-linearities:

##### Dropout network using uncertainty information on the same dataset (5 hidden layers, ReLU non-linearity)

It seems that this model also fits the data very well, but with smaller model uncertainty. This is actually a well known limitation of variational approximations.
As dropout can be seen as a variational approximation to the Gaussian process, this is not surprising at all. It is possible to correct this under-estimation of the variance and I will write another post about this in the near future.

### Image Classification

Let's look at a more interesting example — image classification. We'll classify MNIST digits (LeCun and Cortes) using the popular LeNet convolutional neural network (LeCun et al.). In this model we feed our prediction into a softmax which gives us probabilities for the different classes (the 10 digits). Interestingly enough, these probabilities are not enough to see if our model is certain in its prediction or not. This is because the standard model would pass the predictive mean through the softmax rather than the entire distribution. Let's look at an idealised binary classification example. Passing a point estimate of the mean of a function (a TanH function for simplicity, solid line on the left in the figure below) through a softmax (solid line on the right in the figure below) results in highly confident extrapolations with $x^*$ (a point far from the training data) classified as class $1$ with probability $1$. However, passing the distribution (shaded area on the left) through a softmax (shaded area on the right) reflects classification uncertainty better far from the training data. Taking the mean of this distribution passed through the softmax we get class $1$ with probability $0.5$ — the model's true prediction.

##### A sketch of softmax input and output for an idealised binary classification problem. Training data is given between the dashed grey lines. Function point estimate is shown with a solid line (TanH for simplicity — left). Function uncertainty is shown with a shaded area. Marked with a dashed red line is a point $x^*$ far from the training data. Ignoring function uncertainty, point $x^*$ is classified as class 1 with probability 1.

Ok, so let's pass the entire distribution through the softmax instead of the mean alone. This is actually very easy — we just simulate samples through the network and average the softmax output. Let's evaluate our convnet trained over MNIST with dropout applied after the last inner-product layer (with probability 0.5 as the network is fairly large). We'll evaluate model predictions over the following sequence of images, which correspond to some projection in the image space:

##### Image inputs (a rotated digit) to our dropout LeNet network

These images correspond to our $X$ axis in the idealised depiction above. Let's visualise the histogram of 100 samples we obtain by simulating forward passes through our dropout LeNet network:

##### A scatter of 100 forward passes of the softmax input and output for dropout LeNet. On the $X$ axis is a rotated image of the digit 1. The input is classified as digit 5 for images 6-7, even though model uncertainty is extremely large.

Class 7 has low uncertainty for the right-most images. This is because the "uncertainty envelope" of the softmax input for these images is far away from the uncertainty envelopes of all other classes. In contrast, the uncertainty envelope of class 5 for the middle images intersects the envelopes of some of the other classes (even though its mean is higher) — resulting in large uncertainty for the softmax output. It is important to note that the model uncertainty in the softmax output can be summarised by taking the mean of the distribution.
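To make the "simulate samples and average the softmax output" recipe concrete, here is a minimal sketch. The `stochastic_logits` callable is a hypothetical stand-in for a single dropout forward pass returning the softmax input of whatever network you use; it is not part of Caffe or any other particular library.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_class_probs(stochastic_logits, x, T=100):
    """Average the softmax *output* over T stochastic forward passes,
    instead of passing the mean softmax input through the softmax once."""
    samples = np.stack([softmax(stochastic_logits(x)) for _ in range(T)])
    return samples.mean(axis=0)              # summarises the predictive distribution
```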
In the idealised example above this would result in softmax output $0.5$ for point $x^*$ (instead of softmax output $1$) and here it will result in a lower softmax output that might result in a different image class. This sort of information can help us in classification tasks — obtaining higher classification accuracies as will be explained in the next blog post. It also helps us analyse our model and decide whether we have enough data or if the model is specified correctly. ## Uncertainty in deep reinforcement learning Let's do something more exciting with our uncertainty information. We can use this information for deep reinforcement learning, where we use a network to learn what actions an independent agent (for example a Roomba or a humanoid robot) should take to solve a given task (for example clean your apartment or eliminate humanity). In reinforcement learning we get rewards for different actions, for example a $-5$ reward for walking into a wall and a $+10$ reward for collecting food (or dirt if you're a Roomba). However our environment can be stochastic and ever changing. We want to maximise our expected reward over time, basically learn what actions are appropriate in different situations. For example, if I see food in front of me I should go forward rather than go left. We do this by exploring our environment, ideally minimising our uncertainty about rewards resulting from different states and actions we take. Recent research from DeepMind (a blue skies research company acquired by Google a couple of years ago), through heavy engineering efforts, managed to demonstrate human level game playing of various Atari games. They used a neural network to control what actions the agent should take in different states (Mnih et al.). This approach is believed by some to be a starting point towards solving AI. This work by DeepMind extends on several existing approaches in the field (for example see neural fitted Q-learning by Riedmiller). However no agent uncertainty is modelled with this approach, and an epsilon-greedy behavioural policy is used instead (taking random actions with probability $\epsilon$ and optimal actions otherwise). Gaussian processes have been used in the past to represent agent uncertainty, but did not scale well beyond simple toy problems (see PILCO for example, Deisenroth and Rasmussen). Regarding our dropout network as a Gaussian process approximation we can use its uncertainty estimates for the same task without any additional cost. ##### Our deep reinforcement learning setting. Let's look at a more concrete example. We'll use Karpathy's JavaScript implementation as a running example. Karpathy implemented the main ideas from Mnih et al. within a simpler setting that can run straight in your browser without the need to install anything. Instead of an Atari game, we simulate an agent in a 2D world with 9 eyes pointing in different angles ahead (depicted in the figure to the right). Each eye can sense a single pixel intensity of 3 colours. The agent navigates by using one of 5 actions controlling two motors at its base. Each action turns the motors at different angles and different speeds. The environment consists of red circles which give the agent a positive reward of $+5$ for reaching, and yellow circles which result in a negative reward of $-6$. The agent is further rewarded for not looking at (white) walls, and for walking in a straight line. 
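The rest of this section compares two behavioural policies: the epsilon-greedy policy just mentioned, and Thompson sampling using dropout uncertainty, which is introduced below. As a preview, both can be sketched in a few lines; the `q_values` callables here are hypothetical placeholders for the agent's Q-network, not Karpathy's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def epsilon_greedy_action(q_values, state, epsilon, n_actions):
    """Random action with probability epsilon, otherwise the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state)))

def thompson_action(q_values_with_dropout, state):
    """Draw one realisation of the Q-function (a single stochastic forward pass
    through the dropout network) and act greedily with respect to it."""
    return int(np.argmax(q_values_with_dropout(state)))
```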
I had to change the reward function slightly as it seems the agents I was playing with managed to exploit a bug in it (looking at a wall with 4 eyes while walking into it resulted in a positive reward; I basically removed the line `proximity_reward = Math.min(1.0, proximity_reward * 2);` in the rldemo.js file). At every point in time the agent evaluates its belief as to the quality of all possible actions it can take in its current state (denoted as the $Q$-function), and then proceeds to take the action with the highest value. This $Q$-function captures the agent's subjective belief of the value of a given state $s$. This value is defined as the expected reward over time resulting from taking action $a$ at state $s$, with the expectation being with respect to the agent's current understanding of how the world works (you can read more about reinforcement learning in this very nice book by Szepesvári). We use "replay" to train our network — in which we collect tuples of state, action, resulting state, and resulting reward as training data for the network. We then perform one gradient descent step with a mini-batch composed of a random subset of the collection of tuples. The behavioural policy used in Mnih et al. and Karpathy's implementation is epsilon-greedy with a decreasing schedule. In this policy the agent starts by performing random actions to collect some data, and then reduces the probability of performing random actions. Instead of performing random actions the agent would evaluate the network's output on the current state and choose the action with the highest value. Now, instead of that we can try to minimise our network's uncertainty. This is actually extremely simple: we just use Thompson sampling (Thompson). Thompson sampling is a behavioural policy that encourages the agent to explore its surroundings by drawing a realisation from our current belief over the world and choosing the action with the highest value following that belief. In our case this is simply done by simulating a stochastic forward pass through the dropout network and choosing the action with the highest value. If you can't run the JavaScript below (it takes some time to converge...), this is what the average reward over time looks like for both epsilon greedy and Thompson sampling (on log scale):

##### Average reward over time for epsilon greedy (green) and Thompson sampling (blue) on log scale. Thompson sampling converges an order of magnitude faster than epsilon greedy.

Below you'll see the agents learning an appropriate policy over time. For epsilon greedy I use exactly the same implementation as Karpathy's, and for dropout I added a single dropout layer with probability $0.2$ (as the network is fairly small). The top of the plot is a graph showing the average reward on log scale. The first 3000 moves for both agents are random, used to collect initial data before training. This is shown with a red shade in the graph. The algorithm is stochastic, so don't be surprised if sometimes the blue curve (Thompson sampling) goes below the green one (epsilon greedy). Note that the $X$ axis of the graph shows the number of batches divided by $100$. You can find the code for this demo here.

##### Deep reinforcement learning demo with two behavioural policies: epsilon greedy (green), and Thompson sampling using dropout uncertainty (blue). The agents (blue and green discs) are rewarded for eating red things and walking straight, and penalised for eating yellow things and walking into walls.
Both agents move at random for the first 3000 moves (the red shade in the graph). The $X$ axis of the plot shows the number of batches divided by 500 on log scale and the $Y$ axis shows average reward. (The code seems to work quickest on Chrome). It's interesting to mention that apart from the much faster convergence, using dropout also circumvents over-fitting in the network. But because the network is so small we can't really use dropout properly — after every layer — as the variance would be too large. We'll discuss this in more detail below in diving into the derivation. It's also worth mentioning some difficulties with Thompson sampling. As we sample based on the model uncertainty we might get weird results from under-estimation of the uncertainty. This can be fixed rather easily though and will be explained in a future post. Another difficulty is that the algorithm doesn't distinguish between uncertainty about the world (which is what we care about) and uncertainty resulting from misspecification of our network. So if our network is under-fitting its data, not being able to reduce its uncertainty appropriately, the model will suffer.

## Why does it even make sense?

Let's see why dropout neural networks are identical to variational inference in Gaussian processes. We'll see that what we did above, averaging forward passes through the network, is equivalent to Monte Carlo integration over a Gaussian process posterior approximation. The derivation uses some long equations that mess up the page layout on mobile devices – so I put it here with a toggle to show and hide it easily. Tap here to show the derivation.

## Diving into the derivation

The derivation above sheds light on many interesting properties of dropout and other "tricks of the trade" used in deep learning. Some of these are described in the appendix of (Gal and Ghahramani). Here we'll go over deeper insights arising from the derivation. I'd like to thank Mark van der Wilk for some of the questions raised below. First, can we get uncertainty estimates over the weights in our dropout network? It might be a bit difficult to see this with the Bernoulli case, so for simplicity imagine that our approximating distribution is a single Gaussian distribution parametrised by its mean and standard deviation. In our variational inference setting we fit the entire distribution to our posterior, trying to approximate it as best we can. This involves optimising over the variational parameters — the mean and the standard deviation. This is standard in variational inference where we fit distributions rather than parameters, resulting in our robustness to over-fitting. The fitted Gaussian distribution captures our uncertainty estimate over the weights. Now, it's the same for the Bernoulli case. Optimising our variational lower bound, we are matching the Bernoulli approximate posterior to our true posterior. For comparison, the MAP estimate is obtained when we use a single delta function in our approximating distribution, in which case our integration over the parameters collapses to a single point estimate. Our Bernoulli approximation is a sum of two delta functions, resulting in the averaging of many possible models. The non-zero component in our mixture of Gaussians is the variational parameter we optimise over to fit the distributions. But the Bernoullis are not important at all for this! DropConnect (Wan et al.) or Multiplicative Gaussian Noise (section 10 in Srivastava et al.)
follow our interpretation exactly (and are equivalent to alternative approximating distributions). The example above of a single Gaussian can be neatly extended to these. So in short, yes, we can get uncertainty estimates over the features — this is the posterior $q_\theta(\bo)$ above! Why is the variance estimation of the model sensible? In the mixture of Gaussians approximating distribution (which is used to approximate the Bernoulli distribution) only the mean changes while the variance stays fixed, doesn't it? That's actually a bit more difficult to see. If we use our approximating distribution recursively in each layer and the architecture is deep enough, the means of the Gaussians would change to best represent the uncertainty we can't change. This is because we minimise the KL divergence from the full posterior, which would result in the approximating model fitting not just the first moment of the posterior (our predictive mean) but also the second moment (resulting in a sensible variance estimate). That's why, even though we optimise the mean alone for our variational distribution and the uncertainty of each single distribution is fixed, the model as a whole will approximate the posterior uncertainty as best it can. For the same reason it seems that optimising the dropout probabilities is not that important as long as they are sensible. During optimisation the weight means (our variational parameters) will simply adapt to match the true posterior as far as the model architecture allows it. Although, as mentioned in the appendix of (Gal and Ghahramani), we can also optimise over the dropout probabilities as these are just variational parameters. It's also quite cool to see that the dropout network, which was developed following empirical experimentation, is equivalent to using a popular variance reduction technique in our Gaussian process approximation above. More specifically, in the full derivation in the appendix of (Gal and Ghahramani), to match the dropout model, we have to re-parametrise the model so the random variables do not depend on any parameters, thus reducing the variance in our Monte Carlo estimator. You can read more about this in Kingma and Welling. This can also explain why dropout under-performs for networks which are small in comparison to dataset size. Presumably the estimator variance is just too large. The above developments also suggest a new interpretation of why dropout works so well as a regularisation technique. At the moment it is thought in the field that dropout works because of the noise it introduces. I would say that the opposite is true: dropout works despite the noise it introduces! By that I mean that the noise, interpreted as approximate integration, is a side-effect of integrating over the model parameters. If we could, we would evaluate the integrals analytically without introducing this additional noise. Indeed, that's what many approaches to Bayesian neural networks do in practice. It is also interesting to note that the posterior in the Gaussian process approximation above is not actually a Gaussian process itself. By integrating over the variables $\bo$ (which can also be seen as integrating over a random covariance function) the resulting distribution does not have Gaussian marginals any more. It is conditionally Gaussian however — when we condition on a specific covariance function we get Gaussian marginals.
This is also what happens when we put prior distributions over the Gaussian process hyper-parameters or priors over the covariance function, such as a Wishart process prior (Shah et al.). For a Wishart process prior we have analytical marginals, but the resulting distribution is not expressive enough for many applications. Another property of the approximation above, through the integration over the covariance function, is that we actually change the feature space over which the Gaussian process is defined. In normal Gaussian process models we have a fixed feature space given by a deterministic covariance function and only the weights of the different features change a-posteriori. In our case the covariance function is random and has a posterior conditioned on observed data. The Gaussian process feature space thus changes a-posteriori. ## What's next I think future research at the moment should concentrate on better uncertainty estimates for our models above. The fact that we can use Bernoulli approximating distributions to get reasonably good uncertainty estimates helps us in computationally demanding settings, but with alternative approximating distributions we should be able to improve on these uncertainty estimates. Using multiplicative Gaussian noise, multiplying the units by $\N(1,1)$ for example, might result in more accurate uncertainty estimates, and many other similarly expressive yet computationally efficient distributions exist out there. It will be really interesting to see principled and creative use of simple distributions that would result in powerful uncertainty estimates. ## Source code I put the models used with the examples above here so you can play with them yourself. The models use Caffe for both the neural networks and convolutional neural networks. You can also find here the code for the interactive demos using Karpathy's framework. ## Conclusions We saw that we can get model uncertainty from existing deep models without changing a single thing. Hopefully you'll find this useful in your research, be it data analysis in bioinformatics or image classification in vision systems. In the next post I'll go over the main results of Gal and Ghahramani showing how the insights above can be extended to get Bayesian convolutional neural networks, with state-of-the-art results on CIFAR-10. In a future post we'll use model uncertainty for adversarial inputs such as corrupted images that classify incorrectly with high confidence (have a look at intriguing properties of neural networks or breaking linear classifiers on ImageNet for more details). Adding or subtracting a single pixel from each input dimension is perceived as almost unchanged input to a human eye, but can change classification probabilities considerably. In the high dimensional input space the new corrupted image lies far from the data, and model uncertainty should increase for such inputs. If you'd like to learn more about Gaussian processes you can watch Carl Rasmussen's video lecture, Philipp Hennig's video lectures, or have a look at some notes from past Gaussian process summer schools. You can also go over the Gaussian Processes for Machine Learning book, available online. I also have several other past projects involving Gaussian processes, such as distributed inference in the Gaussian process with Mark van der Wilk and Carl E. 
Rasmussen (NIPS 2014), distribution estimation of vectors of discrete variables with stochastic variational inference with Yutian Chen and Zoubin Ghahramani (ICML 2015), variational inference in the sparse spectrum approximation to the Gaussian process with Richard Turner (ICML 2015), and a quick tutorial to Gaussian processes with Mark van der Wilk on arXiv. Our developments above also show that dropout can be seen as approximate inference in Bayesian neural networks, which I'll explain in more depth in the next post. In the mean time, for interesting recent research on Bayesian neural networks you can go over variational techniques to these (Graves from 2011, Gal and Ghahramani, Kingma et al. and Blundell et al. from 2015), Bayesian Dark Knowledge by Korattikara et al., Probabilistic Backpropagation by Miguel Hernández-Lobato and Ryan Adams, and stochastic EP by Li et al.

## Acknowledgements

I'd like to thank Christof Angermueller, Roger Frigola, Shane Gu, Rowan McAllister, Gabriel Synnaeve, Nilesh Tripuraneni, Yan Wu, and Prof Yoshua Bengio and Prof Phil Blunsom for their helpful comments on either the papers or the blog post above or just in general. Special thanks to Mark van der Wilk for the thought-provoking discussions on the approximation properties.

## Citations

Do you want to use these results for your research? You can cite Gal and Ghahramani (or download the bib file directly). There are also lots more results in the paper itself.
https://mathhothouse.me/category/isi-kolkatta-entrance-exam/
## Category Archives: ISI Kolkatta Entrance Exam

### Circles and System of Circles: IITJEE Mains: some solved problems I

Part I: Multiple Choice Questions:

Example 1: Locus of the mid-points of the chords of the circle $x^{2}+y^{2}=4$ which subtend a right angle at the centre is (a) $x+y=2$ (b) $x^{2}+y^{2}=1$ (c) $x^{2}+y^{2}=2$ (d) $x-y=0$

Solution 1: Let O be the centre of the circle $x^{2}+y^{2}=4$, and let AB be any chord of this circle, so that $\angle AOB=\pi /2$. Let $M(h,k)$ be the mid-point of AB. Then, OM is perpendicular to AB. Hence, $(AB)^{2}=(OA)^{2}+(OB)^{2}=8$, so $(AM)^{2}=2$, and $(OM)^{2}=(OA)^{2}-(AM)^{2}=4-2=2 \Longrightarrow h^{2}+k^{2}=2$. Therefore, the locus of $(h,k)$ is $x^{2}+y^{2}=2$.

Example 2: If the equation of one tangent to the circle with centre at $(2,-1)$ from the origin is $3x+y=0$, then the equation of the other tangent through the origin is (a) $3x-y=0$ (b) $x+3y=0$ (c) $x-3y=0$ (d) $x+2y=0$.

Solution 2: Since $3x+y=0$ touches the given circle, its radius equals the length of the perpendicular from the centre $(2,-1)$ to the line $3x+y=0$. That is, $r= |\frac{6-1}{\sqrt{9+1}}|=\frac{5}{\sqrt{10}}$. Let $y=mx$ be the equation of the other tangent to the circle from the origin. Then, $|\frac{2m+1}{\sqrt{1+m^{2}}}|=\frac{5}{\sqrt{10}} \Longrightarrow 10(2m+1)^{2}=25(1+m^{2}) \Longrightarrow 3m^{2}+8m-3=0$, which gives two values of m and hence, the slopes of two tangents from the origin, with the product of the slopes being -1. Since the slope of the given tangent is -3, that of the required tangent is 1/3, and hence, its equation is $x-3y=0$.

Example 3. A variable chord is drawn through the origin to the circle $x^{2}+y^{2}-2ax=0$. The locus of the centre of the circle drawn on this chord as diameter is (a) $x^{2}+y^{2}+ax=0$ (b) $x^{2}+y^{2}+ay=0$ (c) $x^{2}+y^{2}-ax=0$ (d) $x^{2}+ y^{2}-ay=0$.

Solution 3: Let $(h,k)$ be the centre of the required circle. Then, $(h,k)$ being the mid-point of the chord of the given circle, its equation is $hx+ky-a(x+h)=h^{2}+k^{2}-2ah$. Since it passes through the origin, we have $-ah=h^{2}+k^{2}-2ah \Longrightarrow h^{2}+k^{2}-ah=0$. Hence, locus of $(h,k)$ is $x^{2}+y^{2}-ax=0$.

Quiz problem: A line meets the coordinate axes in A and B. A circle is circumscribed about the triangle OAB. If m and n are the distances of the tangent to the circle at the origin from the points A and B respectively, the diameter of the circle is (a) $m(m+n)$ (b) $m+n$ (c) $n(m+n)$ (d) $(1/2)(m+n)$.

To be continued, Nalin Pithwa.

### Cartesian System, Straight Lines: IITJEE Mains: Problem Solving Skills II

I have a collection of some "random", yet what I call "beautiful" questions in Co-ordinate Geometry. I hope kids preparing for IITJEE Mains or KVPY or ISI Entrance Examination will also like them.

Problem 1: Given n straight lines and a fixed point O, a straight line is drawn through O meeting these lines in the points $R_{1}$, $R_{2}$, $R_{3}$, $\ldots$, $R_{n}$ and on it a point R is taken such that $\frac{n}{OR} = \frac{1}{OR_{1}} + \frac{1}{OR_{2}} + \frac{1}{OR_{3}} + \ldots + \frac{1}{OR_{n}}$

Show that the locus of R is a straight line.

Solution 1: Let equations of the given lines be $a_{i}x+b_{i}y+c_{i}=0$, $i=1,2,\ldots, n$, and the point O be the origin $(0,0)$. Then, the equation of the line through O can be written as $\frac{x}{\cos{\theta}} = \frac{y}{\sin{\theta}} = r$ where $\theta$ is the angle made by the line with the positive direction of x-axis and r is the distance of any point on the line from the origin O.
Let $r, r_{1}, r_{2}, \ldots, r_{n}$ be the distances of the points $R, R_{1}, R_{2}, \ldots, R_{n}$ from O which in turn $\Longrightarrow OR=r$ and $OR_{i}=r_{i}$, where $i=1,2,3 \ldots n$. Then, coordinates of R are $(r\cos{\theta}, r\sin{\theta})$ and of $R_{i}$ are $(r_{i}\cos{\theta},r_{i}\sin{\theta})$ where $i=1,2,3, \ldots, n$. Since $R_{i}$ lies on $a_{i}x+b_{i}y+c_{i}=0$, we can say $a_{i}r_{i}\cos{\theta}+b_{i}r_{i}\sin{\theta}+c_{i}=0$ for $i=1,2,3, \ldots, n$ $\Longrightarrow -\frac{a_{i}}{c_{i}}\cos{\theta} - \frac{b_{i}}{c_{i}}\sin{\theta} = \frac{1}{r_{i}}$, for $i=1,2,3, \ldots, n$ $\Longrightarrow \sum_{i=1}^{n}\frac{1}{r_{i}}=-(\sum_{i=1}^{n}\frac{a_{i}}{c_{i}})\cos{\theta}-(\sum_{i=1}^{n}\frac{b_{i}}{c_{i}})\sin{\theta}$ $\frac{n}{r}=-(\sum_{i=1}^{n}\frac{a_{i}}{c_{i}})\cos{\theta}-(\sum_{i=1}^{n}\frac{b_{i}}{c_{i}})\sin{\theta}$ …as given… $\Longrightarrow (\sum_{i=1}^{n}\frac{a_{i}}{c_{i}})r\cos{\theta}+(\sum_{i=1}^{n}\frac{b_{i}}{c_{i}})r\sin{\theta} + n=0$ Hence, the locus of R is $(\sum_{i=1}^{n}\frac{a_{i}}{c_{i}})x+(\sum_{i=1}^{n}\frac{b_{i}}{c_{i}})y+n=0$ which is a straight line. Problem 2: Determine all values of $\alpha$ for which the point $(\alpha,\alpha^{2})$ lies inside the triangle formed by the lines $2x+3y-1=0$, $x+2y-3=0$, $5x-6y-1=0$. Solution 2: Solving equations of the lines two at a time, we get the vertices of the given triangle as: $A(-7,5)$, $B(1/3,1/9)$ and $C(5/4, 7/8)$. So, AB is the line $2x+3y-1=0$, AC is the line $x+2y-3=0$ and BC is the line $5x-6y-1=0$ Let $P(\alpha,\alpha^{2})$ be a point inside the triangle ABC. (please do draw it on a sheet of paper, if u want to understand this solution further.) Since A and P lie on the same side of the line $5x-6y-1=0$, both $5(-7)-6(5)-1$ and $5\alpha-6\alpha^{2}-1$ must have the same sign. $\Longrightarrow 5\alpha-6\alpha^{2}-1<0$ or $6\alpha^{2}-5\alpha+1>0$ which in turn $\Longrightarrow (3\alpha-1)(2\alpha-1)>0$ which in turn $\Longrightarrow$ either $\alpha<1/3$ or $\alpha>1/2$….call this relation I. Again, since B and P lie on the same side of the line $x+2y-3=0$, $(1/3)+(2/9)-3$ and $\alpha+2\alpha^{2}-3$ have the same sign. $\Longrightarrow 2\alpha^{2}+\alpha-3<0$ and $\Longrightarrow (2\alpha+3)(\alpha-1)<0$, that is, $-3/2 <\alpha <1$…call this relation II. Lastly, since C and P lie on the same side of the line $2x+3y-1=0$, we have $2 \times (5/4) + 3 \times (7/8) -1$ and $2\alpha+3\alpha^{2}-1$ have the same sign. $\Longrightarrow 3\alpha^{2}+2\alpha-1>0$ that is $(3\alpha-1)(\alpha+1)>0$ $\alpha<-1$ or $\alpha>1/3$….call this relation III. Now, relations I, II and III hold simultaneously if $-3/2 < \alpha <-1$ or $1/2<\alpha<1$. Problem 3: A variable straight line of slope 4 intersects the hyperbola $xy=1$ at two points. Find the locus of the point which divides the line segment between these two points in the ratio $1:2$. Solution 3: Let equation of the line be $y=4x+c$ where c is a parameter. It intersects the hyperbola $xy=1$ at two points, for which $x(4x+c)=1$, that is, $\Longrightarrow 4x^{2}+cx-1=0$. Let $x_{1}$ and $x_{2}$ be the roots of the equation. Then, $x_{1}+x_{2}=-c/4$ and $x_{1}x_{2}=-1/4$. If A and B are the points of intersection of the line and the hyperbola, then the coordinates of A are $(x_{1}, \frac{1}{x_{1}})$ and that of B are $(x_{2}, \frac{1}{x_{2}})$. 
Let $R(h,k)$ be the point which divides AB in the ratio $1:2$, then $h=\frac{2x_{1}+x_{2}}{3}$ and $k=\frac{\frac{2}{x_{1}}+\frac{1}{x_{2}}}{3}=\frac{2x_{2}+x_{1}}{3x_{1}x_{2}}$, that is, $\Longrightarrow 2x_{1}+x_{2}=3h$…call this equation I. and $x_{1}+2x_{2}=3(-\frac{1}{4})k=(-\frac{3}{4})k$….call this equation II. Adding I and II, we get $3(x_{1}+x_{2})=3(h-\frac{k}{4})$, that is, $3(-\frac{c}{4})=3(h-\frac{k}{4}) \Longrightarrow (h-\frac{k}{4})=-\frac{c}{4}$….call this equation III. Subtracting II from I, we get $x_{1}-x_{2}=3(h+\frac{k}{4})$ $\Longrightarrow (x_{1}-x_{2})^{2}=9(h+\frac{k}{4})^{2}$ $\Longrightarrow \frac{c^{2}}{16} + 1= 9(h+\frac{k}{4})^{2}$ $\Longrightarrow (h-\frac{k}{4})^{2}+1=9(h+\frac{k}{4})^{2}$ $\Longrightarrow h^{2}-\frac{1}{2}hk+\frac{k^{2}}{16}+1=9(h^{2}+\frac{1}{2}hk+\frac{k^{2}}{16})$ $\Longrightarrow 16h^{2}+10hk+k^{2}-2=0$ so that the locus of $R(h,k)$ is $16x^{2}+10xy+y^{2}-2=0$ More later, Nalin Pithwa. ### Cartesian system and straight lines: IITJEE Mains: Problem solving skills Problem 1: The line joining $A(b\cos{\alpha},b\sin{\alpha})$ and $B(a\cos{\beta},a\sin{\beta})$ is produced to the point $M(x,y)$ so that $AM:MB=b:a$, then find the value of $x\cos{\frac{\alpha+\beta}{2}}+y\sin{\frac{\alpha+\beta}{2}}$. Solution 1: As M divides AB externally in the ratio $b:a$, we have $x=\frac{b(a\cos{\beta})-a(b\cos{\alpha})}{b-a}$ and $y=\frac{b(a\sin{\beta})-a(b\sin{\alpha})}{b-a}$ which in turn $\Longrightarrow \frac{x}{y} = \frac{\cos{\beta}-cos{\alpha}}{\sin{\beta}-\sin{\alpha}}$ $= \frac{2\sin{\frac{\alpha+\beta}{2}}\sin{\frac{\alpha-\beta}{2}}}{2\cos{\frac{\alpha+\beta}{2}}\sin{\frac{\beta-\alpha}{2}}}$ $\Longrightarrow x\cos{\frac{\alpha+\beta}{2}}+y\sin{\frac{\alpha+\beta}{2}}=0$ Problem 2: If the circumcentre of a triangle lies at the origin and the centroid in the middle point of the line joining the points $(a^{2}+1,a^{2}+1)$ and $(2a,-2a)$, then where does the orthocentre lie? Solution 2: From plane geometry, we know that the circumcentre, centroid and orthocentre of a triangle lie on a line. So, the orthocentre of the triangle lies on the line joining the circumcentre $(0,0)$ and the centroid $(\frac{(a+1)^{2}}{2},\frac{(a-1)^{2}}{2})$, that is, $y.\frac{(a+1)^{2}}{2} = x.\frac{(a-1)^{2}}{2}$, or $(a-1)^{2}x-(a+1)^{2}y=0$. That is, the orthocentre lies on this line. Problem 3: If a, b, c are unequal and different from 1 such that the points $(\frac{a^{3}}{a-1},\frac{a^{2}-3}{a-1})$, $(\frac{b^{3}}{b-1},\frac{b^{2}-3}{b-1})$ and $(\frac{c^{3}}{c-1},\frac{c^{2}-3}{c-1})$ are collinear, then which of the following option is true? a: $bc+ca+ab+abc=0$ b: $a+b+c=abc$ c: $bc+ca+ab=abc$ d: $bc+ca+ab-abc=3(a+b+c)$ Solution 3: Suppose the given points lie on the line $lx+my+n=0$ then a, b, c are the roots of the equation : $lt^{3}+m(t^{2}-3)+n(t-1)=0$, or $lt^{3}+mt^{2}+nt-(3m+n)=0$ $\Longrightarrow a+b+c=-\frac{m}{l}$ and $ab+bc+ca=\frac{n}{l}$, that is, $abc=(3m+n)/l$ Eliminating l, m, n, we get $abc=-3(a+b+c)+bc+ca+ab$ $\Longrightarrow bc+ca+ab-abc=3(a+b+c)$, that is, option (d) is the answer. Problem 4: If $p, x_{1}, x_{2}, \ldots, x_{i}, \ldots$ and $q, y_{1}, y_{2}, \ldots, y_{i}, \ldots$ are in A.P., with common difference a and b respectively, then on which line does the centre of mean position of the points $A_{i}(x_{i},y_{i})$ with $i=1,2,3 \ldots, n$ lie? Solution 4: Note: Centre of Mean Position is $(\frac{\sum{xi}}{n},\frac{\sum {yi}}{n})$. 
Let the coordinates of the centre of mean position of the points $A_{i}$, $i=1,2,3, \ldots,n$ be $(x,y)$ then $x=\frac{x_{1}+x_{2}+x_{3}+\ldots + x_{n}}{n}$ and $y=\frac{y_{1}+y_{2}+\ldots + y_{n}}{n}$ $\Longrightarrow x = \frac{np+a(1+2+\ldots+n)}{n}$, $y=\frac{nq+b(1+2+\ldots+n)}{n}$ $\Longrightarrow x=p+ \frac{n(n+1)}{2n}a$ and $y=q+ \frac{n(n+1)}{2n}b$ $\Longrightarrow x=p+\frac{n+1}{2}a$, and $y=q+\frac{n+1}{2}b$ $\Longrightarrow 2\frac{(x-p)}{a}=2\frac{(y-q)}{b} \Longrightarrow bx-ay=bp-aq$, that is, the CM lies on this line.

Problem 5: The line L has intercepts a and b on the coordinate axes. The coordinate axes are rotated through a fixed angle, keeping the origin fixed. If p and q are the intercepts of the line L on the new axes, then what is the value of $\frac{1}{a^{2}} - \frac{1}{p^{2}} + \frac{1}{b^{2}} - \frac{1}{q^{2}}$?

Solution 5: Equation of the line L in the two coordinate systems is $\frac{x}{a} + \frac{y}{b}=1$, and $\frac{X}{p} + \frac{Y}{q}=1$ where $(X,Y)$ are the new coordinates of a point $(x,y)$ when the axes are rotated through a fixed angle, keeping the origin fixed. As the length of the perpendicular from the origin has not changed, $\frac{1}{\sqrt{\frac{1}{a^{2}}+\frac{1}{b^{2}}}}=\frac{1}{\sqrt{\frac{1}{p^{2}} + \frac{1}{q^{2}}}}$ $\Longrightarrow \frac{1}{a^{2}} + \frac{1}{b^{2}} = \frac{1}{p^{2}} + \frac{1}{q^{2}}$ or $\frac{1}{a^{2}} - \frac{1}{p^{2}} + \frac{1}{b^{2}} - \frac{1}{q^{2}}=0$. So, the value is zero.

Problem 6: Let O be the origin, and let $A(1,0)$, $B(0,1)$ and $P(x,y)$ be points such that $xy>0$ and $x+y<1$; then which of the following options is true: a: P lies either inside the triangle OAB or in the third quadrant b: P cannot lie inside the triangle OAB c: P lies inside the triangle OAB d: P lies in the first quadrant only.

Solution 6: Since $xy>0$, P either lies in the first quadrant or in the third quadrant. The inequality $x+y<1$ represents all points below the line $x+y=1$. So that $xy>0$ and $x+y<1$ imply that either P lies inside the triangle OAB or in the third quadrant.

Problem 7: An equation of a line through the point $(1,2)$ whose distance from the point $A(3,1)$ has the greatest value is : option i: $y=2x$ option ii: $y=x+1$ option iii: $x+2y=5$ option iv: $y=3x-1$

Solution 7: Let the equation of the line through $(1,2)$ be $y-2=m(x-1)$. If p denotes the length of the perpendicular from $(3,1)$ on this line, then $p=|\frac{2m+1}{\sqrt{m^{2}+1}}|$ $\Longrightarrow p^{2}=\frac{4m^{2}+4m+1}{m^{2}+1}=4+ \frac{4m-3}{m^{2}+1}=s$, say; then $p^{2}$ is greatest if and only if s is greatest. Now, $\frac{ds}{dm} = \frac{(m^{2}+1)(4)-2m(4m-3)}{(m^{2}+1)^{2}} = \frac{-2(2m-1)(m-2)}{(m^{2}+1)^{2}}$ $\frac{ds}{dm} = 0 \Longrightarrow m = \frac{1}{2}, 2$. Also, $\frac{ds}{dm}<0$ if $m<\frac{1}{2}$, $\frac{ds}{dm}>0$ if $\frac{1}{2}<m<2$, and $\frac{ds}{dm}<0$ if $m>2$. So s is greatest for $m=2$. And, thus, the equation of the required line is $y=2x$.

Problem 8: The points $A(-4,-1)$, $B(-2,-4)$, $C(4,0)$ and $D(2,3)$ are the vertices of a : option a: parallelogram option b: rectangle option c: rhombus option d: square. Note: more than one option may be right. Please mark all that are right.

Solution 8: Mid-point of AC = $(\frac{-4+4}{2},\frac{-1+0}{2})=(0, \frac{-1}{2})$ Mid-point of BD = $(\frac{-2+2}{2},\frac{-4+3}{2})=(0,\frac{-1}{2})$ $\Longrightarrow$ the diagonals AC and BD bisect each other. $\Longrightarrow$ ABCD is a parallelogram.
Next, $AC= \sqrt{(-4-4)^{2}+(-1+0)^{2}}=\sqrt{64+1}=\sqrt{65}$ and $BD=\sqrt{(-2-2)^{2}+(-4-3)^{2}}=\sqrt{16+49}=\sqrt{65}$ and since the diagonals are also equal, it is a rectangle. As $AB=\sqrt{(-4+2)^{2}+(-1+4)^{2}}=\sqrt{13}$ and $BC=\sqrt{(-2-4)^{2}+(-4-0)^{2}}=\sqrt{36+16}=\sqrt{52}$, the adjacent sides are not equal and hence, it is neither a rhombus nor a square.

Problem 9: Equations $(b-c)x+(c-a)y+(a-b)=0$ and $(b^{3}-c^{3})x+(c^{3}-a^{3})y+a^{3}-b^{3}=0$ will represent the same line if option i: $b=c$ option ii: $c=a$ option iii: $a=b$ option iv: $a+b+c=0$

Solution 9: The two lines will be identical if there exists some real number k, such that $b^{3}-c^{3}=k(b-c)$, and $c^{3}-a^{3}=k(c-a)$, and $a^{3}-b^{3}=k(a-b)$. $\Longrightarrow b-c=0$ or $b^{2}+c^{2}+bc=k$ $\Longrightarrow c-a=0$ or $c^{2}+a^{2}+ac=k$, and $\Longrightarrow a-b=0$ or $a^{2}+b^{2}+ab=k$ That is, $b=c$ or $c=a$, or $a=b$. Next, $b^{2}+c^{2}+bc=c^{2}+a^{2}+ca \Longrightarrow b^{2}-a^{2}=c(a-b)$. Hence, $a=b$, or $a+b+c=0$.

Problem 10: The circumcentre of a triangle with vertices $A(a,a\tan{\alpha})$, $B(b, b\tan{\beta})$ and $C(c, c\tan{\gamma})$ lies at the origin, where $0<\alpha, \beta, \gamma < \frac{\pi}{2}$ and $\alpha + \beta + \gamma = \pi$. Show that its orthocentre lies on the line $4\cos{\frac{\alpha}{2}}\cos{\frac{\beta}{2}}\cos{\frac{\gamma}{2}}x-4\sin{\frac{\alpha}{2}}\sin{\frac{\beta}{2}}\sin{\frac{\gamma}{2}}y=y$

Solution 10: As the circumcentre of the triangle is at the origin O, we have $OA=OB=OC=r$, where r is the radius of the circumcircle. Hence, $OA^{2}=r^{2} \Longrightarrow a^{2}+a^{2}\tan^{2}{\alpha}=r^{2} \Longrightarrow a = r\cos{\alpha}$ Therefore, the coordinates of A are $(r\cos{\alpha},r\sin{\alpha})$. Similarly, the coordinates of B are $(r\cos{\beta},r\sin{\beta})$ and those of C are $(r\cos{\gamma},r\sin{\gamma})$. Thus, the coordinates of the centroid G of $\triangle ABC$ are $(\frac{1}{3}r(\cos{\alpha}+\cos{\beta}+\cos{\gamma}),\frac{1}{3}r(\sin{\alpha}+\sin{\beta}+\sin{\gamma}))$. Now, if $P(h,k)$ is the orthocentre of $\triangle ABC$, then from geometry, the circumcentre, centroid, and the orthocentre of a triangle lie on a line, and the slope of OG equals the slope of OP. Hence, $\frac{\sin{\alpha}+\sin{\beta}+\sin{\gamma}}{\cos{\alpha}+\cos{\beta}+\cos{\gamma}}=\frac{k}{h}$ $\Longrightarrow \frac{4\cos{(\frac{\alpha}{2})}\cos{(\frac{\beta}{2})}\cos{(\frac{\gamma}{2})}}{1+4\sin{(\frac{\alpha}{2})}\sin{(\frac{\beta}{2})}\sin{(\frac{\gamma}{2})}}= \frac{k}{h}$ because $\alpha+\beta+\gamma=\pi$. Hence, the orthocentre $P(h,k)$ lies on the line $4\cos{(\frac{\alpha}{2})}\cos{(\frac{\beta}{2})}\cos{(\frac{\gamma}{2})}x-4\sin{(\frac{\alpha}{2})}\sin{(\frac{\beta}{2})}\sin{(\frac{\gamma}{2})}y=y$.

Hope this gives an assorted flavour. More stuff later, Nalin Pithwa.

### IITJEE Foundation Math and PRMO (preRMO) practice: another random collection of questions

Problem 1: Find the value of $\frac{x+2a}{2b-x} + \frac{x-2a}{2a+x} + \frac{4ab}{x^{2}-4b^{2}}$ when $x=\frac{ab}{a+b}$

Problem 2: Reduce the following fraction to its lowest terms: $(\frac{1}{x} + \frac{1}{y} + \frac{1}{z}) \div (\frac{x+y+z}{x^{2}+y^{2}+z^{2}-xy-yz-zx} - \frac{1}{x+y+z})+1$

Problem 3: Simplify: $\sqrt[4]{97-56\sqrt{3}}$

Problem 4: If $a+b+c+d=2s$, prove that $4(ab+cd)^{2}-(a^{2}+b^{2}-c^{2}-d^{2})^{2}=16(s-a)(s-b)(s-c)(s-d)$

Problem 5: If a, b, c are in HP, show that $(\frac{3}{a} + \frac{3}{b} - \frac{2}{c})(\frac{3}{c} + \frac{3}{b} - \frac{2}{a})+\frac{9}{b^{2}}=\frac{25}{ac}$.

May u discover the joy of Math!
🙂 🙂 🙂 Nalin Pithwa.

### Solutions to Birthday Problems: IITJEE Advanced Mathematics

In the following problems, each year is assumed to consist of 365 days (no leap year):

Problem 1: What is the least number of people in a room such that it is more likely than not that at least two people will share the same birthday?

Solution 1: The probability of the second person having a different birthday from the first person is $\frac{364}{365}$. The probability of the first three persons having different birthdays is $\frac{364}{365} \times \frac{363}{365}$. In this way, the probability of all n persons in a room having different birthdays is $P(n) = \frac{364}{365} \times \frac{363}{365} \times \frac{362}{365} \times \ldots \frac{365-n+1}{365}$. The value of n at which P(n) first falls below 1/2 is the least number of people in a room for which the probability of at least two people having the same birthday becomes greater than one half (that is, more likely than not). Now, one can make the following table:

$\begin{tabular}{|c|c|}\hline N & P(n) \\ \hline 2 & 364/365 \\ \hline 3 & 0.9918 \\ \hline 4 & 0.9836 \\ \hline 5 & 0.9729 \\ \hline 6 & 0.9595 \\ \hline 7 & 0.9438 \\ \hline 8 & 0.9257 \\ \hline 9 & 0.9054 \\ \hline 10 & 0.8830 \\ \hline 11 & 0.8589 \\ \hline 12 & 0.8330 \\ \hline 13 & 0.8056 \\ \hline 14 & 0.7769 \\ \hline 15 & 0.7471 \\ \hline 16 & 0.7164 \\ \hline 17 & 0.6850 \\ \hline 18 &0.6531 \\ \hline 19 & 0.6209 \\ \hline 20 & 0.5886 \\ \hline 21 & 0.5563 \\ \hline 22 & 0.5258 \\ \hline 23 & 0.4956 \\ \hline \end{tabular}$

Thus, the answer is 23. One may say that during a football match with one referee, it is more likely than not that at least two people on the field have the same birthday! 🙂 🙂 🙂

Problem 2: You are in a conference. What is the least number of people in the conference (besides you) such that it is more likely than not that there is at least another person having the same birthday as yours?

Solution 2: The probability of the first person having a different birthday from yours is $\frac{364}{365}$. Similarly, the probability of the first two persons not having the same birthday as yours is $\frac{(364)^{2}}{(365)^{2}}$. Thus, the probability of n persons not having the same birthday as yours is $\frac{(364)^{n}}{(365)^{n}}$. When this value falls below 0.5, then it becomes more likely than not that at least another person has the same birthday as yours. So, the least value of n is obtained from $(\frac{364}{365})^{n}<\frac{1}{2}$. Taking log of both sides, we solve to get $n>252.65$. So, the least number of people required is 253.

Problem 3: A theatre owner announces that the first person in the queue having the same birthday as someone who has already purchased a ticket will be given a free entry. Where (which position in the queue) should one stand to maximize the chance of earning a free entry?

Solution 3: For the nth person to earn a free entry, the first $(n-1)$ persons must have different birthdays and the nth person must have the same birthday as that of one of these previous $(n-1)$ persons. The probability of such an event can be written as $P(n) = [\frac{364}{365} \times \frac{363}{365} \times \frac{362}{365} \times \ldots \frac{365-n+2}{365}] \times \frac{n-1}{365}$ For a maximum, we need $P(n) > P(n+1)$. Alternatively, $\frac{P(n)}{P(n+1)} >1$. Using this expression for P(n), we get $\frac{365}{366-n} \times \frac{n-1}{n} >1$. Or, $n^{2}-n-365>0$. For positive n, this inequality is satisfied first for some n between 19 and 20.
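These three answers are easy to check numerically. The short Python snippet below is my own illustrative check and is not part of the original post:

```python
from math import ceil, log

# Problem 1: smallest n with P(all n birthdays distinct) < 1/2.
p, n = 1.0, 1
while p >= 0.5:
    n += 1
    p *= (366 - n) / 365
print(n)                                    # 23

# Problem 2: smallest n with (364/365)**n < 1/2.
print(ceil(log(0.5) / log(364 / 365)))      # 253

# Problem 3: first n with n^2 - n - 365 > 0, i.e. the first n with P(n) > P(n+1).
print(next(n for n in range(2, 400) if n * n - n - 365 > 0))   # 20
```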
So, the best place in the queue to get a free entry is the 20th position. More later, Nalin Pithwa.

### Can anyone have fun with infinite series?

Below is a list of finitely many puzzles on infinite series to keep you a bit busy !! 🙂 Note that these puzzles do have an academic flavour, especially concepts of convergence and divergence of an infinite series.

Puzzle 1: A grandmother's vrat (fast) requires her to keep an odd number of lamps of finite capacity lit in a temple at any time during 6pm to 6am the next morning. Each oil-filled lamp lasts 1 hour and it burns oil at a constant rate. She is not allowed to light any lamp after 6pm but she can light any number of lamps before 6pm and transfer oil from some to the others throughout the night while keeping an odd number of lamps lit all the time. How many fully-filled oil lamps does she need to complete her vrat?

Puzzle 2: Two number theorists, bored in a chemistry lab, played a game with a large flask containing 2 liters of a colourful chemical solution and an ultra-accurate pipette. The game was that they would take turns to recall a prime number p such that $p+2$ is also a prime number. Then, the first number theorist would pipette out $\frac{1}{p}$ litres of chemical and the second $\frac{1}{(p+2)}$ litres. How many times do they have to play this game to empty the flask completely?

Puzzle 3: How far from the edge of a table can a deck of playing cards be stably overhung if the cards are stacked on top of one another? And, how many of them will be overhanging completely away from the edge of the table?

Puzzle 4: Imagine a tank that can be filled with infinite taps and can be emptied with infinite drains. The taps, turned on alone, can fill the empty tank to its full capacity in 1 hour, 3 hours, 5 hours, 7 hours and so on. Likewise, the drains opened alone, can drain a full tank in 2 hours, 4 hours, 6 hours, and so on. Assume that the taps and drains are sequentially arranged in the ascending order of their filling and emptying durations. Now, starting with an empty tank, plumber A alternately turns on a tap for 1 hour and opens a drain for 1 hour, all operations done one at a time in a sequence. His sequence, by using $t_{i}$ for $i^{th}$ tap and $d_{j}$ for $j^{th}$ drain, can be written as follows: $\{ t_{1}, d_{1}, t_{2}, d_{2}, \ldots\}_{A}$. When he finishes his operation, mathematically, after using all the infinite taps and drains, he notes that the tank is filled to a certain fraction, say, $n_{A}<1$. Then, plumber B turns one tap on for 1 hour and then opens two drains for 1 hour each and repeats his sequence: $\{ (t_{1},d_{1},d_{2}), (t_{2},d_{3},d_{4}), (t_{3},d_{5},d_{6}) \ldots \}_{B}$. At the end of his (B's) operation, he finds that the tank is filled to a fraction that is exactly half of what plumber A had filled, that is, $0.5n_{A}$. How is this possible even though both have turned on all taps for 1 hour and opened all drains for 1 hour, although in different sequences?

I hope u do have fun!!

-Nalin Pithwa.

### Lagrange's Mean Value Theorem and Cauchy's Generalized Mean Value Theorem

Lagrange's Mean Value Theorem: If a function $f(x)$ is continuous on the interval $[a,b]$ and differentiable at all interior points of the interval, there will be, within $[a,b]$, at least one point c, $a<c<b$, such that $f(b)-f(a)=f^{'}(c)(b-a)$.
Cauchy’s Generalized Mean Value Theorem: If $f(x)$ and $\phi(x)$ are two functions continuous on an interval $[a,b]$ and differentiable within it, and $\phi^{'}(x)$ does not vanish anywhere inside the interval, there will be, in $[a,b]$, a point $x=c$, $a<c<b$, such that $\frac{f(b)-f(a)}{\phi(b)-\phi(a)} = \frac{f^{'}(c)}{\phi^{'}(c)}$. Some questions based on the above: Problem 1: Form Lagrange’s formula for the function $y=\sin(x)$ on the interval $[x_{1},x_{2}]$. Problem 2: Verify the truth of Lagrange’s formula for the function $y=2x-x^{2}$ on the interval $[0,1]$. Problem 3: Applying Lagrange’s theorem, prove the inequalities: (i) $e^{x} \geq 1+x$; (ii) $\ln(1+x) < x$, for $x>0$; (iii) $b^{n}-a^{n} < nb^{n-1}(b-a)$, for $b>a$; (iv) $\arctan(x) < x$, for $x>0$. Problem 4: Write the Cauchy formula for the functions $f(x)=x^{2}$, $\phi(x)=x^{3}$ on the interval $[1,2]$ and find c. More churnings with calculus later! Nalin Pithwa. ### Some Applications of Derivatives — Part II Derivatives in Economics. Engineers use the terms velocity and acceleration to refer to the derivatives of functions describing motion. Economists, too, have a specialized vocabulary for rates of change and derivatives. They call them marginals. In a manufacturing operation, the cost of production c(x) is a function of x, the number of units produced. The marginal cost of production is the rate of change of cost (c) with respect to the level of production (x), so it is $dc/dx$. For example, let c(x) represent the dollars needed to produce x tons of steel in one week. It costs more to produce x+h units, and the cost difference, divided by h, is the average increase in cost per ton per week: $\frac{c(x+h)-c(x)}{h}=$ average increase in cost/ton/wk to produce the next h tons of steel. The limit of this ratio as $h \rightarrow 0$ is the marginal cost of producing more steel when the current production level is x tons: $\frac{dc}{dx}=\lim_{h \rightarrow 0} \frac{c(x+h)-c(x)}{h}=$ marginal cost of production. Sometimes, the marginal cost of production is loosely defined to be the extra cost of producing one unit: $\frac{\triangle {c}}{\triangle {x}}=\frac{c(x+1)-c(x)}{1}$, which is approximately the value of $dc/dx$ at x. To see why this is an acceptable approximation, observe that if the slope of c does not change quickly near x, then the difference quotient will be close to its limit, the derivative $dc/dx$, even if $\triangle {x}=1$. In practice, the approximation works best for large values of x. Example: Marginal Cost. Suppose it costs $c(x)=x^{3}-6x^{2}+15x$ dollars to produce x radiators when 8 to 30 radiators are produced, and your shop currently produces 10 radiators a day. About how much extra will it cost to produce one more radiator a day? Example: Marginal tax rate. To get some feel for the language of marginal rates, consider marginal tax rates. If your marginal income tax rate is 28% and your income increases by USD 1000, you can expect to have to pay an extra USD 280 in income taxes. This does not mean that you pay $28$ percent of your entire income in taxes. It just means that at your current income level I, the rate of increase of taxes T with respect to income is $dT/dI = 0.28$. You will pay USD 0.28 out of every extra dollar you earn in taxes. Of course, if you earn a lot more, you may land in a higher tax bracket and your marginal rate will increase.
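For the radiator example above, the marginal cost and the exact cost of the eleventh radiator can be computed directly; a small sympy sketch (not from the original post):

```python
import sympy as sp

x = sp.symbols('x')
c = x**3 - 6*x**2 + 15*x              # cost of producing x radiators per day

marginal = sp.diff(c, x)              # dc/dx = 3x^2 - 12x + 15
print(marginal.subs(x, 10))           # 195: marginal-cost estimate at x = 10
print(c.subs(x, 11) - c.subs(x, 10))  # 220: exact extra cost of the 11th radiator
```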
Example: Marginal revenue: If $r(x) = x^{3}-3x^{2}+12x$ gives the dollar revenue from selling x thousand candy bars, $5<= x<=20$, the marginal revenue when x thousand are sold is $r^{'}(x) = \frac{d}{dx}(x^{3}-3x^{2}+12x)=3x^{2}-6x+12$. As with marginal cost, the marginal revenue function estimates the increase in revenue that will result from selling one additional unit. If you currently sell 10 thousand candy bars a week, you can expect your revenue to increase by about $r^{'}(10) = 3(100) -6(10) +12=252$ USD, if you increase sales to 11 thousand bars a week. Choosing functions to illustrate economics. In case, you are wondering why economists use polynomials of low degree to illustrate complicated phenomena like cost and revenue, here is the rationale: while formulae for real phenomena are rarely available in any given instance, the theory of  economics can still provide valuable guidance. the functions about which theory speaks can often be illustrated with low degree polynomials on relevant intervals. Cubic polynomials provide a good balance between being easy to work with and being complicated enough to illustrate important points. Ref: Calculus and Analytic Geometry by G B Thomas. More later, Nalin Pithwa ### Could a one-sided limit not exist ? Here is basic concept of limit : ### Cyclic Fractions for IITJEE foundation maths Consider the expression $\frac{1}{(a-b)(a-c)}+\frac{1}{(b-c)(b-a)}+\frac{1}{(c-a)(c-b)}$ Here, in finding the LCM of the denominators, it must be observed that there are not six different compound factors to be considered; for, three of them differ from the other three only in sign. Thus, $(a-c) = -(c-a)$ $(b-a) = -(a-b)$ $(c-b) = -(b-c)$ Hence, replacing the second factor in each denominator by its equivalent, we may write the expression in the form $-\frac{1}{(a-b)(c-b)}-\frac{1}{(b-c)(a-b)}-\frac{1}{(c-a)(b-c)}$ call this expression 1 Now, the LCM is $(b-c)(c-a)(a-b)$ and the expression is $\frac{-(b-c)-(c-a)-(a-b)}{(b-c)(c-a)(a-b)}=0$., Some Remarks: There is a peculiarity in the arrangement of this example, which is desirable to notice. In the expression 1, the letters occur in what is known as cyclic order; that is, b follows a, a follows c, c follows b. Thus, if a, b, c are arranged round the circumference of a circle, if we may start from any letter and move round in the direction of  the arrows, the other letters follow in cyclic  order; namely, abc, bca, cab. The observance of this principle is especially important in a large class of examples in which the differences of three letters are involved. Thus, we are observing cyclic order when we write $b-c$, $c-a$, $a-b$, whereas we are violating order by the use of arrangements such as $b-c$, $a-c$, $a-b$, etc. It will always be found that the work is rendered shorter and easier by following cyclic order from the beginning, and adhering to it throughout the question. Homework: (1) Find the value of $\frac{a}{(a-b)(a-c)} + \frac{b}{(b-c)(b-a)} + \frac{c}{(c-a)(c-b)}$ 2) Find the value of $\frac{b}{(a-b)(a-c)} + \frac{c}{(b-c)(b-a)} + \frac{a}{(c-a)(c-b)}$ 3) Find the value of $\frac{z}{(x-y)(x-z)} + \frac{x}{(y-z)(y-x)} + \frac{y}{(z-x)(z-y)}$ 4) Find the value of $\frac{y+z}{(x-y)(x-z)} + \frac{z+x}{(y-z)(y-x)} + \frac{x+y}{(z-x)(z-y)}$ 5) Find the value of $\frac{b-c}{(a-b)(a-c)} + \frac{c-a}{(b-c)(b-a)} + \frac{a-b}{(c-a)(c-b)}$ More later, Nalin Pithwa
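The cyclic-fraction identity worked out above, along with a couple of the homework sums that also collapse to zero, can be verified mechanically. A short sympy check (my own addition), with the x, y, z of Homework 4 renamed to a, b, c:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

def cyclic_sum(n1, n2, n3):
    # n1/((a-b)(a-c)) + n2/((b-c)(b-a)) + n3/((c-a)(c-b))
    return (n1 / ((a - b) * (a - c))
            + n2 / ((b - c) * (b - a))
            + n3 / ((c - a) * (c - b)))

print(sp.cancel(cyclic_sum(1, 1, 1)))              # 0 (the worked example)
print(sp.cancel(cyclic_sum(a, b, c)))              # 0 (Homework 1)
print(sp.cancel(cyclic_sum(b + c, c + a, a + b)))  # 0 (Homework 4, renamed)
```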
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 368, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636600971221924, "perplexity": 325.0102314762307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155817.29/warc/CC-MAIN-20180919024323-20180919044323-00392.warc.gz"}
https://stephanegaiffas.github.io/publication/2017-01-01-sgd_beyond_erm
# SGD with Variance Reduction beyond Empirical Risk Minimization Published in International Conference of Monte Carlo Methods and Applications, 2017 ### M. Achab, A. Guilloux, S. Gaïffas and E. Bacry We introduce a doubly stochastic proximal gradient algorithm for optimizing a finite average of smooth convex functions, whose gradients depend on numerically expensive expectations. Indeed, the effectiveness of SGD-like algorithms relies on the assumption that the computation of a subfunction’s gradient is cheap compared to the computation of the total function’s gradient. This is true in the Empirical Risk Minimization (ERM) setting, but can be false when each subfunction depends on a sequence of examples. Our main motivation is the acceleration of the optimization of the regularized Cox partial-likelihood (the core model in survival analysis), but other settings can be considered as well. The proposed algorithm is doubly stochastic in the sense that gradient steps are done using stochastic gradient descent (SGD) with variance reduction, and the inner expectations are approximated by a Monte-Carlo Markov-Chain (MCMC) algorithm. We derive conditions on the MCMC number of iterations guaranteeing convergence, and obtain a linear rate of convergence under strong convexity and a sublinear rate without this assumption. We illustrate the fact that our algorithm improves the state-of-the-art solver for regularized Cox partial-likelihood on several datasets from survival analysis.
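As a rough illustration of the variance-reduction ingredient only — not the paper's doubly stochastic algorithm, and without the MCMC inner loop or the Cox partial likelihood — here is a minimal SVRG-style loop on a synthetic finite-sum least-squares problem (all names and parameters below are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic finite-sum problem: f(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    return (X[i] @ w - y[i]) * X[i]       # gradient of the i-th subfunction

def full_grad(w):
    return X.T @ (X @ w - y) / n          # gradient of the average

w = np.zeros(d)
step = 0.02                               # small constant step size
for epoch in range(30):                   # outer loop: take a snapshot
    w_snap = w.copy()
    g_snap = full_grad(w_snap)
    for _ in range(n):                    # inner loop: variance-reduced steps
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + g_snap
        w -= step * g

print(np.linalg.norm(w - w_true))         # should be small (near the noise floor)
```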
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650055766105652, "perplexity": 563.4331563351078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313936.42/warc/CC-MAIN-20190818145013-20190818171013-00398.warc.gz"}
http://math.stackexchange.com/questions/151335/number-of-permutations-with-a-certain-number-of-fixpoints/151343
# Number of permutations with a certain number of fixpoints Given a set of $n$ mutually distinct elements, how many permutations are there such that exactly $k$ of the permuted elements stay at the same place? ## Example Let's take the set $\{A,B,C,D\}$. The original permutation is $A,B,C,D$. There are $6$ permutations where exactly $2$ elements remain at their place: ABDC, ACBD, ADCB, BACD, CBAD, DBCA. I tried to find a solution by creating a recurrence relation, but I failed. Does anybody know a solution? first try to compute the number of elements in the group of permutations of $n$ elements that don't fix any element (i.e. the case where $k=0$). –  Louis La Brocante May 29 '12 at 22:47 This could be answered by the formula for derangements. Choose the points that are going to be fixed, then choose a derangement of the others. There are $\binom nk$ ways to choose the fixed points, and there are precisely $!(n-k)$ derangements of the rest (the "de Montmort" numbers are written $!n$, as opposed to $n!$ for the factorial, and count exactly the derangements), hence you are looking at $$\binom nk \times !(n-k).$$ http://en.wikipedia.org/wiki/Derangement A formula to compute $!n$ is given by $$!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}.$$
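A brute-force check of the accepted formula for small n (a quick sketch, not part of the original thread):

```python
from itertools import permutations
from math import comb, factorial

def derangements(m):
    # !m = m! * sum_{i=0}^m (-1)^i / i!
    return round(factorial(m) * sum((-1) ** i / factorial(i) for i in range(m + 1)))

def exact_fixed_points(n, k):
    return comb(n, k) * derangements(n - k)

# Compare against exhaustive enumeration for n = 4
n = 4
for k in range(n + 1):
    brute = sum(1 for p in permutations(range(n))
                if sum(p[i] == i for i in range(n)) == k)
    print(k, brute, exact_fixed_points(n, k))   # counts agree: 9, 8, 6, 0, 1
```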
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270280003547668, "perplexity": 218.12236029704687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008720.43/warc/CC-MAIN-20141125155648-00169-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/cylinders-on-inclines.155031/
# Cylinders on inclines 1. Feb 6, 2007 1. The problem statement, all variables and given/known data This is a sample exam problem. We have an exam coming up, and I think I better know how to do this problem(the right and quick way). I'm just copying and pasting the problem. 2. Two identical metal cylinders are stacked as shown in Figure 2. The weight of each cylinder is 70 lb and the diameter of each cylinder is 3 ft. Calculate all forces acting on cylinder A. View attachment sample exam.bmp 2. Relevant equations 3. The attempt at a solution I do not know how to do these type of problems. The circle screws me up for some reason. I drew a free body diagram, and a force down on each cylinder of (W). Also, cylinder A has two forces normal to the two planes. And, cylinder B also has a force normal to the surface. They both have a force on eachother Fab. I think I'd try to sum the forces in the x and in the y direction. Fx = Facos(60) + Fbcos(45) + Facos(70) Fy = Facos(30) + Fbcos(45) + Facos(20) + 70? I'm not sure if this is even right, nor am I sure what I would do next. I don't understand how to incorporate the diameter either. 2. Feb 7, 2007 The forces acting on cylinder A are two reaction forces from the 'ground' (i.e. normal forces), the force of gravity, and the reaction force from cylinder B. Once you have identified and named these, the rest shouldn't be a problem. 3. Feb 7, 2007 ### teknodude Look at both FBD of A and B seperately Last edited: Feb 7, 2007
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132405519485474, "perplexity": 699.1554464184195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720000.45/warc/CC-MAIN-20161020183840-00488-ip-10-171-6-4.ec2.internal.warc.gz"}
https://community.themcacademy.co.uk/course/index.php?categoryid=2&lang=en
This is the category for all administration documents and courses, student forms etc., rather than classes themselves.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9713212251663208, "perplexity": 4177.271404252402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00748.warc.gz"}
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Analytical_hierarchy.html
Analytical hierarchy In mathematical logic and descriptive set theory, the analytical hierarchy is an extension of the arithmetical hierarchy. The analytical hierarchy of formulas includes formulas in the language of second-order arithmetic, which can have quantifiers over both the set of natural numbers, $\mathbb{N}$, and over functions from $\mathbb{N}$ to $\mathbb{N}$. The analytical hierarchy of sets classifies sets by the formulas that can be used to define them; it is the lightface version of the projective hierarchy. The analytical hierarchy of formulas The notation $\Sigma^1_0 = \Pi^1_0 = \Delta^1_0$ indicates the class of formulas in the language of second-order arithmetic with no set quantifiers. This language does not contain set parameters. The Greek letters here are lightface symbols, which indicate this choice of language. Each corresponding boldface symbol denotes the corresponding class of formulas in the extended language with a parameter for each real; see projective hierarchy for details. A formula in the language of second-order arithmetic is defined to be $\Sigma^1_{n+1}$ if it is logically equivalent to a formula of the form $\exists X_1 \cdots \exists X_k \psi$ where $\psi$ is $\Pi^1_n$. A formula is defined to be $\Pi^1_{n+1}$ if it is logically equivalent to a formula of the form $\forall X_1 \cdots \forall X_k \psi$ where $\psi$ is $\Sigma^1_n$. This inductive definition defines the classes $\Sigma^1_n$ and $\Pi^1_n$ for every natural number $n$. Because every formula has a prenex normal form, every formula in the language of second-order arithmetic is $\Sigma^1_n$ or $\Pi^1_n$ for some $n$. Because meaningless quantifiers can be added to any formula, once a formula is given the classification $\Sigma^1_n$ or $\Pi^1_n$ for some $n$ it will be given the classifications $\Sigma^1_m$ and $\Pi^1_m$ for all $m$ greater than $n$. The analytical hierarchy of sets of natural numbers A set of natural numbers is assigned the classification $\Sigma^1_n$ if it is definable by a $\Sigma^1_n$ formula. The set is assigned the classification $\Pi^1_n$ if it is definable by a $\Pi^1_n$ formula. If the set is both $\Sigma^1_n$ and $\Pi^1_n$ then it is given the additional classification $\Delta^1_n$. The $\Delta^1_1$ sets are called hyperarithmetical. An alternate classification of these sets by way of iterated computable functionals is provided by hyperarithmetical theory. The analytical hierarchy on subsets of Cantor and Baire space The analytical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor and Baire space because they fit with the language of ordinary second-order arithmetic. Cantor space is the set of all infinite sequences of 0s and 1s; Baire space is the set of all infinite sequences of natural numbers. These are both Polish spaces. The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification $\Sigma^1_n$ if it is definable by a $\Sigma^1_n$ formula. The set is assigned the classification $\Pi^1_n$ if it is definable by a $\Pi^1_n$ formula. If the set is both $\Sigma^1_n$ and $\Pi^1_n$ then it is given the additional classification $\Delta^1_n$. A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from $\mathbb{N}$ to $\mathbb{N}$ to the characteristic function of its graph. A subset of Baire space is given the classification $\Sigma^1_n$, $\Pi^1_n$, or $\Delta^1_n$ if and only if the corresponding subset of Cantor space has the same classification. An equivalent definition of the analytical hierarchy on Baire space is given by defining the analytical hierarchy of formulas using a functional version of second-order arithmetic; then the analytical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition.
Because Cantor space is homeomorphic to any finite Cartesian power of itself, and Baire space is homeomorphic to any finite Cartesian power of itself, the analytical hierarchy applies equally well to finite Cartesian power of one of these spaces. A similar extension is possible for countable powers and to products of powers of Cantor space and powers of Baire space. Extensions As is the case with the arithmetical hierarchy, a relativized version of the analytical hierarchy can be defined. The language is extended to add a constant set symbol A. A formula in the extended language is inductively defined to be or using the same inductive definition as above. Given a set , a set is defined to be if it is definable by a formula in which the symbol is interpreted as ; similar definitions for and apply. The sets that are or , for any parameter Y, are classified in the projective hierarchy. Examples • The set of all natural numbers which are indices of computable ordinals is a set which is not . • The set of elements of Cantor space which are the characteristic functions of well orderings of is a set which is not . In fact, this set is not for any element of Baire space. • If the axiom of constructibility holds then there is a subset of the product of the Baire space with itself which is and is the graph of a well ordering of Baire space. If the axiom holds then there is also a well ordering of Cantor space. Properties For each we have the following strict containments: , , , . A set that is in for some n is said to be analytical. Care is required to distinguish this usage from the term analytic set which has a different meaning. Table Lightface Boldface Σ00 = Π00 = Δ00 (sometimes the same as Δ01) Σ00 = Π00 = Δ00 (if defined) Δ01 = recursive Δ01 = clopen Σ01 = recursively enumerable Π01 = co-recursively enumerable Σ01 = G = open Π01 = F = closed Δ02 Δ02 Σ02 Π02 Σ02 = Fσ Π02 = Gδ Δ03 Δ03 Σ03 Π03 Σ03 = Gδσ Π03 = Fσδ ... ... Σ0<ω = Π0<ω = Δ0<ω = Σ10 = Π10 = Δ10 = arithmetical Σ0<ω = Π0<ω = Δ0<ω = Σ10 = Π10 = Δ10 = boldface arithmetical ... ... Δ0α (α recursive) Δ0α (α countable) Σ0α Π0α Σ0α Π0α ... ... Σ0ωCK1 = Π0ωCK1 = Δ0ωCK1 = Δ11 = hyperarithmetical Σ0ω1 = Π0ω1 = Δ0ω1 = Δ11 = B = Borel Σ11 = lightface analytic Π11 = lightface coanalytic Σ11 = A = analytic Π11 = CA = coanalytic Δ12 Δ12 Σ12 Π12 Σ12 = PCA Π12 = CPCA Δ13 Δ13 Σ13 Π13 Σ13 = PCPCA Π13 = CPCPCA ... ... Σ1<ω = Π1<ω = Δ1<ω = Σ20 = Π20 = Δ20 = analytical Σ1<ω = Π1<ω = Δ1<ω = Σ20 = Π20 = Δ20 = P = projective ... ... References • Rogers, H. (1967). Theory of recursive functions and effective computability. McGraw-Hill. • Kechris, A. (1995). Classical Descriptive Set Theory (Graduate Texts in Mathematics 156 ed.). Springer. ISBN 0-387-94374-9.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797564744949341, "perplexity": 674.2745002961244}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00621.warc.gz"}
http://eprints3.math.sci.hokudai.ac.jp/431/
# Navier-Stokes Equations in a Rotating Frame in ${\mathbb R}^3$ with Initial Data Nondecreasing at Infinity Preprint Series # 664 Giga, Yoshikazu and Inui, Katsuya and Mahalov, Alex and Matsui, Shin'ya Navier-Stokes Equations in a Rotating Frame in ${\mathbb R}^3$ with Initial Data Nondecreasing at Infinity. (2004); PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader228Kb ## Abstract Three-dimensional rotating Navier-Stokes equations are considered with a constant Coriolis parameter $\Omega$ and initial data nondecreasing at infinity. In contrast to the non-rotating case ($\Omega=0$), it is shown for the problem with rotation ($\Omega \neq 0$) that Green's function corresponding to the linear problem (Stokes + Coriolis combined operator) does not belong to $L^1({\mathbb R}^3)$. Moreover, the corresponding integral operator is unbounded in the space $L^{\infty}_{\sigma}({\mathbb R}^3)$ of solenoidal vector fields in ${\mathbb R}^3$ and the linear (Stokes+Coriolis) combined operator does not generate a semigroup in $L^{\infty}_{\sigma}({\mathbb R}^3)$. Local in time, uniform in $\Omega$ unique solvability of the rotating Navier-Stokes equations is proven for initial velocity fields in the space $L^{\infty}_{\sigma,a}({\mathbb R}^3)$ which consists of $L^{\infty}$ solenoidal vector fields satisfying vertical averaging property such that their baroclinic component belongs to a homogeneous Besov space ${\dot B}_{\infty,1}^0$ which is smaller than $L^\infty$ but still contains various periodic and almost periodic functions. This restriction of initial data to $L^{\infty}_{\sigma,a}({\mathbb R}^3)$ which is a subspace of $L^{\infty}_{\sigma}({\mathbb R}^3)$ is essential for the combined linear operator (Stokes + Coriolis) to generate a semigroup. The proof of uniform in $\Omega$ local in time unique solvability requires detailed study of the symbol of this semigroup and obtaining uniform in $\Omega$ estimates of the corresponding operator norms in Banach spaces. Using the rotation transformation, we also obtain local in time, uniform in $\Omega$ solvability of the classical 3D Navier-Stokes equations in ${\mathbb R}^3$ with initial velocity and vorticity of the form $\mbox{\bf{V}}(0)=\tilde{\mbox{\bf{V}}}_0(y) + \frac{\Omega}{2} e_3 \times y$, $\mbox{curl} \mbox{\bf{V}}(0)=\mbox{curl} \tilde{\mbox{\bf{V}}}_0(y) + \Omega e_3$ where $\tilde{\mbox{\bf{V}}}_0(y) \in L^{\infty}_{\sigma,a}({\mathbb R}^3)$. Item Type: Preprint 60 copies needed Rotating Navier-Stokes equations, nondecreasing initial data, homogeneous Besov spaces, Riesz operators. 35-xx PARTIAL DIFFERENTIAL EQUATIONS 431
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993682861328125, "perplexity": 506.4059216111334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806543.24/warc/CC-MAIN-20171122084446-20171122104446-00691.warc.gz"}
http://kiranruth.blogspot.com/2010/11/what-idea-sir-ji.html
## Sunday, November 14, 2010 ### What an idea sir-ji ? At times I wonder how an idea takes shape. An idea can never be completely owned by one person. A complete idea is something that is partly owned by the circumstances the idea dawned in. It baffles me how some think they completely own an idea. An organization revolves around one or more ideas, and the people who actually run organizations are mere catalysts that hasten the clarity of an idea. Everyone can make a choice, and the choice is simple: choose to pursue the idea or rubbish it. Clarity of an idea rests on the beholder of the idea! An idea grows on itself; at times it branches out and roots so deep that it becomes complicated. At this point the idea exists by itself, and the beholder merely becomes the seed. I call it the disposition pivot. At this disposition pivot, the beholder is no more than a catalyst. Human instinct and circumstance decide the positivity or negativity of the catalyst. Once an idea has taken shape, it defines who you are. It defines who you will be. I wait for an idea. An idea that will allow me to prove my existence, an idea that is not required to make a dent in the universe but should at least scratch it. If an idea could solve the universal puzzle, then the meaning of the bigger truth - the original idea - would be clear. We exist in an idea that defines us; we are responsible for creating its infinite chains and branches. The purpose of an idea is more important than the idea itself. The idea of this post? ......No idea :D
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438509106636047, "perplexity": 1458.1528403399602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266597.23/warc/CC-MAIN-20140728011746-00370-ip-10-146-231-18.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/accompanying-diagram-shows-2-input-gate-left-followed-2-input-gate-right-followed-inverter-q2111051
The accompanying diagram above shows a 2-input OR gate on the left, followed by a 2-input AND gate, followed by an Inverter (NOT gate) on the right of the diagram. The OR gate has one input labeled "A" and the second input is shown connected to ground (GND). The output of the OR gate goes to the first input of the AND gate. An input labeled "B" goes to the second input of the AND gate. The output of the AND gate goes to the input of the NOT gate, and the output of the NOT gate is the output of the circuit. Redraw the circuit using a single 2-input gate.
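Since OR(A, 0) = A, the chain reduces to NOT(A AND B), which suggests the single 2-input gate is a NAND. A tiny truth-table check (my own sketch) confirms the equivalence:

```python
def original_circuit(a, b):
    or_out = a or False          # second OR input tied to ground (logic 0)
    and_out = or_out and b       # 2-input AND
    return not and_out           # inverter on the AND output

def nand(a, b):
    return not (a and b)

for a in (False, True):
    for b in (False, True):
        assert original_circuit(a, b) == nand(a, b)
print("Equivalent: the circuit reduces to a single 2-input NAND gate.")
```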
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374080657958984, "perplexity": 572.411582926087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00155-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/question-about-spherically-symmetric-charged-objects.336066/
Question about spherically symmetric charged objects 1. Sep 9, 2009 simpleton Hi, I would like to ask a question about spherically symmetric charged objects. My teacher told me that you can treat spherically symmetric charged objects as point charges. However, my teacher did not prove it. I guess you have to integrate every small volume on the spherically symmetric charge and find its contribution to the force, but I am not sure how to do that. Therefore, does anyone know how to prove that, given a spherically symmetric object of radius R and charge density rho, if I place a test charge q at x metres from the centre of the charge (with x > R), the force experienced by the test charge is k*(4/3*pi*R^3*rho)*q/x^2, where k is 1/(4*pi*epsilon-nought)? 2. Sep 10, 2009 gabbagabbahey Assuming you are only interested in points outside the sphere, then your teacher is correct. To show it, just use Gauss' Law... can you think of a Gaussian surface that will allow you to pull |E| outside of the integral? 3. Sep 10, 2009 simpleton Oh right! I think you mean integral(E dA) = Q/epsilon-nought. If my imaginary surface is a sphere, then I can use symmetry arguments to say that E is constant, so I can pull E out: E*integral(dA) = Q/epsilon-nought. If I take the imaginary sphere to have radius x and its centre at the origin, then the area of the imaginary surface is 4*pi*x^2. Thus: E*4*pi*x^2 = Q/epsilon-nought, so E = Q/(4*pi*epsilon-nought*x^2) = k*Q/(x^2). So the force on a test charge q is k*Q*q/(x^2). Thanks a lot! :) 4. Sep 10, 2009 Bob S Is your spherically symmetric object a dielectric or a conductor? Is the charge density a surface charge density or a volume charge density? 5. Sep 10, 2009 simpleton If I understand correctly, a dielectric is an insulator, and yes, the object I am talking about is an insulator, because if it were a conductor, all the charges would accumulate on the surface. EDIT: Can I know whether I should post such questions on this forum or on the homework forum? I have many more such questions.
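The shell-theorem result derived in the thread can also be checked by brute force: sum the Coulomb contributions of a uniformly charged ball at an exterior point and compare with kQ/x². A rough numerical sketch (my own choice of R, Q and grid; the coarse grid means only approximate agreement is expected):

```python
import numpy as np

k = 8.9875517873681764e9    # Coulomb constant, N m^2 / C^2
R, Q = 1.0, 1e-6            # sphere radius (m) and total charge (C)
x = 3.0                     # field point on the z-axis, outside the sphere
rho = Q / (4 / 3 * np.pi * R**3)

# Integrate over the ball in spherical coordinates; only the z-component
# survives, the transverse components cancel by symmetry.
r = np.linspace(0, R, 120)[1:]            # avoid r = 0
t = np.linspace(0, np.pi, 120)
rr, tt = np.meshgrid(r, t, indexing="ij")
dr, dt = r[1] - r[0], t[1] - t[0]
dV = rr**2 * np.sin(tt) * dr * dt * 2 * np.pi     # phi already integrated out
dist2 = rr**2 + x**2 - 2 * rr * x * np.cos(tt)
Ez = np.sum(k * rho * dV * (x - rr * np.cos(tt)) / dist2**1.5)

print(Ez)                   # brute-force sum
print(k * Q / x**2)         # Gauss's-law (point-charge) prediction
```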
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056621193885803, "perplexity": 479.49286411796203}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00118.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-7-exponents-and-exponential-functions-7-2-scientific-notation-practice-and-problem-solving-exercises-page-424/18
## Algebra 1 A number in scientific notation is written as the product of two factors in the form $a×10^n$, where $n$ is an integer and $1\leq |a| <10$. The number $9.54\times10^{15}$ meets both of these criteria, as $9.54$ is between $1$ and $10$, and $15$ is an integer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9938827753067017, "perplexity": 47.53576955064734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865023.41/warc/CC-MAIN-20180523004548-20180523024548-00148.warc.gz"}
http://mathhelpforum.com/differential-equations/194955-solving-dy-dx-e-x-y-print.html
# Solving dy/dx = e^(x+y) • Jan 5th 2012, 04:14 PM M.R Solving dy/dx = e^(x+y) Hi, I am trying to solve the following differential equation: $\frac {dy}{dx} = e^{x+y}$ Now: $\frac {dy}{dx} = e^x e^y$ $\frac {1}{e^y} dy = e^x dx$ $\int \frac {1}{e^y} dy = \int e^x dx$ $-e^{-y} = e^x + C$ $\ln(e^{-y^{-1}}) = \ln(e^x + C)$ $\frac{-1}{y} = x + \ln(C)$ $y = \frac {1}{-x - \ln(C)}$ But the answer in the book is shown as: $e^{x+y}+Ce^y+1=0$ Where am I going wrong? • Jan 5th 2012, 04:22 PM pickslides Re: Differential equation Maybe you are correct, at this step $-e^{-y} = e^x + C$ , multiply both sides through by $e^{y}$ What do you get? • Jan 5th 2012, 04:24 PM ILikeSerena Re: Differential equation Quote: Originally Posted by M.R $\ln(e^{-y^{-1}}) = \ln(e^x + C)$ $\frac{-1}{y} = x + \ln(C)$ This is not right. ln(ab) = ln(a) + ln(b) ln(a + b) ≠ ln(a) + ln(b) Quote: Originally Posted by M.R $-e^{-y} = e^x + C$ What do you get if you multiply left and right with $e^y$?
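A numerical sanity check of the book's implicit answer: integrate dy/dx = e^(x+y) from an initial condition, fix C from that condition, and confirm that e^(x+y) + C·e^y + 1 stays near zero along the computed solution. A sketch using scipy; the initial condition below is arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp

x0, y0 = 0.0, -1.0
C = -np.exp(x0) - np.exp(-y0)        # from e^(x+y) + C*e^y + 1 = 0 at (x0, y0)

sol = solve_ivp(lambda x, y: [np.exp(x + y[0])], (x0, 0.5), [y0],
                dense_output=True, rtol=1e-9, atol=1e-12)

xs = np.linspace(x0, 0.5, 6)
ys = sol.sol(xs)[0]
residual = np.exp(xs + ys) + C * np.exp(ys) + 1
print(residual)                      # should stay near zero along the solution
```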
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618055820465088, "perplexity": 3451.0883327069973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806708.81/warc/CC-MAIN-20171122233044-20171123013044-00138.warc.gz"}
http://cdsweb.cern.ch/collection/ATLAS%20Theses?ln=ru&as=1
ATLAS Theses Последние добавления: 2019-09-05 14:32 Precizno merenje narusenja $CP$ simetrije u raspadu $B_s^0→J/ψϕ$ u ATLAS eksperimentu / Agatonovic-Jovin, Tatjana The violation of charge-parity $(CP)$ symmetry is one of the most interesting phenomena and the greatest puzzles of modern particle physics [...] CERN-THESIS-2016-430 - 200 p. Full text - Full text - Full text 2019-09-04 00:31 Results of the 2018 ATLAS sTGC Test Beam and Internal Strip Alignment of sTGC Detectors / Carlson, Evan Michael Over the course of the next ten years, the LHC will undergo upgrades that will more than triple its current luminosity [...] CERN-THESIS-2019-110 - Full text 2019-08-29 18:06 Observation of the electroweak production of two $W$ bosons with the same electric charge in association with two jets in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector / Duffield, Emily Marie This dissertation presents the observation of $W^{\pm}W^{\pm}$ electroweak production in proton-proton collisions with a center-of-mass energy of 13 TeV at the Large Hadron Collider using the ATLAS detector [...] CERN-THESIS-2019-109 - Full text 2019-08-27 15:52 Test beam studies of pixel detector prototypes for the ATLAS-Experiment at the High Luminosity Large Hadron Collider / Bisanz, Tobias The upgrade of the Large Hadron Collider (LHC) in the mid-2020’s to the High Luminosity Large Hadron Collider will provide large amounts of data, enabling precision measurements of Standard Model processes and searches for new physics [...] CERN-THESIS-2018-444 II.Physik-UniGö-Diss-2018/03. - Göttingen : SUB Georg-August-Universität Göttingen, 2019-06-26. Full text 2019-08-23 21:51 Calibration Studies of the Front-End Electronics for the ATLAS New Small Wheel Project / Chen, Bohan To continue to probe new avenues of physics, the Large Hadron Collider (LHC) will see a series of upgrades starting in 2019 that will see the luminosity surpass the design specifications [...] CERN-THESIS-2018-443 - 100 p. Full text 2019-08-20 00:04 Utilizing Electrons in the Search for Associated Higgs Boson Production with the ATLAS Detector: Higgs Decaying to a Tau Pair and Vector Boson Decaying Leptonically / Thais, Savannah Jennifer The Higgs boson was discovered by the ATLAS and CMS collaborations in 2012 using data from $\sqrt{s}$=8TeV proton-proton collisions at the LHC [...] CERN-THESIS-2019-105 - 205 p. Full text - Full text 2019-08-08 20:09 Search for long-lived particles decaying to oppositely charged lepton pairs with the ATLAS experiment at $\sqrt{s} =$ 13 TeV / Krauss, Dominik In this thesis, a search for new long-lived, massive particles decaying into oppositely charged $ee$, $e\mu$ or $\mu\mu$ pairs with the ATLAS experiment at the Large Hadron Collider is presented [...] CERN-THESIS-2019-101 - 182 p. Full text 2019-08-06 21:38 Studies of the Higgs Boson using the $H\rightarrow ZZ \rightarrow 4l$ decay channel with the ATLAS detector at the LHC / Garay Walls, Francisca CERN-THESIS-2016-429 - Full text 2019-08-04 04:29 Searching for Supersymmetry in Fully Hadronic Final States with the ATLAS Experiment / Olsson, Mats Joakim Robert The Large Hadron Collider (LHC) and its experiments were built to explore fundamental questions of particle physics via proton-proton collisions at unprecedented center-of-mass energies, thus providing a unique environment for testing the Standard Model (SM) at the electroweak scale and searching fo [...] CERN-THESIS-2018-440 - 256 p. 
Full text 2019-07-26 15:39 Targeting Natural Supersymmetry with Top Quarks / Herwig, Theodor Christian This thesis describes a search for natural supersymmetry via the production of light top squarks (stops) with the ATLAS experiment, using 13 TeV proton-proton collision data delivered by the Large Hadron Collider [...] CERN-THESIS-2019-092 - 305 p. Fulltext
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942190647125244, "perplexity": 3361.6418383866007}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573011.59/warc/CC-MAIN-20190917020816-20190917042816-00249.warc.gz"}
http://mathoverflow.net/questions/28992/a-question-about-iwasawa-theory?sort=oldest
# A question about Iwasawa Theory I am just reading about Iwasawa theory about Coates and Sujatha's book on Iwasawa Theory. I was wondering that since Iwasawa thought about the whole theory from the analogy of curves over finite fields, so what should be the analog of the module $U_\infty$/$C_\infty$ in the curve case (if there is any) where $U_\infty$ is the inverse system of local units and $C_\infty$ is the cyclotomic units. - An interview with Iwasawa: math.washington.edu/~greenber/IwInt.html –  Thomas Riepe Jun 21 '10 at 20:46 Ok that changes some things. But I still will like to know if there is an analogy of the above module in the curve case –  Arijit Jun 21 '10 at 21:55 A historial article and a book by Greenberg: math.washington.edu/~greenber/iwhi.ps math.washington.edu/~greenber/book.pdf –  Thomas Riepe Jun 28 '10 at 12:16 Thanks a lot, Thomas. –  Arijit Jun 28 '10 at 16:56 There is a very close analogy but to unravel it requires some work. So take $X$ a smooth curve over $\mathbb F_{\ell}$ (more generally you could take $X$ a scheme over $\mathbb F_{\ell}$) and let $\mathscr F$ be a smooth sheaf of $\mathbb Q_{p}$-vector spaces on $X$ (you could be much more general in your choice of coefficient ring, and indeed, I think you might need to consider more general coefficient rings in order to really grasp the analogy, but that will do for the moment). Moreover, we will assume for simplicity that $\mathscr F$ comes from a motive over $X$, a sentence which will remain vague but aims to convey the idea that $\mathscr F$ has geometric origin. Then the cohomology complex $R\Gamma(X,\mathscr F)$ is a perfect complex so it has a determinant $D$. This complex fits in a exact triangle $$R\Gamma(X,\mathscr F)\rightarrow R\Gamma(X\otimes\bar{\mathbb F}_{\ell},\mathscr F)\rightarrow R\Gamma(X\otimes\bar{\mathbb F}_{\ell},\mathscr F)$$ Here the very important fact to understand in order to grasp the analogy is that the second arrow is given by $Fr(\ell)-1$. This exact triangle induces an isomorphism $f$ of $D$ with $\mathbb Q_{p}$. There is conjecturally another such isomorphism. Assume that the action of the Frobenius $Fr(\ell)$ acts semi-simply on $H^{i}(\bar{X},\mathscr F)$ for all $i$ (this is widely believed under our hypothesis on $\mathscr F$). Then degeneracy of a the spectral sequence $H^{i}(\mathbb F_{\ell},H^{j}(X\otimes\bar{\mathbb F}_{\ell},\mathscr F))$ gives an isomorphism $g$ between $D$ and $\mathbb Q_{p}$. Now, consider $gf^{-1}(1)$. This happens to be the residue at 1 of the zeta function of $X$. What has all this to do with units in number fields? Change setting a bit and take $X_{n}=\operatorname{Spec}\mathbb Z[1/p,\zeta_{p^{n}}]$ and $\mathscr F=\mathbb Z(1)$. We would like to carry the same procedure as above but we can't, because we are lacking crucially the exact triangle involved in the definition of the isomorphism $f$. Nonetheless, there is a significantly more sophisticate way to construct a suitable $f_{n}$ for all $n$ and it turns out that this construction will crucially involve $U_{\infty}/C_{\infty}$. So to sum up, the analog of $U_{\infty}/C_{\infty}$ for curves over finite fields is none other than the Frobenius morphism $Fr(\ell)-1$. You may know that the cyclotomic units form an Euler system, that is to say that they satisfy relations involving corestriction and the characteristic polynomial of the Frobenius morphisms. This fact is I believe what led K.Kato to describe the analogy above. 
You can read about all this in much much greater details in the contribution of Kato in the volume Arithmetic Algebraic Geometry (Springer Lecture Notes 1553). - The link to Kato's lecture etc.: mathoverflow.net/questions/6928/how-do-we-study-iwasawa-theory/… –  Thomas Riepe Jun 22 '10 at 10:40 I guess I dont have enough background to read that book. So I dont know whether its a relevant question or not. So what is the analog of Coleman's power series map in the case that you explained. I guess a better but a vague question: what is the correct way to look at the Coleman map? –  Arijit Jun 22 '10 at 16:17 I think this is stretching it, but the Coleman map is the collections of the fn, so in a sense the analog of the Coleman map in the geometric case is the isomorphism f. What is the correct way to look at the Coleman map is an excellent question, which admits a precise albeit technical question: the Coleman map is an instance of the so-called epsilon morphism. You could read for instance Fukaya-Kato on this. It is natural to feel intimated by the level of Fukaya-Kato or Kato's lecture, but you should give it a try once in a while: you will learn a lot from them. –  Olivier Jun 22 '10 at 17:12 Thanks a lot Olivier. –  Arijit Jun 22 '10 at 23:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905415415763855, "perplexity": 244.38275323714157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928102.74/warc/CC-MAIN-20150521113208-00081-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=2005&number=1058
WIAS Preprint No. 1058, (2005) Approximate approximations from scattered data Authors • Lanzara, Flavia • Schmidt, Gunther 2010 Mathematics Subject Classification • 41A30 65D15 41A63 41A25 Keywords • scattered data quasi-interpolation, cubature of integral operators, multivariate approximation, error estimates DOI 10.20347/WIAS.PREPRINT.1058 Abstract The aim of this paper is to extend the approximate quasi-interpolation on a uniform grid by dilated shifts of a smooth and rapidly decaying function on a uniform grid to scattered data quasi-interpolation. It is shown that high order approximation of smooth functions up to some prescribed accuracy is possible, if the basis functions, which are centered at the scattered nodes, are multiplied by suitable polynomials such that their sum is an approximate partition of unity. For Gaussian functions we propose a method to construct the approximate partition of unity and describe the application of the new quasi-interpolation approach to the cubature of multi-dimensional integral operators. Appeared in • J. Approx. Theory, 145 (2007), pp. 141--170
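A minimal 1-D illustration of the uniform-grid "approximate approximation" that the paper extends to scattered nodes, in the classical Gaussian case: the quasi-interpolant $M_h u(x) = (\pi D)^{-1/2} \sum_m u(mh)\, e^{-(x-mh)^2/(D h^2)}$. The toy sketch below is my own example, not the paper's scattered-data construction; the error decreases with h until it reaches the saturation level controlled by D:

```python
import numpy as np

def quasi_interp(u, h, D, x):
    """1-D Gaussian quasi-interpolant on the (truncated) uniform grid {m*h}."""
    m = np.arange(-200, 201)
    nodes = m * h
    w = np.exp(-((x[:, None] - nodes[None, :]) ** 2) / (D * h**2))
    return (np.pi * D) ** -0.5 * w @ u(nodes)

u = lambda t: np.exp(-t**2) * np.cos(3 * t)      # a smooth, rapidly decaying test function
x = np.linspace(-1, 1, 201)

for h in (0.2, 0.1, 0.05):
    err = np.max(np.abs(quasi_interp(u, h, D=2.0, x=x) - u(x)))
    print(h, err)
```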
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251689910888672, "perplexity": 1021.8205168931845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00739.warc.gz"}
http://mathhelpforum.com/pre-calculus/190024-equation-manipulation.html
Math Help - Equation manipulation 1. Equation manipulation I was hoping that some of you with a lot more experience than me would check my work below for any blatant errors or incorrect assumptions, I have marked the parts I'm unsure about. I have deliberately included every step for clarity. Two equations: $b=u^2+4uv+2v^2$ ......... (1) $a^2+c^2=2(u^2+2uv+2v^2)^2$............ (2) Restrictions: u and v are integers $u>v\sqrt{2}>0$ a, b and c are integers a > b > c > 0 I am trying to prove, given the 2 equations and restrictions, that there are no valid values for a, b or c. Proof that any one of them can't be an integer within the restrictions is enough. From (2) $\left(\dfrac{a+c}{2}\right)^2+\left(\dfrac{a-c}{2}\right)^2=(u^2+2uv+2v^2)^2$ manipulating the RHS (I'll call this Step 1 for a later question) $\left(\dfrac{a+c}{2}\right)^2+\left(\dfrac{a-c}{2}\right)^2=(u^2+2v^2 )^2+4uv(u^2+uv+2v^2)$ $let\ \ \ \ \ \left(\dfrac{a+c}{2}\right)^2=(u^2+2v^2 )^2\ \ \ \ \ \ \ \ \ \ \ \ (3)$ $and\ \ \ \ \left(\dfrac{a-c}{2}\right)^2=4uv(u^2+uv+2v^2)\ \ \ \ \ \ \ (4)$ everything ok so far? (I think the min ratio of u:v has increased here, but not important at the minute) from (3) .... $(a+c)^2=4(u^2+2v^2 )^2\ \ \ \ \ \ \ \ (5)$ $a+c=2(u^2+2v^2 )\ \ \ \ \ \ \ \ (6)$ from (4) .... $(a-c)^2=16uv(u^2+uv+2v^2)$ manipulating LHS $(a+c)^2-4ac=16uv(u^2+uv+2v^2)\ \ \ \ \ \ \ (7)$ $(5)-(7)\ \ \ 4ac=4(u^4-4u^3v-8uv^3+4v^4)$ $ac=u^4-4u^3v-8uv^3+4v^4\ \ \ \ \ \ \ \ (8)$ from (6) .... $c=2(u^2+2v^2 )-a$ substituting this in (8) ... $(2(u^2+2v^2 )-a)a=u^4-4u^3v-8uv^3+4v^4$ $a^2-2(u^2+2v^2 )a+(u^4-4u^3v-8uv^3+4v^4)=0$ $a=(u^2+2v^2 )+2\sqrt{u^3v+u^2v^2+2uv^3}$ for $a$ to be +ve integer $u^3v+u^2v^2+2uv^3$ must be a perfect square. $let\ \ \ z^2=u^3v+u^2v^2+2uv^3$ no manipulation here, just grouping... $z^2=(uv)^2+(u^2+2v^2)uv$ $(z+uv)(z-uv)=(u^2+2v^2)uv$ $so\ either\ z+uv=uv\ \ \ \ \ \ z=0$ $or\ \ z-uv=uv\ \ \ \ \ z=2uv$ are these two roots attained correctly? $z=0\implies\ a=u^2+2v^2\implies\ (from\ (6))\ c=u^2+2v^2\ \ \therefore\ a=c\ so\ z\not=0$ $z=2uv\implies\ a=(u^2+2v^2 )+4uv\ \ \therefore\ a=b\ so\ z\not=2uv$ So this proves that $a$ cannot have a value that fits within the given constraints. Question about Step 1. Does what I have shown only prove there's no valid $a$ when the equations are manipulated as in Step 1, or does it prove it conclusively? If not conclusively, there are infinite ways of manipulating the equation at that stage $\left(i.e.\ \ \ (u^{100}+v^{250})^2+whatever\right)$ If this is the case, is there any way to prove conclusively what I'm attempting? 2. Re: Equation manipulation $a+c = 2(u^2 + 2uv + 2v^2)^2$ divide both sides by 2 ... $\frac{a+c}{2} = (u^2 + 2uv + 2v^2)^2$ ... now, how did this next equation come about? From (2) $\left(\frac{a+c}{2}\right)^2 + \left(\frac{a-c}{2}\right)^2 = (u^2 + 2uv + 2v^2)^2$ are you saying $\frac{a+c}{2} = \left(\frac{a+c}{2}\right)^2 + \left(\frac{a-c}{2}\right)^2$ ? 3. Re: Equation manipulation sigh, sorry, I checked and rechecked the op and missed that typo I have edited the 2nd equation to the correct $a^2+c^2=2(u^2+2uv+2v^2)^2$............ (2) sorry again
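The algebra in "Step 1" and the roots of the quadratic in a can be checked mechanically; a short sympy sketch of both (my own addition):

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
a = sp.symbols('a')

# Step 1: (u^2 + 2uv + 2v^2)^2 == (u^2 + 2v^2)^2 + 4uv(u^2 + uv + 2v^2)
lhs = (u**2 + 2*u*v + 2*v**2)**2
rhs = (u**2 + 2*v**2)**2 + 4*u*v*(u**2 + u*v + 2*v**2)
print(sp.expand(lhs - rhs))      # 0, so the regrouping in Step 1 is an identity

# The quadratic a^2 - 2(u^2 + 2v^2)a + (u^4 - 4u^3 v - 8u v^3 + 4v^4) = 0
quad = a**2 - 2*(u**2 + 2*v**2)*a + (u**4 - 4*u**3*v - 8*u*v**3 + 4*v**4)
print(sp.solve(quad, a))
# two roots, equal to (u^2 + 2v^2) +/- 2*sqrt(u^3*v + u^2*v^2 + 2*u*v^3)
```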
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814667820930481, "perplexity": 877.8821174055695}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299877.5/warc/CC-MAIN-20150323172139-00020-ip-10-168-14-71.ec2.internal.warc.gz"}
https://brilliant.org/problems/mathematical-expressions/
# Mathematical Expressions

When writing a math expression, any time there is an open bracket "$$($$", it is eventually followed by a closed bracket "$$)$$". When we have a complicated expression, there may be several brackets nested amongst each other, such as in the expression $$(x+1)*((x-2) + 3(x-4)\times(x^2 + 7\times(3x + 4)))$$. If we removed all the symbols other than the brackets from the expression, we would be left with the arrangement $$()(()()(())).$$ For any arrangement of brackets, it could have come from a valid mathematical expression if and only if for every place in the sequence, the number of open brackets before that place is at least as large as the number of closed brackets. If $$34$$ open brackets and $$34$$ closed brackets are randomly arranged, the probability that the resulting arrangement could have come from a valid mathematical expression can be expressed as $$\frac{a}{b}$$ where $$a$$ and $$b$$ are coprime positive integers. What is the value of $$a + b$$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282261729240417, "perplexity": 151.01458925155148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00496.warc.gz"}
https://en.zdam.xyz/problem/12992/
#### Problem 35E

35. In the study of ecosystems, predator-prey models are often used to study the interaction between species. Consider populations of tundra wolves, given by $W(t)$, and caribou, given by $C(t)$, in northern Canada. The interaction has been modeled by the equations $$\frac{d C}{d t}=a C-b C W \quad \frac{d W}{d t}=-c W+d C W$$

(a) What values of $d C / d t$ and $d W / d t$ correspond to stable populations?

(b) How would the statement "The caribou go extinct" be represented mathematically?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.93108731508255, "perplexity": 1832.9875428755377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00615.warc.gz"}
http://mathhelpforum.com/calculus/164757-integrating-enclosed-area.html
# Math Help - Integrating enclosed area

1. ## Integrating enclosed area

Calculate the area enclosed by x = 9-y^2 and x = 5 in two ways: as an integral along the y-axis and as an integral along the x-axis.

Please tell me if my figure is correct, and also if the shaded area is also the area I am looking for. Also is there any formulas I need to use for this problem ? How do I do this both ways ?

Thank You

Attached Thumbnails

2. Yes, your graph is fine.

To integrate over the x-axis, you need $\displaystyle\int_{5}^9f(x)dx$

so $x=9-y^2\Rightarrow\ y^2=9-x\Rightarrow\ y=\pm\sqrt{9-x}=f(x)$

Take the positive one and double your result as the x-axis is an axis of symmetry.

$2\displaystyle\int_{5}^9\left(9-x\right)^{0.5}dx$

To integrate over the y-axis, you could integrate $f(y)$ from $y=-3$ to $y=3.$ If you like, again use the x-axis as an axis of symmetry and double the integral from $y=0$ to $y=3.$ This integral includes the unshaded part against the y-axis, so you have a few ways to cope with that. The easiest way is to subtract 5 from $x=f(y)$

$x-5=f(y)-5=4-y^2$

The new function is $x=4-y^2$ so the limits of integration become $\pm2$

You can then calculate $2\displaystyle\int_{0}^2\left(4-y^2\right)dy$

3. Thank You very much. I'll give it a shot.

4. The first integral can be evaluated by making a substitution, while the 2nd one doesn't require any substitution.

5. Thank you for all you help.

6. Ok so I solved the first one I get .5. I let u = 9-x , du = -dx so I multiplied by a negative inside the integral and a negative outside. then I get, -2(.5(9-x)^-.5). Then using the fundamental theorem of calculus I used the limits 5 to 9 and my final answer was .5. is this correct ?

7. Originally Posted by wair
Ok so I solved the first one I get .5. I let u = 9-x , du = -dx so I multiplied by a negative inside the integral and a negative outside. then I get, -2(.5(9-x)^-.5). Then using the fundamental theorem of calculus I used the limits 5 to 9 and my final answer was .5. is this correct ?
Not quite, you differentiate to introduce the substitution, but you are also differentiating when you should be integrating!

$u=9-x\Rightarrow\ du=-dx\rightarrow\ dx=-du$

$x=5\Rightarrow\ u=4$

$x=9\Rightarrow\ u=0$

the integral becomes

$\displaystyle\ -2\int_{4}^0u^{0.5}du=(-2)\left[-\int_{0}^4u^{0.5}du\right]=2\int_{0}^4u^{0.5}du$

$=2\displaystyle\left[\frac{u^{\frac{3}{2}}}{\frac{3}{2}}\right]$ from u=0 to u=4

8. so I just had to change my limits ? and then invert them to get rid of the what was it called "signed area" ? But I don't understand how you got the last step. wouldn't it be .5(u)^-.5 ?

9. Originally Posted by wair
so I just had to change my limits ? and then invert them to get rid of the what was it called "signed area" ? But I don't understand how you got the last step. wouldn't it be .5(u)^-.5 ?
No, that's what you get when you differentiate.

$\displaystyle\frac{d}{du}u^{0.5}=0.5u^{-0.5}$

But $\displaystyle\int{u^{0.5}}du=\frac{u^{1.5}}{1.5}+c$

because $\displaystyle\frac{d}{du}\left[\frac{u^{1.5}}{1.5}+c\right]=\frac{1.5}{1.5}u^{0.5}$

To integrate, apply differentiation in reverse.

10. Oh right right. I understand now thank you .

11. Ok so for the first one I get -21.1. (1/1.5)(9-x)^1.5. 2(7.45)-2(18)=-21.1. Is that correct ?

12. I get 5.3 for the second one. shouldn't I get the same answer for both ?

13. Originally Posted by wair
Ok so for the first one I get -21.1. (1/1.5)(9-x)^1.5. 2(7.45)-2(18)=-21.1. Is that correct ?
If you work with 9-x instead of u, then there is no need to change the limits of integration.

$2\displaystyle\int_{x=5}^{x=9}{(9-x)^{0.5}}dx=-2\int_{x=5}^{x=9}{(9-x)^{0.5}}d(9-x)$

$=2\displaystyle\int_{x=9}^5{(9-x)^{0.5}}d(9-x)=2\frac{(9-x)^{1.5}}{1.5}$ from x=9 to 5 (start calculations at x=5)

$=2\displaystyle\frac{(9-5)^{1.5}-(5-5)^{1.5}}{1.5}=2\frac{4^{1.5}}{1.5}=2\frac{8}{1.5}=2\frac{16}{3}=\frac{32}{3}$

You must get a positive value for area, so as our graph is symmetrical about the x-axis, we integrate the part above the x-axis and double it.

Using the u-substitution, we get

$\displaystyle\ 2\int_{0}^4{u^{0.5}}du=2\left[\frac{u^{1.5}}{1.5}\right]$ from u=0 to u=4 (start calculations at u=4)

$\displaystyle\ =2\frac{4^{1.5}}{1.5}-0=2\frac{8}{1.5}=\frac{16}{1.5}=\frac{32}{3}$

For the 2nd integral..

$\displaystyle\ 2\int_{0}^2{\left(4-y^2\right)}dy=2\int_{0}^2{4}dy-2\int_{0}^2{y^2}dy=2\left[4y-\frac{y^3}{3}\right]$ evaluated from y=0 to y=2 (start calculations at y=2)

comes out as the same value. Review your calculations for both integrals.

14. Thank you very much. So if I want to use the new limits I don't need to replace u with what u equals. And both values must be the same. OK Thank you

15. Originally Posted by wair
Thank you very much. So if I want to use the new limits I don't need to replace u with what u equals. And both values must be the same. OK Thank you
I think you mean.... "don't need to replace u with 9-x".

Yes, it is simplest to work with "u", having made the substitution, then integrate f(u)du using the "u" limits. This area you are now calculating is on a different graph but evaluating the new "u" integral will give the same area as the shaded region on the original graph.

If you prefer not to change the limits, here's how to do it....

$\displaystyle\ -2\int_{x=5}^{x=9}{u^{\frac{1}{2}}}du=-2\left[\frac{u^{\frac{3}{2}}}{\frac{3}{2}}\right]$ from x=5 to x=9 (start calculations at x=9)

$=-\displaystyle\frac{4}{3} (9-x)^{\frac{3}{2}}$ from x=5 to x=9

$=-\displaystyle\frac{4}{3}\left[ (9-9)^{1.5}-(9-5)^{1.5}\right]=\frac{32}{3}$

You've got to practice. Keep going until you can reproduce the solution for both integrals.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833219051361084, "perplexity": 675.9462067192003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864953696.93/warc/CC-MAIN-20160428173553-00218-ip-10-239-7-51.ec2.internal.warc.gz"}
http://categoricalexamples.com/cgi-bin/py/theorem.py?id=61
# Stacks 01TW (1) theorem

Let $X$, $Y$ be schemes and $f : X \to Y$ a morphism satisfying the following condition:

• $f$ locally of finite presentation

Then

• $f$ locally of finite type
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641385674476624, "perplexity": 2115.15937562536}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703549416.62/warc/CC-MAIN-20210124141945-20210124171945-00587.warc.gz"}
http://harvard.voxcharta.org/tag/rotation-curves/
# Posts Tagged rotation curves ## Recent Postings from rotation curves ### Formation and evolution of blue compact dwarfs: The origin of their steep rotation curves The origin of the observed steep rotation curves of blue compact dwarf galaxies (BCDs) remains largely unexplained by theoretical models of BCD formation. We therefore investigate the rotation curves in BCDs formed from mergers between gas- rich dwarf irregular galaxies based on the results of numerical simulations for BCD formation. The principal results are as follows. The dark matter of merging dwarf irregulars undergoes a central concentration so that the central density can become up to 6 times higher than those of the initial dwarf irregulars. However, the more compact dark matter halo alone can not reproduce the gradient differences observed between dwarf irregulars and BCDs. We provide further support that the central concentration of gas due to rapid gas-transfer to the central regions of dwarf-dwarf mergers is responsible for the observed difference in rotation curve gradients. The BCDs with central gas concentration formed from merging can thus show steeply rising rotation curves in their central regions. Such gas concentration is also responsible for central starbursts of BCDs and the high central surface brightness and is consistent with previous BCD studies. We discuss the relationship between rotational velocity gradient and surface brightness, the dependence of BCD rotation curves on star formation threshold density, progenitor initial profile, interaction type and merger mass ratio, as well as potential evolutionary links between dwarf irregulars, BCDs and compact dwarf irregulars. ### Formation and evolution of blue compact dwarfs: The origin of their steep rotation curves [Replacement] The origin of the observed steep rotation curves of blue compact dwarf galaxies (BCDs) remains largely unexplained by theoretical models of BCD formation. We therefore investigate the rotation curves in BCDs formed from mergers between gas- rich dwarf irregular galaxies based on the results of numerical simulations for BCD formation. The principal results are as follows. The dark matter of merging dwarf irregulars undergoes a central concentration so that the central density can become up to 6 times higher than those of the initial dwarf irregulars. However, the more compact dark matter halo alone can not reproduce the gradient differences observed between dwarf irregulars and BCDs. We provide further support that the central concentration of gas due to rapid gas-transfer to the central regions of dwarf-dwarf mergers is responsible for the observed difference in rotation curve gradients. The BCDs with central gas concentration formed from merging can thus show steeply rising rotation curves in their central regions. Such gas concentration is also responsible for central starbursts of BCDs and the high central surface brightness and is consistent with previous BCD studies. We discuss the relationship between rotational velocity gradient and surface brightness, the dependence of BCD rotation curves on star formation threshold density, progenitor initial profile, interaction type and merger mass ratio, as well as potential evolutionary links between dwarf irregulars, BCDs and compact dwarf irregulars. 
### The stellar mass-halo mass relation of isolated field dwarfs: a critical test of $\Lambda$CDM at the edge of galaxy formation We fit the rotation curves of isolated dwarf galaxies to directly measure the stellar mass-halo mass relation ($M_*-M_{200}$) over the mass range $5 \times 10^5 < M_{*}/{\rm M}_\odot < 10^{8}$. By accounting for cusp-core transformations due to stellar feedback, we find a monotonic relation with remarkably little scatter. Such monotonicity implies that abundance matching should yield a similar $M_*-M_{200}$ if the cosmological model is correct. Using the 'field galaxy' stellar mass function from the Sloan Digital Sky Survey (SDSS) and the halo mass function from the $\Lambda$ Cold Dark Matter Bolshoi simulation, we find remarkable agreement between the two. This holds down to $M_{200} \sim 5 \times 10^9$ M$_\odot$, and to $M_{200} \sim 5 \times 10^8$ M$_\odot$ if we assume a power law extrapolation of the SDSS stellar mass function below $M_* \sim 10^7$ M$_\odot$. However, if instead of SDSS we use the stellar mass function of nearby galaxy groups, then the agreement is poor. This occurs because the group stellar mass function is shallower than that of the field below $M_* \sim 10^9$ M$_\odot$, recovering the familiar 'missing satellites' and 'too big to fail' problems. Our result demonstrates that both problems are confined to group environments and must, therefore, owe to 'galaxy formation physics' rather than exotic cosmology. Finally, we repeat our analysis for a $\Lambda$ Warm Dark Matter cosmology, finding that it fails at 68% confidence for a thermal relic mass of $m_{\rm WDM} < 1.25$ keV, and $m_{\rm WDM} < 2$ keV if we use the power law extrapolation of SDSS. We conclude by making a number of predictions for future surveys based on these results. ### SPARC: Mass Models for 175 Disk Galaxies with Spitzer Photometry and Accurate Rotation Curves We introduce SPARC (Spitzer Photometry & Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 um and high-quality rotation curves from previous HI/Halpha studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (~5 dex), and surface brightnesses (~4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass-HI mass relation and the stellar radius-HI radius relation have significant intrinsic scatter, while the HI mass-radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic-to-observed velocity (Vbar/Vobs) for different characteristic radii and values of the stellar mass-to-light ratio (M/L) at [3.6]. Assuming M/L=0.5 Msun/Lsun (as suggested by stellar population models) we find that (i) the gas fraction linearly correlates with total luminosity, (ii) the transition from star-dominated to gas-dominated galaxies roughly corresponds to the transition from spiral galaxies to dwarf irregulars in line with density wave theory; and (iii) Vbar/Vobs varies with luminosity and surface brightness: high-mass, high-surface-brightness galaxies are nearly maximal, while low-mass, low-surface-brightness galaxies are submaximal. These basic properties are lost for low values of M/L=0.2 Msun/Lsun as suggested by the DiskMass survey. The mean maximum-disk limit in bright galaxies is M/L=0.7 Msun/Lsun at [3.6]. The SPARC data are publicly available and represent an ideal test-bed for models of galaxy formation. 
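The bookkeeping behind mass models of this kind combines the rotation velocity contributed by each baryonic component in quadrature, with the stellar terms scaled by the assumed mass-to-light ratio. The sketch below only illustrates that standard combination with made-up numbers; it is not SPARC data or the authors' code, and the 0.5 Msun/Lsun value is just the fiducial ratio quoted in the abstract.

```python
# Illustrative only: combining baryonic components into Vbar, as in standard
# rotation-curve decompositions (placeholder numbers, not actual SPARC data).
import numpy as np

upsilon_star = 0.5   # assumed stellar mass-to-light ratio at 3.6 um [Msun/Lsun]

# Component rotation velocities at some radius [km/s]; the V*|V| form below keeps
# the sign of a component whose gravity points outward (e.g. a central gas depression).
v_gas, v_disk, v_bulge = 30.0, 80.0, 0.0
v_obs = 110.0        # observed rotation velocity [km/s]

v_bar_sq = v_gas*np.abs(v_gas) + upsilon_star*(v_disk*np.abs(v_disk) + v_bulge*np.abs(v_bulge))
v_bar = np.sign(v_bar_sq) * np.sqrt(np.abs(v_bar_sq))

print(f"Vbar/Vobs = {v_bar / v_obs:.2f}")  # a ratio below 1 leaves room for a dark halo
```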
### Self-gravitating fluid systems and galactic dark matter In this work we model galaxy-like structures as self-gravitating fluids, and analyse their properties in the Newtonian framework. For isotropic fluids, we show that this leads to a generalised Hernquist profile that admits flat rotation curves at large radial distances. For two-fluid component models, we show analytically that physicality of the solutions demand that one of the fluids is necessarily exotic, i.e has negative pressure, excepting for the case where the density profile is that of the isothermal sphere. We reconcile this result with a corresponding relativistic analysis. Our work can be applied to cases where the gravitating fluids are interpreted as dark fluids, whose microscopic constituents are dark matter particles, which may accompany or cause gravitational collapse giving birth to galaxy like structures. We elaborate on such collapse processes, which might lead to naked singularities. ### Self-gravitating fluid systems and galactic dark matter [Cross-Listing] In this work we model galaxy-like structures as self-gravitating fluids, and analyse their properties in the Newtonian framework. For isotropic fluids, we show that this leads to a generalised Hernquist profile that admits flat rotation curves at large radial distances. For two-fluid component models, we show analytically that physicality of the solutions demand that one of the fluids is necessarily exotic, i.e has negative pressure, excepting for the case where the density profile is that of the isothermal sphere. We reconcile this result with a corresponding relativistic analysis. Our work can be applied to cases where the gravitating fluids are interpreted as dark fluids, whose microscopic constituents are dark matter particles, which may accompany or cause gravitational collapse giving birth to galaxy like structures. We elaborate on such collapse processes, which might lead to naked singularities. ### A note on the predictability of flat galactic rotation curves Based on an exact solution of the Einstein field equations, it is proposed in this note that the dark-matter hypothesis could have led to the prediction of flat galactic rotation curves long before the discovery thereof by assuming that on large scales the matter in the Universe, including dark matter, is a perfect fluid. ### A note on the predictability of flat galactic rotation curves [Replacement] Based on an exact solution of the Einstein field equations, it is proposed in this note that the dark-matter hypothesis could have led to the prediction of flat galactic rotation curves long before the discovery thereof by assuming that on large scales the matter in the Universe, including dark matter, is a perfect fluid. ### Extended HI disks in nearby spiral galaxies In this short write-up, I will concentrate on a few topics of interest. In the 1970s I found very extended HI disks in galaxies such as NGC 5055 and NGC 2841, out to 2 - 2.5 times the Holmberg radius. Since these galaxies are warped, a "tilted ring model" allows rotation curves to be derived, and evidence for dark matter to be found. The evaluation of the amount of dark matter is hampered by a disk-halo degeneracy, which can possibly be broken by observations of velocity dispersions in both the MgI region and the CaII region. 
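Several of the postings above turn on the textbook link between an isothermal, $\rho \propto r^{-2}$ density profile and a flat rotation curve. The toy calculation below (an illustration with arbitrary normalisation, not taken from any of the papers) makes that link explicit: the enclosed mass grows linearly with radius, so $v_c=\sqrt{GM(<r)/r}$ is the same at every radius.

```python
# Toy illustration: a rho = rho0 (r0/r)^2 halo gives M(<r) proportional to r,
# hence a flat circular-velocity curve v_c = sqrt(G M(<r) / r).
import numpy as np

G = 4.30091e-6            # gravitational constant in kpc (km/s)^2 / Msun
rho0, r0 = 1.0e7, 1.0     # density normalisation [Msun/kpc^3] and scale radius [kpc]

r = np.linspace(1.0, 30.0, 4)                    # sample radii in kpc
mass_enclosed = 4.0 * np.pi * rho0 * r0**2 * r   # integral of rho over the sphere of radius r
v_circ = np.sqrt(G * mass_enclosed / r)          # independent of r -> flat rotation curve

print(v_circ)  # the same value at every radius
```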
### Cosmological Simulations of Dwarf Galaxies with Cosmic Ray Feedback We perform zoom-in cosmological simulations of a suite of dwarf galaxies, examining the impact of cosmic-rays generated by supernovae, including the effect of diffusion. We first look at the effect of varying the uncertain cosmic ray parameters by repeatedly simulating a single galaxy. Then we fix the comic ray model and simulate five dwarf systems with virial masses range from 8-30 $\times 10^{10}$ Msun. We find that including cosmic ray feedback (with diffusion) consistently leads to disk dominated systems with relatively flat rotation curves and constant star formation rates. In contrast, our purely thermal feedback case results in a hot stellar system and bursty star formation. The CR simulations very well match the observed baryonic Tully-Fisher relation, but have a lower gas fraction than in real systems. We also find that the dark matter cores of the CR feedback galaxies are cuspy, while the purely thermal feedback case results in a substantial core. ### Testing Feedback-Modified Dark Matter Haloes with Galaxy Rotation Curves: Estimation of Halo Parameters and Consistency with $\Lambda$CDM Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW), which contradicts observations of gas kinematics in low mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high resolution, cosmological hydrodynamic simulations (Di Cintio et al. 2014, DC14) predict that inner density profiles depend systematically on the ratio of stellar to DM mass (M$_*$/M$_{\rm halo}$). Using a Markov Chain Monte Carlo approach, we test the NFW and the M$_*$/M$_{\rm halo}$-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new SPARC data set. These galaxies all have extended HI rotation curves from radio interferometry as well as accurate stellar mass density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data than does the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation curve fits naturally recover both the mass-concentration relation predicted by $\Lambda$CDM and the stellar mass-halo mass relation inferred from abundance matching. Halo profiles modified by baryonic processes are therefore more consistent with expectations from $\Lambda$CDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models which neglect baryonic physics. Our results reconcile observations of galaxies with $\Lambda$CDM expectations, offering a solution to the decade long cusp-core discrepancy. ### Baryonic Distributions in Galaxy Dark Matter Haloes I: New Observations of Neutral and Ionized Gas Kinematics We present a combination of new and archival neutral hydrogen (HI) observations and new ionized gas spectroscopic observations for sixteen galaxies in the statistically representative EDGES kinematic sample. HI rotation curves are derived from new and archival radio synthesis observations from the Very Large Array (VLA) as well as processed data products from the Westerbork Radio Synthesis Telescope (WSRT). 
The HI rotation curves are supplemented with optical spectroscopic integral field unit (IFU) observations using SparsePak on the WIYN 3.5 m telescope to constrain the central ionized gas kinematics in twelve galaxies. The full rotation curves of each galaxy are decomposed into baryonic and dark matter halo components using 3.6$\mu$m images from the Spitzer Space Telescope for the stellar content, the neutral hydrogen data for the atomic gas component, and, when available, CO data from the literature for the molecular gas component. Differences in the inferred distribution of mass are illustrated under fixed stellar mass-to-light ratio (M/L) and maximum disc/bulge assumptions in the rotation curve decomposition. ### Dynamics of galaxies and clusters in \textit{refracted gravity} We investigate the proof of concept and the implications of \textit{refracted gravity}, a novel modified gravity aimed to solve the discrepancy between the luminous and the dynamical mass of cosmic structures without resorting to dark matter. Inspired by the behavior of electric fields in matter, refracted gravity introduces a gravitational permittivity that depends on the local mass density and modifies the standard Poisson equation. The resulting gravitational field can become more intense than the Newtonian field and can mimic the presence of dark matter. We show that the refracted gravitational field correctly describes (1) the rotation curves and the Tully-Fisher relation of disk galaxies; and (2) the observed temperature profile of the X-ray gas of galaxy clusters. According to these promising results, we conclude that refracted gravity deserves further investigation. ### Lectures on Dark Matter Physics [Cross-Listing] Rotation curve measurements from the 1970s provided the first strong indication that a significant fraction of matter in the Universe is non-baryonic. In the intervening years, a tremendous amount of progress has been made on both the theoretical and experimental fronts in the search for this missing matter, which we now know constitutes nearly 85% of the Universe's matter density. These series of lectures, first given at the TASI 2015 summer school, provide an introduction to the basics of dark matter physics. They are geared for the advanced undergraduate or graduate student interested in pursuing research in high-energy physics. The primary goal is to build an understanding of how observations constrain the assumptions that can be made about the astro- and particle physics properties of dark matter. The lectures begin by delineating the basic assumptions that can be inferred about dark matter from rotation curves. A detailed discussion of thermal dark matter follows, motivating Weakly Interacting Massive Particles, as well as lighter-mass alternatives. As an application of these concepts, the phenomenology of direct and indirect detection experiments is discussed in detail. ### Lectures on Dark Matter Physics Rotation curve measurements from the 1970s provided the first strong indication that a significant fraction of matter in the Universe is non-baryonic. In the intervening years, a tremendous amount of progress has been made on both the theoretical and experimental fronts in the search for this missing matter, which we now know constitutes nearly 85% of the Universe's matter density. These series of lectures, first given at the TASI 2015 summer school, provide an introduction to the basics of dark matter physics. 
They are geared for the advanced undergraduate or graduate student interested in pursuing research in high-energy physics. The primary goal is to build an understanding of how observations constrain the assumptions that can be made about the astro- and particle physics properties of dark matter. The lectures begin by delineating the basic assumptions that can be inferred about dark matter from rotation curves. A detailed discussion of thermal dark matter follows, motivating Weakly Interacting Massive Particles, as well as lighter-mass alternatives. As an application of these concepts, the phenomenology of direct and indirect detection experiments is discussed in detail. ### Declining rotation curves of galaxies as a test of gravitational theory Unlike Newtonian dynamics which is linear and obeys the strong equivalence principle, in any nonlinear gravitation such as Milgromian dynamics (MOND), the strong version of the equivalence principle is violated and the gravitational dynamics of a system is influenced by the external gravitational field in which it is embedded. This so called External Field Effect (EFE) is one of the important implications of MOND and provides a special context to test Milgromian dynamics. Here, we study the rotation curves (RCs) of 18 spiral galaxies and find that their shapes constrain the EFE. We show that the EFE can successfully remedy the overestimation of rotation velocities in 80\% of the sample galaxies in Milgromian dynamics fits by decreasing the velocity in the outer part of the RCs. We compare the implied external field with the gravitational field for non-negligible nearby sources of each individual galaxy and find that in many cases it is compatible with the EFE within the uncertainties. We therefore argue that in the framework of Milgromian dynamics, one can constrain the gravitational field induced from the environment of galaxies using their RCs. We finally show that taking into account the EFE yields more realistic values for the stellar mass-to-light ratio in terms of stellar population synthesis than the ones implied without the EFE. ### Rotation curve fitting and its fatal attraction to cores in realistically simulated galaxy observations We study the role of systematic effects in observational studies of the core/cusp problem under the minimum disc approximation using a suite of high-resolution (25-pc softening length) hydrodynamical simulations of dwarf galaxies. We mimic kinematical observations in a realistic manner at different distances and inclinations, and fit the resulting rotation curves with two analytical models commonly used to differentiate cores from cusps in the dark matter distribution. We find that the cored pseudo-isothermal sphere (P-ISO) model is often strongly favoured by the reduced $\chi^2_\nu$ of the fits in spite of the fact that our simulations contain cuspy Navarro-Frenk-White profiles (NFW) by construction. We show that even idealized measurements of the gas circular motions can lead to the incorrect answer if pressure support corrections, with a typical size of order ~5 km s$^{-1}$ in the central kiloparsec, are neglected; the results are more misleading for closer galaxies because the inner region, where the effect of pressure support is most significant, is better sampled. They also tend to be worse for highly inclined galaxies as a result of projection effects. Rotation curve fits at 10 Mpc favour the P-ISO model in more than 70% of the cases. 
At 80 Mpc, between 40% and 78% of the galaxies indicate the fictitious presence of a dark matter core. The coefficients of our best-fit models agree well with those reported in observational studies; therefore, we conclude that NFW haloes can not be ruled out reliably from this type of rotation curve analysis. ### Flat rotation curves and a non-evolving Tully-Fisher relation from KMOS galaxies at z~1 The study of the evolution of star-forming galaxies requires the determination of accurate kinematics and scaling relations out to high redshift. In this paper, we select a sample of 18 galaxies at z~1, observed in the H-alpha emission-line with KMOS, to derive accurate kinematics using a novel 3D analysis technique. We use the new code 3D-Barolo, that models the galaxy emission directly in the 3D observational space, without the need to extract kinematic maps. This technique's major advantage is that it is not affected by beam smearing and thus it enables accurate determination of rotation velocity and internal velocity dispersion, even at low spatial resolution. We find that: 1) the rotation curves of these z~1 galaxies rise very steeply within few kiloparsecs and remain flat out to the outermost radius and 2) the H-alpha velocity dispersions are low, ranging from 15 to 40 km/s, which leads to V/sigma = 3-10. These characteristics are remarkably similar to those of disc galaxies in the local Universe. Finally, we also report no evolution of the Tully-Fisher relation, as our sample lies precisely on the same relation of local spiral galaxies. These findings are more robust than those obtained with previous methods because of our 3D approach. Two-dimensional techniques with partial or absent corrections for beam smearing can systematically lead to the overestimation of velocity dispersions and underestimation of rotation velocities, which result in the inaccurate placement of galaxies in the Tully-Fisher diagram. Our results show that disc galaxies are kinematically mature and rotation-dominated already at z~1. ### Flat rotation curves and a non-evolving Tully-Fisher relation from KMOS galaxies at z~1 [Replacement] The study of the evolution of star-forming galaxies requires the determination of accurate kinematics and scaling relations out to high redshift. In this paper, we select a sample of 18 galaxies at z~1, observed in the H-alpha emission-line with KMOS, to derive accurate kinematics using a novel 3D analysis technique. We use the new code 3D-Barolo, that models the galaxy emission directly in the 3D observational space, without the need to extract kinematic maps. This technique's major advantage is that it is not affected by beam smearing and thus it enables accurate determination of rotation velocity and internal velocity dispersion, even at low spatial resolution. We find that: 1) the rotation curves of these z~1 galaxies rise very steeply within few kiloparsecs and remain flat out to the outermost radius and 2) the H-alpha velocity dispersions are low, ranging from 15 to 40 km/s, which leads to V/sigma = 3-10. These characteristics are remarkably similar to those of disc galaxies in the local Universe. Finally, we also report no evolution of the Tully-Fisher relation, as our sample lies precisely on the same relation of local spiral galaxies. These findings are more robust than those obtained with previous methods because of our 3D approach. 
Two-dimensional techniques with partial or absent corrections for beam smearing can systematically lead to the overestimation of velocity dispersions and underestimation of rotation velocities, which result in the inaccurate placement of galaxies in the Tully-Fisher diagram. Our results show that disc galaxies are kinematically mature and rotation-dominated already at z~1. ### On possible tachyonic state of neutrino dark matter [Cross-Listing] We revive the historically first dark matter model based on neutrinos, but with an additional assumption that neutrinos might exist in tachyonic almost sterile states. To this end, we propose a group-theoretical algorithm for the description of tachyons. The key point is that we employ a distinct tachyon Lorentz group with new (superluminal) parametrization which does not lead to violation of causality and unitarity. Our dark matter model represents effectively scalar tachyonic neutrino-antineutrino conglomerate. Distributed all over the universe, such fluid behaves as stable isothermal/stiff medium which produces somewhat denser regions (halos') around galaxies and clusters. To avoid the central singularity inherent to the isothermal profile, we apply a special smoothing algorithm which yields density distributions and rotation curves consistent with observational data. ### On possible tachyonic state of neutrino dark matter [Cross-Listing] We revive the historically first dark matter model based on neutrinos, but with an additional assumption that neutrinos might exist in tachyonic almost sterile states. To this end, we propose a group-theoretical algorithm for the description of tachyons. The key point is that we employ a distinct tachyon Lorentz group with new (superluminal) parametrization which does not lead to violation of causality and unitarity. Our dark matter model represents effectively scalar tachyonic neutrino-antineutrino conglomerate. Distributed all over the universe, such fluid behaves as stable isothermal/stiff medium which produces somewhat denser regions (halos') around galaxies and clusters. To avoid the central singularity inherent to the isothermal profile, we apply a special smoothing algorithm which yields density distributions and rotation curves consistent with observational data. ### Halpha Kinematics of S4G Spiral Galaxies - III. Inner rotation curves We present a detailed study of the shape of the innermost part of the rotation curves of a sample of 29 nearby spiral galaxies, based on high angular and spectral resolution kinematic Halpha Fabry-Perot observations. In particular, we quantify the steepness of the rotation curve by measuring its slope dRvc(0). We explore the relationship between the inner slope and several galaxy parameters, such as stellar mass, maximum rotational velocity, central surface brightness ({\mu}0), bar strength and bulge-to-total ratio. Even with our limited dynamical range, we find a trend for low-mass galaxies to exhibit shallower rotation curve inner slopes than high-mass galaxies, whereas steep inner slopes are found exclusively in high-mass galaxies. This trend may arise from the relationship between the total stellar mass and the mass of the bulge, which are correlated among them. We find a correlation between the inner slope of the rotation curve and the morphological T-type, complementary to the scaling relation between dRvc(0) and {\mu}0 previously reported in the literature. 
Although we find that the inner slope increases with the Fourier amplitude A2 and decreases with the bar torque Qb, this may arise from the presence of the bulge implicit in both A2 and Qb. As previously noted in the literature, the more compact the mass in the central parts of a galaxy (more concretely, the presence of a bulge), the steeper the inner slopes. We conclude that the baryonic matter dominates the dynamics in the central parts of our sample galaxies. ### Dark matter as a condensate: Deduction of microscopic properties In the present work we model dark matter as a Bose-Einstein condensate and the main goal is the deduction of the microscopic properties, namely, mass, number of particles, and scattering length, related to the particles comprised in the corresponding condensate. This task is done introducing in the corresponding model the effects of the thermal cloud of the system. Three physical conditions are imposed, i.e., mechanical equilibrium of the condensate, explanation of the rotation curves of stars belonging to dwarf galaxies, and, finally, the deflection of light due to the presence of dark matter. These three aforementioned expressions allow us to cast the features of the particles in terms of detectable astrophysical variables. Finally, the model is contrasted against observational data and in this manner we obtain values for the involved microscopic parameters of the condensate. The deduced results are compared with previous results in which dark matter has not been considered a condensate. The main conclusion is that they do not coincide. ### Dark matter as a condensate: Deduction of microscopic properties [Replacement] In the present work we model dark matter as a Bose-Einstein condensate and the main goal is the deduction of the microscopic properties, namely, mass, number of particles, and scattering length, related to the particles comprised in the corresponding condensate. This task is done introducing in the corresponding model the effects of the thermal cloud of the system. Three physical conditions are imposed, i.e., mechanical equilibrium of the condensate, explanation of the rotation curves of stars belonging to dwarf galaxies, and, finally, the deflection of light due to the presence of dark matter. These three aforementioned expressions allow us to cast the features of the particles in terms of detectable astrophysical variables. Finally, the model is contrasted against observational data and in this manner we obtain values for the involved microscopic parameters of the condensate. The statistical errors are seven and eighteen percent for the scattering length and mass of the dark matter particle, respectively. ### Modified Dark Matter [Cross-Listing] Modified dark matter (MDM, formerly known as MoNDian dark matter) is a phenomenological model of dark matter, inspired by quantum gravity. We review the construction of MDM by generalizing entropic gravity to de-Sitter space as is appropriate for an accelerating universe (in accordance with the Lambda-CDM model). Unlike cold dark matter models, the MDM mass profile depends on the baryonic mass. We successfully fit the rotation curves to a sample of 30 local spiral galaxies with a single free parameter (viz., the mass-to-light ratio for each galaxy). We show that dynamical and observed masses agree in a sample of 93 galactic clusters. We also comment on strong gravitational lensing in the context of MDM. 
### Modified Dark Matter [Cross-Listing] Modified dark matter (MDM, formerly known as MoNDian dark matter) is a phenomenological model of dark matter, inspired by quantum gravity. We review the construction of MDM by generalizing entropic gravity to de-Sitter space as is appropriate for an accelerating universe (in accordance with the Lambda-CDM model). Unlike cold dark matter models, the MDM mass profile depends on the baryonic mass. We successfully fit the rotation curves to a sample of 30 local spiral galaxies with a single free parameter (viz., the mass-to-light ratio for each galaxy). We show that dynamical and observed masses agree in a sample of 93 galactic clusters. We also comment on strong gravitational lensing in the context of MDM. ### Modified Dark Matter Modified dark matter (MDM, formerly known as MoNDian dark matter) is a phenomenological model of dark matter, inspired by quantum gravity. We review the construction of MDM by generalizing entropic gravity to de-Sitter space as is appropriate for an accelerating universe (in accordance with the Lambda-CDM model). Unlike cold dark matter models, the MDM mass profile depends on the baryonic mass. We successfully fit the rotation curves to a sample of 30 local spiral galaxies with a single free parameter (viz., the mass-to-light ratio for each galaxy). We show that dynamical and observed masses agree in a sample of 93 galactic clusters. We also comment on strong gravitational lensing in the context of MDM. ### Tachyonic models of dark matter We consider a spherically symmetric stationary problem in General Relativity, including a black hole, inflow of normal and tachyonic matter and outflow of tachyonic matter. Computations in a weak field limit show that the resulting concentration of matter around the black hole leads to gravitational effects equivalent to those associated with dark matter halo. In particular, the model reproduces asymptotically constant galactic rotation curves, if the tachyonic flows of the central supermassive black hole in the galaxy are considered as a main contribution. ### Tachyonic models of dark matter [Cross-Listing] We consider a spherically symmetric stationary problem in General Relativity, including a black hole, inflow of normal and tachyonic matter and outflow of tachyonic matter. Computations in a weak field limit show that the resulting concentration of matter around the black hole leads to gravitational effects equivalent to those associated with dark matter halo. In particular, the model reproduces asymptotically constant galactic rotation curves, if the tachyonic flows of the central supermassive black hole in the galaxy are considered as a main contribution. ### Asymmetric mass models of disk galaxies - I. Messier 99 Mass models of galactic disks traditionnally rely on axisymmetric density and rotation curves, paradoxically acting as if their most remarkable asymmetric features, like e.g. lopsidedness or spiral arms, were not important. In this article, we relax the axisymmetry approximation and introduce a methodology that derives 3D gravitational potentials of disk-like objects and robustly estimates the impacts of asymmetries on circular velocities in the disk mid-plane. Mass distribution models can then be directly fitted to asymmetric line-of-sight velocity fields. 
Applied to the grand-design spiral M99, the new strategy shows that circular velocities are highly non-uniform, particularly in the inner disk of the galaxy, as a natural response to the perturbed gravitational potential of luminous matter. A cuspy inner density profile of dark matter is found in M99, in the usual case where luminous and dark matter share the same centre. The impact of the velocity non-uniformity is to make the inner profile less steep, though the density remains cuspy. On another hand, a model where the halo is core-dominated and shifted by 2.2-2.5 kpc from the luminous mass centre is more appropriate to account for most of the kinematical lopsidedness evidenced in the velocity field of M99. However, the gravitational potential of luminous baryons is not asymmetric enough to explain the kinematical lopsidedness of the innermost regions, irrespective of the density shape of dark matter. This discrepancy points out the necessity of an additional dynamical process in these regions, maybe a lopsided distribution of dark matter. ### Asymmetric mass models of disk galaxies - I. Messier 99 [Replacement] Mass models of galactic disks traditionally rely on axisymmetric density and rotation curves, paradoxically acting as if their most remarkable asymmetric features, such as lopsidedness or spiral arms, were not important. In this article, we relax the axisymmetry approximation and introduce a methodology that derives 3D gravitational potentials of disk-like objects and robustly estimates the impacts of asymmetries on circular velocities in the disk midplane. Mass distribution models can then be directly fitted to asymmetric line-of-sight velocity fields. Applied to the grand-design spiral M99, the new strategy shows that circular velocities are highly nonuniform, particularly in the inner disk of the galaxy, as a natural response to the perturbed gravitational potential of luminous matter. A cuspy inner density profile of dark matter is found in M99, in the usual case where luminous and dark matter share the same center. The impact of the velocity nonuniformity is to make the inner profile less steep, although the density remains cuspy. On another hand, a model where the halo is core dominated and shifted by 2.2-2.5 kpc from the luminous mass center is more appropriate to explain most of the kinematical lopsidedness evidenced in the velocity field of M99. However, the gravitational potential of luminous baryons is not asymmetric enough to explain the kinematical lopsidedness of the innermost regions, irrespective of the density shape of dark matter. This discrepancy points out the necessity of an additional dynamical process in these regions: possibly a lopsided distribution of dark matter. ### Scale dynamical origin of modification or addition of potential in mechanics. A possible framework for the MOND theory and the dark matter [Cross-Listing] Using our mathematical framework developed in \cite{cresson-pierret_scale} called \emph{scale dynamics}, we propose in this paper a new way of interpreting the problem of adding or modifying potentials in mechanics and specifically in galactic dynamics. An application is done for the two-body problem with a Keplerian potential showing that the velocity of the orbiting body is constant. This would explain the observed phenomenon in the flat rotation curves of galaxies without adding \emph{dark matter} or modifying Newton's law of dynamics. 
### Dark Matter in a single-metric universe [Replacement] A few years ago Baker proposed a metric, implementing the Bona-Stela construction, which interpolates smoothly between the Schwarzschild metric at small scales and the Friedmann-Robertson-Walker (FRW) metric at large scales. As it stands, by enforcing a homogeneous isotropic stress energy tensor the predictions are incompatible with solar system data. We show that permitting small radial inhomogeneity and anisotropy avoids the problem while introducing an effective dark matter (eDM) term that can go some way to explain flattened galactic rotation curves, the growth rate of the baryonic matter density perturbation and the enhancement of the higher CMB acoustic peak anisotropies. ### The Impact of Molecular Gas on Mass Models of Nearby Galaxies We present CO velocity fields and rotation curves for a sample of nearby galaxies, based on data from the HERACLES survey. We combine our data with literature THINGS, SINGS and KINGFISH results to provide a comprehensive sample of mass models of disk galaxies inclusive of molecular gas. We compare the kinematics of the molecular (CO from HERACLES) and atomic (${\rm H{\scriptstyle I}}$ from THINGS) gas distributions to determine the extent to which CO may be used to probe the dynamics in the inner part of galaxies. In general, we find good agreement between the CO and ${\rm H{\scriptstyle I}}$ kinematics with small differences in the inner part of some galaxies. We add the contribution of the molecular gas to the mass models in our galaxies by using two different conversion factors $\mathrm{\alpha_{CO}}$ to convert CO luminosity to molecular gas mass surface density - the constant Milky Way value and the radially varying profiles determined in recent work based on THINGS, HERACLES and KINGFISH data. We study the relative effect that the addition of the molecular gas has upon the halo rotation curves for Navarro-Frenk-White (NFW) and the observationally motivated pseudo-isothermal halos. The contribution of the molecular gas varies for galaxies in our sample - for those galaxies where there is a substantial molecular gas content, using different values of $\mathrm{\alpha_{CO}}$ can result in significant differences to the relative contribution of the molecular gas and, hence, the shape of the dark matter halo rotation curves in the central regions of galaxies. ### Three-Dimensional Distribution of the ISM in the Milky Way Galaxy: III. The Total Neutral Gas Disk We present newly obtained three-dimensional gaseous maps of the Milky Way Galaxy; HI, H$_2$ and total-gas (HI plus H$_2$) maps, which were derived from the HI and $^{12}$CO($J=1$--0) survey data and rotation curves based on the kinematic distance. The HI and H$_2$ face-on maps show that the HI disk is extended to the radius of 15--20 kpc and its outskirt is asymmetric to the Galactic center, while most of the H$_2$ gas is distributed inside the solar circle. The total gas mass within radius 30 kpc amounts to $8.0\times 10^9$ M$_\odot$, 89\% and 11\% of which are HI and H$_2$, {respectively}. The vertical slices show that the outer HI disk is strongly warped and the inner HI and H$_2$ disks are corrugated. The total gas map is advantageous to trace spiral structure from the inner to outer disk. Spiral structures such as the Norma-Cygnus, the Perseus, the Sagittarius-Carina, the Scutum-Crux, and the Orion arms are more clearly traced in the total gas map than ever. 
All the spiral arms are well explained with logarithmic spiral arms with pitch angle of $11\degree$ -- $15\degree$. The molecular fraction to the total gas is high near the Galactic center and decreases with the Galactocentric distance. The molecular fraction also locally enhanced at the spiral arms compared with the inter-arm regions. ### Probing noncommutativity with astrophysical data [Replacement] It is well known that noncommutativity is commonly used in theories of grand unifications like strings or loops, however its consequences in standard astrophysics it is not well understood. For those reasons, this paper is devoted to study the astrophysical consequences of noncommutativity, focusing in stellar dynamics and rotational curves of galaxies. We start exploring stars with incompressible and polytropic fluids respectively, with the addition of a noncommutative matter. In both cases, we propose an appropriate constriction based in the difference between a traditional and an anomalous behavior. As a complement, we explore the rotation curves of galaxies assuming that the dark matter halo is a noncommutative fluid, obtaining a value of the free parameter through the analysis of twelve LSB galaxies; in this sense our results are compared with traditional models like Pseudoisothermal, Navarro-Frenk-White and Burkert. ### Kinematics of dwarf galaxies in gas-rich groups, and the survival and detectability of tidal dwarf galaxies We present DEIMOS multi-object spectroscopy (MOS) of 22 star-forming dwarf galaxies located in four gas-rich groups, including six newly-discovered dwarfs. Two of the galaxies are strong tidal dwarf galaxy (TDG) candidates based on our luminosity-metallicity relation definition. We model the rotation curves of these galaxies. Our sample shows low mass-to-light ratios (M/L=0.73$\pm0.39M_\odot/L_\odot$) as expected for young, star-forming dwarfs. One of the galaxies in our sample has an apparently strongly-falling rotation curve, reaching zero rotational velocity outside the turnover radius of $r_{turn}=1.2r_e$. This may be 1) a polar ring galaxy, with a tilted bar within a face-on disk; 2) a kinematic warp. These scenarios are indistinguishable with our current data due to limitations of slit alignment inherent to MOS-mode observations. We consider whether TDGs can be detected based on their tidal radius, beyond which tidal stripping removes kinematic tracers such as H$\alpha$ emission. When the tidal radius is less than about twice the turnover radius, the expected falling rotation curve cannot be reliably measured. This is problematic for as much as half of our sample, and indeed more generally, galaxies in groups like these. Further to this, the H$\alpha$ light that remains must be sufficiently bright to be detected; this is only the case for three (14%) galaxies in our sample. We conclude that the falling rotation curves expected of tidal dwarf galaxies are intrinsically difficult to detect. ### Rotation Curve Decomposition for Size-Mass Relations of Bulge, Disk, and Dark Halo in Spiral Galaxies Rotation curves of more than one hundred spiral galaxies were compiled from the literature, and deconvolved into bulge, disk, and dark halo using $\chi^2$ fitting in order to determine their scale radii and masses. Correlation analyses were obtained of the fitting parameters for galaxies that satisfied selection and accuracy criteria. 
Size-mass relations indicate that the sizes and masses are positively correlated among different components in such a way that the larger or more massive is the dark halo, the larger or more massive are the disk and bulge. Empirical size-mass relations were obtained for bulge, disk and dark halo by the least-squares fitting. The disk-to-halo mass ratio was found to be systematically greater by a factor of three than that predicted by cosmological simulations combined with photometry. A preliminary mass function for dark halo was obtained, which is represented by the Schechter function followed by a power law. ### The Case Against Dark Matter and Modified Gravity: Flat Rotation Curves Are a Rigorous Requirement in Rotating Self-Gravitating Newtonian Gaseous Disks By solving analytically the various types of Lane-Emden equations with rotation, we have discovered two new coupled fundamental properties of rotating, self-gravitating, gaseous disks in equilibrium: Isothermal disks must, on average, exhibit strict power-law density profiles in radius $x$ on their equatorial planes of the form $A x^{k-1}$, where $A$ and $k-1$ are the integration constants; and flat'' rotation curves precisely such as those observed in spiral galaxy disks. Polytropic disks must, on average, exhibit strict density profiles of the form $\left[\ln(A x^k)\right]^n$, where $n$ is the polytropic index; and flat'' rotation curves described by square roots of upper incomplete gamma functions. By on average,'' we mean that, irrespective of the chosen boundary conditions, the actual profiles must oscillate around and remain close to the strict mean profiles of the analytic singular equilibrium solutions. We call such singular solutions the intrinsic'' solutions of the differential equations because they are demanded by the second-order equations themselves with no regard to the Cauchy problem. The results are directly applicable to gaseous galaxy disks that have long been known to be isothermal and to protoplanetary disks during the extended isothermal and adiabatic phases of their evolution. In galactic gas dynamics, they have the potential to resolve the dark matter--modified gravity controversy in a sweeping manner, as they render both of these hypotheses unnecessary. In protoplanetary disk research, they provide observers with powerful new probing tool, as they predict a clear and simple connection between the radial density profiles and the rotation curves of self-gravitating disks in their very early (pre-Class 0 and perhaps the youngest Class Young Stellar Objects) phases of evolution. ### The Case Against Dark Matter and Modified Gravity: Flat Rotation Curves Are a Rigorous Requirement in Rotating Self-Gravitating Newtonian Gaseous Disks [Replacement] By solving analytically the various types of Lane-Emden equations with rotation, we have discovered two new coupled fundamental properties of rotating, self-gravitating, gaseous disks in equilibrium: Isothermal disks must, on average, exhibit strict power-law density profiles in radius $x$ on their equatorial planes of the form $A x^{k-1}$, where $A$ and $k-1$ are the integration constants, and "flat" rotation curves precisely such as those observed in spiral galaxy disks. Polytropic disks must, on average, exhibit strict density profiles of the form $\left[\ln(A x^k)\right]^n$, where $n$ is the polytropic index, and "flat" rotation curves described by square roots of upper incomplete gamma functions. 
By "on average," we mean that, irrespective of the chosen boundary conditions, the actual profiles must oscillate around and remain close to the strict mean profiles of the analytic singular equilibrium solutions. We call such singular solutions the "intrinsic" solutions of the differential equations because they are demanded by the second-order equations themselves with no regard to the Cauchy problem. The results are directly applicable to gaseous galaxy disks that have long been known to be isothermal and to protoplanetary disks during the extended isothermal and adiabatic phases of their evolution. In galactic gas dynamics, they have the potential to resolve the dark matter--modified gravity controversy in a sweeping manner, as they render both of these hypotheses unnecessary. In protoplanetary disk research, they provide observers with powerful new probing tool, as they predict a clear and simple connection between the radial density profiles and the rotation curves of self-gravitating disks in their very early (pre-Class 0 and perhaps the youngest Class Young Stellar Objects) phases of evolution. ### Static spherically symmetric solutions in mimetic gravity: rotation curves & wormholes [Replacement] In this work, we analyse static spherically symmetric solutions in the framework of mimetic gravity, an extension of general relativity where the conformal degree of freedom of gravity is isolated in a covariant fashion. Here we extend previous works by considering in addition a potential for the mimetic field. An appropriate choice of such potential allows for the reconstruction of a number of interesting cosmological and astrophysical scenarios. We explicitly show how to reconstruct such a potential for a general static spherically symmetric space-time. A number of applications and scenarios are then explored, among which traversable wormholes. Finally, we analytically reconstruct potentials which leads to solutions to the equations of motion featuring polynomial corrections to the Schwarzschild spacetime. Accurate choices for such corrections could provide an explanation for the inferred flat rotation curves of spiral galaxies within the mimetic gravity framework, without the need for particle dark matter.
### Combined Solar System and rotation curve constraints on MOND The Modified Newtonian Dynamics (MOND) paradigm generically predicts that the external gravitational field in which a system is embedded can produce effects on its internal dynamics. In this communication, we first show that this External Field Effect can significantly improve some galactic rotation curves fits by decreasing the predicted velocities of the external part of the rotation curves. In modified gravity versions of MOND, this External Field Effect also appears in the Solar System and leads to a very good way to constrain the transition function of the theory. A combined analysis of the galactic rotation curves and Solar System constraints (provided by the Cassini spacecraft) rules out several classes of popular MOND transition functions, but leaves others viable. Moreover, we show that LISA Pathfinder will not be able to improve the current constraints on these still viable transition functions. ### Galactic mapping with general relativity and the observed rotation curves Typically, stars in galaxies have higher velocities than predicted by Newtonian gravity in conjunction with observable galactic matter.
To account for the phenomenon, some researchers modified Newtonian gravitation; others introduced dark matter in the context of Newtonian gravity. We employed general relativity successfully to describe the galactic velocity profiles of four galaxies: NGC 2403, NGC 2903, NGC 5055 and the Milky Way. Here we map the density contours of the galaxies, achieving good concordance with observational data. In our Solar neighbourhood, we found a mass density and density fall-off fitting observational data satisfactorily. From our GR results, using the threshold density related to the observed optical zone of a galaxy, we found that the Milky Way is indicated to be considerably larger than had previously been believed. To our knowledge, this was the only such theoretical prediction ever presented. Very recent observational results by Xu et al. have confirmed our prediction. As in our previous studies, galactic masses are consistently seen to be higher than the baryonic mass determined from observations but still notably lower than those deduced from the approaches relying upon dark matter in a Newtonian context. In this work, we calculate the non-luminous fraction of matter for our sample of galaxies that is derived from applying general relativity to the dynamics of the galaxies. The evidence points to general relativity playing a key role in the explanation of the stars' high velocities in galaxies. Mapping galactic density contours directly from the dynamics opens a new window for predicting galactic structure.
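Several of the abstracts above fit observed rotation curves with a disk plus a dark halo (for example, the pseudo-isothermal halo used for the LSB galaxies). None of the papers' actual code is reproduced here; the snippet below is only a generic sketch of how such a two-component circular-velocity model is typically evaluated, using the standard Freeman expression for a thin exponential disk and a pseudo-isothermal halo. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_disk(R, Sigma0, Rd):
    """Freeman thin exponential disk: circular speed in km/s (Sigma0 in Msun/kpc^2, Rd in kpc)."""
    y = R / (2.0 * Rd)
    return np.sqrt(4 * np.pi * G * Sigma0 * Rd * y**2 *
                   (i0(y) * k0(y) - i1(y) * k1(y)))

def v_halo(R, rho0, Rc):
    """Pseudo-isothermal sphere: circular speed in km/s (rho0 in Msun/kpc^3, Rc in kpc)."""
    return np.sqrt(4 * np.pi * G * rho0 * Rc**2 *
                   (1.0 - (Rc / R) * np.arctan(R / Rc)))

def v_total(R, Sigma0, Rd, rho0, Rc):
    # the components add in quadrature
    return np.sqrt(v_disk(R, Sigma0, Rd)**2 + v_halo(R, rho0, Rc)**2)

R = np.linspace(0.5, 30.0, 60)                          # radii in kpc
v = v_total(R, Sigma0=5e8, Rd=2.5, rho0=1e7, Rc=3.0)    # illustrative parameters only
print(v[:3], v[-3:])   # rises in the inner disk, stays roughly flat at large radii
```

Fitting a model like this to data (the $\chi^2$ decomposition mentioned above) then amounts to optimizing the four parameters against the measured velocities.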
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174865484237671, "perplexity": 1140.5892019595776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827781.76/warc/CC-MAIN-20160723071027-00163-ip-10-185-27-174.ec2.internal.warc.gz"}
http://export.arxiv.org/abs/1904.09496
# Title: Optimal Load Allocation for Coded Distributed Computation in Heterogeneous Clusters Abstract: Recently, coding has been a useful technique to mitigate the effect of stragglers in distributed computing. However, coding in this context has been mainly explored under the assumption of homogeneous workers, although real-world computing clusters are often composed of heterogeneous workers that have different computing capabilities. Uniform load allocation that ignores this heterogeneity can cause a significant loss in latency. In this paper, we suggest the optimal load allocation for coded distributed computing with heterogeneous workers. Specifically, we focus on the scenario in which some workers have the same computing capability and can be regarded as a group for the analysis. We rely on the lower bound on the expected latency and obtain the optimal load allocation by showing that the proposed load allocation achieves the minimum of the lower bound for a sufficiently large number of workers. In numerical simulations assuming group heterogeneity, our load allocation reduces the expected latency by orders of magnitude over the existing load allocation scheme. Subjects: Distributed, Parallel, and Cluster Computing (cs.DC) Cite as: arXiv:1904.09496 [cs.DC] (or arXiv:1904.09496v1 [cs.DC] for this version) ## Submission history From: DaeJin Kim [v1] Sat, 20 Apr 2019 20:31:25 GMT (883kb,D)
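The abstract above does not spell out its allocation rule, so the snippet below is only a toy Monte-Carlo sketch of the general idea under assumed models: workers with different speeds, a shifted-exponential per-worker runtime, and an MDS-style completion rule in which the job finishes once enough coded rows have been returned. It merely illustrates why a speed-aware (non-uniform) load split beats a uniform one in a heterogeneous cluster; it is not the paper's scheme, and the straggling model, row counts and speeds are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def job_time(loads, speeds, straggle=0.3, k_rows=600, trials=2000):
    """Average completion time when the master needs k_rows coded rows back."""
    loads = np.asarray(loads, dtype=float)
    times = np.empty(trials)
    for t in range(trials):
        # assumed model: deterministic work term plus an exponential straggling factor
        finish = loads / speeds * (1.0 + rng.exponential(straggle, size=len(loads)))
        order = np.argsort(finish)
        done = np.cumsum(loads[order])            # coded rows collected so far
        times[t] = finish[order][np.searchsorted(done, k_rows)]
    return times.mean()

speeds = np.array([1.0] * 6 + [4.0] * 6)          # two groups: slow and fast workers
total_rows = 900                                   # coded rows distributed in total

uniform = np.full(12, total_rows / 12)
proportional = total_rows * speeds / speeds.sum()  # load proportional to worker speed

print("uniform      :", job_time(uniform, speeds))
print("proportional :", job_time(proportional, speeds))
```

Under this toy model the speed-proportional allocation finishes much earlier, because no slow worker is ever on the critical path for the last few required rows.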
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480494856834412, "perplexity": 1431.014014459156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257316.10/warc/CC-MAIN-20190523164007-20190523190007-00433.warc.gz"}
http://mathhelpforum.com/calculus/143701-parametric-centroid-problem-print.html
# Parametric Centroid Problem • May 8th 2010, 11:24 AM rcyoung3 Parametric Centroid Problem So one of the things I have to know for my Calc final is how to find the center of mass of a centroid given its parametric formulas. We learned how to find centers of mass and how to use parametric equations but never put the two together in class. So how would I go about solving a problem like this: Find the y-coordinate of the centroid of the curve given by the parametric equations: x=(3^(1/2))t^2 y=t-t^3 t= [0, 1] I can figure out most of it, I'm just stuck trying to figure out how I would find the area of the centroid Thanks :) • May 8th 2010, 05:41 PM AllanCuz Quote: Originally Posted by rcyoung3 So one of the things I have to know for my Calc final is how to find the center of mass of a centroid given its parametric formulas. We learned how to find centers of mass and how to use parametric equations but never put the two together in class. So how would I go about solving a problem like this: Find the y-coordinate of the centroid of the curve given by the parametric equations: x=(3^(1/2))t^2 y=t-t^3 t= [0, 1] I can figure out most of it, I'm just stuck trying to figure out how I would find the area of the centroid Thanks :) The basic form of C.O.M is $\bar x = \frac{ \iiint x * (density) dx }{ \iiint (density) dx }$ The same goes for y and z bar. Nothing changes in parametric form, we simply compute the integral. In this case, $\bar x = \frac { \int_0^1 3^{1/2} t^2 * density dt } {mass}$ $\bar y = \frac { \int_0^1 (t-t^3)*density dt } {mass}$ I would like to point out that you cannot find the C.O.M without knowing the mass or the density of the shape. But the above is how you calculate the required components. The numerators are of course the moments $M_{x=0}$ and $M_{y=0}$.
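For what it is worth, one common reading of this exam problem is the centroid of the curve itself (a uniform wire), in which case the arc-length element $ds$ enters and the density cancels: $\bar{y} = \int y\, ds / \int ds$. The radicand happens to be a perfect square here, which is presumably why the problem is solvable by hand. A sympy sketch of that reading (an assumption about the intended question, not necessarily what the course meant):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
x = sp.sqrt(3) * t**2
y = t - t**3

radicand = sp.expand(sp.diff(x, t)**2 + sp.diff(y, t)**2)
print(sp.factor(radicand))        # (3*t**2 + 1)**2, so ds = (3*t**2 + 1) dt on [0, 1]

ds = 3*t**2 + 1
arc_length = sp.integrate(ds, (t, 0, 1))               # 2
y_bar = sp.integrate(y * ds, (t, 0, 1)) / arc_length   # 1/4
print(arc_length, y_bar)
```

So under the uniform-wire reading the answer is $\bar{y} = 1/4$.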
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202128052711487, "perplexity": 269.33745359701607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
https://swmath.org/software/8510
# CESAR Specification and verification of concurrent systems in CESAR. The aim of this paper is to illustrate by an example, the alternating bit protocol, the use of CESAR, an interactive system for aiding the design of distributed applications. CESAR allows the progressive validation of the algorithmic description of a system of communicating sequential processes with respect to a given set of specifications. The algorithmic description is done in a high level language inspired from CSP and specifications are a set of formulas of a branching time logic, the temporal operators of which can be computed iteratively as fixed points of monotonic predicate transformers. The verification of a system consists in obtaining by automatic translation of its description program an Interpreted Petri Net representing it and evaluating each formula of the specifications.
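The phrase "temporal operators ... computed iteratively as fixed points of monotonic predicate transformers" is the standard branching-time (CTL-style) model-checking idea. The snippet below is only a generic illustration of that idea on a toy transition system (not CESAR itself, and not its input language): the set of states satisfying EF goal ("goal is reachable") is the least fixed point of $Z \mapsto \mathrm{goal} \cup \mathrm{pre}(Z)$.

```python
# Toy Kripke structure: state -> set of successor states (names are made up)
succ = {
    'idle':     {'sending'},
    'sending':  {'wait_ack', 'sending'},
    'wait_ack': {'idle', 'sending'},
    'error':    {'error'},
}

def pre(Z):
    """States with at least one successor inside Z (the EX predicate transformer)."""
    return {s for s, targets in succ.items() if targets & Z}

def EF(goal):
    """Least fixed point of Z -> goal | pre(Z): states from which goal is reachable."""
    Z = set(goal)
    while True:
        new = Z | pre(Z)
        if new == Z:
            return Z
        Z = new

print(EF({'idle'}))   # {'idle', 'sending', 'wait_ack'} -- 'error' cannot reach it
```

Because the transformer is monotonic on a finite state set, the iteration is guaranteed to terminate, which is exactly what makes the iterative evaluation mentioned in the abstract possible.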
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8435075879096985, "perplexity": 1118.1154873934884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00487.warc.gz"}
https://electronics.stackexchange.com/questions/418549/plot-antenna-radiation-pattern-in-matlab
# Plot antenna radiation pattern in Matlab I have the antenna radiation pattern exported from CST. The file is in txt format and contains the angles theta and phi and the directivity. First theta varies between 0 and 180° while phi is 0°, and all the directivity values are listed. Then phi is incremented and theta again varies between 0 and 180°, and so on, scanning phi over 360°. I would like to plot this data in Matlab and obtain the same pattern as in CST. I have tried multiple methods but unfortunately nothing works, unless I create my own antenna element in Matlab and simulate it there. However, this is not a solution because I cannot customize the antenna type as I wish. So, the question is how to create a 3d polar plot in Matlab, based on the data from CST? Thank you!
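Since no answer is recorded here: the usual approach is to reshape the three exported columns into a (phi, theta) grid, treat the (suitably offset) directivity as a radius, convert to Cartesian coordinates and draw a surface. The sketch below does this in Python/matplotlib only to keep the code examples in this document in one language; the same reshape-then-surface logic carries over to Matlab. The column order, file name and dB offset are assumptions about the CST export.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("farfield.txt")        # assumed columns: theta, phi, directivity
theta = np.deg2rad(data[:, 0])
phi = np.deg2rad(data[:, 1])
r = data[:, 2]

n_theta = np.unique(data[:, 0]).size     # samples per phi cut (theta = 0..180)
n_phi = np.unique(data[:, 1]).size       # number of phi cuts (phi = 0..360)

# phi is the slowly varying index in the export described above
TH = theta.reshape(n_phi, n_theta)
PH = phi.reshape(n_phi, n_theta)
R = r.reshape(n_phi, n_theta)
R = R - R.min()                          # crude shift so dB values give a non-negative radius

X = R * np.sin(TH) * np.cos(PH)
Y = R * np.sin(TH) * np.sin(PH)
Z = R * np.cos(TH)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis", linewidth=0)
plt.show()
```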
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8077128529548645, "perplexity": 626.0119575242069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998923.96/warc/CC-MAIN-20190619063711-20190619085711-00172.warc.gz"}
https://www.physicsforums.com/threads/electron-as-a-wave-some-doubts.532676/
# Homework Help: Electron as a wave - some doubts 1. Sep 22, 2011 ### logearav electron as a wave -- some doubts 1. The problem statement, all variables and given/known data While dealing with de Broglie's idea, my book mentions these points 1) The electron is a wave whose wavelength decreases with energy 2) If you try to squash an electron wave closer to the nucleus, the wavelength must get smaller 3) When its wavelength is as small as a nucleus, its energy becomes so great that the attractive force of the nucleus isn't big enough to keep it there 2. Relevant equations 3. The attempt at a solution From the first point, I infer that the wavelength decreases if the frequency increases. But I can't understand the rest of the points. Members can help in this regard. 2. Sep 23, 2011 ### Spinnor 3. Sep 27, 2011 ### logearav Re: electron as a wave -- some doubts Thanks Spinnor.
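Since the thread never records a worked answer, here is a rough numerical illustration of point 3 (a sketch, using the relativistic relation $E^2=(pc)^2+(mc^2)^2$ with $p=h/\lambda$): squeezing an electron's wavelength down to nuclear size (a few fm) forces its kinetic energy up to the GeV scale, while the Coulomb attraction at that distance is only of order an MeV, so the nucleus cannot hold it.

```python
import math

hc = 1239.84      # h*c in MeV*fm
me_c2 = 0.511     # electron rest energy in MeV
coulomb = 1.44    # e^2/(4*pi*eps0) in MeV*fm

def kinetic_energy_MeV(wavelength_fm):
    """Relativistic kinetic energy of an electron with the given de Broglie wavelength."""
    pc = hc / wavelength_fm
    return math.sqrt(pc**2 + me_c2**2) - me_c2

for lam in (1e5, 1e3, 10.0, 1.0):          # from 0.1 nm (atomic) down to 1 fm (nuclear)
    print(f"lambda = {lam:8.1f} fm  ->  KE ~ {kinetic_energy_MeV(lam):10.4f} MeV")

# Coulomb potential energy at ~1 fm, for comparison:
print("Coulomb energy at 1 fm ~", coulomb / 1.0, "MeV")   # ~1.4 MeV << the GeV-scale KE
```

The first line of output (~150 eV at atomic size) versus the last (~1.2 GeV at 1 fm) is the quantitative content of the book's third point.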
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732025623321533, "perplexity": 1528.0335641631264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741491.47/warc/CC-MAIN-20181113194622-20181113220622-00344.warc.gz"}
https://www.physicsforums.com/threads/double-slit-problem-help.128516/
Double Slit Problem Help 1. Aug 9, 2006 MD2000 In a Young's double-slit experiment, the seventh dark fringe is located 0.023 m to the side of the central bright fringe on a flat screen, which is 1.1 m away from the slits. The separation between the slits is 1.5 × 10^-4 m. What is the wavelength of the light being used? I'm having some trouble understanding what exactly the equation is saying. I know I have to use sin theta = mlambda/d for the bright fringe and then sin theta = (m+1/2)lambda/d for the dark..but I'm not sure how to get theta..I figure I need to use the two initial values that I'm given but I'm not sure what those values exactly mean..any help? 2. Aug 9, 2006 Andrew Mason Use: $$\lambda = \frac{d\sin\theta}{(m+1/2)}$$ where $\sin\theta = .023/1.1$. AM 3. Aug 10, 2006 sdekivit draw yourself a triangle and note that $$\sin \theta = \frac {d_{minimum-central\ fringe}} {d_{slit-screen}}$$
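Putting numbers into Andrew Mason's formula (a quick check, taking m = 6 for the seventh dark fringe in the (m+1/2) convention and the same small-angle approximation used in the thread):

```python
d = 1.5e-4            # slit separation in m
L = 1.1               # slit-to-screen distance in m
y = 0.023             # position of the 7th dark fringe in m
m = 6                 # 7th dark fringe: m = 0 gives the first dark fringe in this convention

sin_theta = y / L     # small-angle approximation, as in the thread
wavelength = d * sin_theta / (m + 0.5)
print(wavelength)     # ~4.8e-7 m, i.e. roughly 480 nm (blue-green light)
```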
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9323520064353943, "perplexity": 685.2493936775898}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00152-ip-10-171-6-4.ec2.internal.warc.gz"}
http://alekskleyn.blogspot.com/
## Wednesday, May 15, 2013 ### Roots of Noncommutative Polynomial When we study Riemannian geometry, there is one interesting object there: the geodesic line. It has two different definitions which ultimately lead to the same object. A geodesic is a line of extremal length. A geodesic is a line whose tangent vector is parallel transported along it. The reason for this identity is the relation between the metric tensor and the connection. Namely, the covariant derivative of the metric is 0. As soon as this relation is broken, as in a metric-affine manifold, we have two different lines. I expect a similar phenomenon in the algebra of polynomials. When we consider a polynomial over a commutative ring (usually this is a field), we have two definitions of a root. $$x=x_0$$ is a root of the polynomial $$p(x)$$ if $$p(x_0)=0$$. $$x=x_0$$ is a root of the polynomial $$p(x)$$ if $$x-x_0$$ is a divisor of the polynomial $$p(x)$$. However, do these definitions lead to the same set when I consider a polynomial over a noncommutative algebra? The case of the polynomial over the quaternion algebra $p(x)=(x-i)(x-j)+(x-j) (x-k)$ is relatively simple. Clearly, $$p(j)=0$$. This polynomial can be presented in the form $p(x)=((x-i)\otimes 1+1\otimes (x-k))\circ(x-j)$ However, this does not mean that the polynomial $$x-j$$ is a divisor of the polynomial $$p(x)$$. By definition, the polynomial $$x-j$$ is a divisor of the polynomial $$p(x)$$ if we can write $p(x)=q(x)(x-j)r(x)=(q(x)\otimes r(x))\circ (x-j)$ Here either $$q(x)$$ has degree $$1$$ and $$r(x)$$ is a scalar, or $$q(x)$$ is a scalar and $$r(x)$$ has degree $$1$$. But I do not think that there exist polynomials $$q(x)$$ and $$r(x)$$ such that $(x-i)\otimes 1+1\otimes (x-k)=q(x)\otimes r(x)$ From this example I see that if the polynomial $$x-j$$ is a divisor of the polynomial $$p(x)$$, then $$x=j$$ is a root of the polynomial $$p(x)$$. However, the opposite statement can be wrong. Now I want to consider the polynomial $\begin{split} p(x)&=(x-k)(x-i) (x-j) +(x-i) (x-j) (x-k) \\ &+(x-k) (x-j)+(x-j) (x-k) \end{split}$ I see that $$p(j)=0$$ as well as $$p(k)=0$$. However, now I see the problem. Namely, $\begin{split} p(x)&=((x-k)(x-i)\otimes 1+ (x-i) \otimes (x-k) \\ &+(x-k)\otimes 1+1\otimes (x-k))\circ(x-j) \end{split}$ $\begin{split} p(x)=&(1\otimes(x-i) (x-j) +(x-i) (x-j)\otimes 1 \\ &+1\otimes (x-j)+(x-j)\otimes 1)\circ (x-k) \end{split}$ I can do something more with these equations. For instance $\begin{split} p(x)&=(1\otimes(((x-i)\otimes 1)\circ (x-j)) +(x-i) (x-j)\otimes 1 \\ &+1\otimes (x-j)+(x-j)\otimes 1)\circ (x-k) \end{split}$ ## Saturday, August 18, 2012 ### Research is over. Long live research Recently CERN reported the discovery of the Higgs boson. Since the Higgs boson is responsible for inertial mass, this discovery raises the question of how the equality of inertial and gravitational masses works. In the report there was a note that there was also a track of a particle of spin 2. Is it possible that the graviton was also discovered at CERN? Is it possible that the interaction of the graviton and the Higgs boson was discovered at CERN as well? If such an interaction exists, then it may be possible that under certain conditions this interaction can be broken. The question relates to the interface of general relativity and quantum mechanics. So string theory and loop gravity should answer these questions. Even though these theories deal with extreme events and are hard to test, I believe that in the near future we will see the possibility of testing them.
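Going back to the post on roots of noncommutative polynomials above: the root claims there are easy to check numerically. The sketch below hand-rolls the Hamilton product (quaternions stored as 4-tuples $(a,b,c,d)=a+bi+cj+dk$) and evaluates $p(x)=(x-i)(x-j)+(x-j)(x-k)$ at a few points; it is only a verification aid and says nothing about the divisor question discussed in the post.

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qadd(p, q): return tuple(x + y for x, y in zip(p, q))
def qsub(p, q): return tuple(x - y for x, y in zip(p, q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

def p1(x):
    # p(x) = (x - i)(x - j) + (x - j)(x - k)
    return qadd(qmul(qsub(x, i), qsub(x, j)), qmul(qsub(x, j), qsub(x, k)))

print(p1(j))   # (0, 0, 0, 0): j is a root
print(p1(i))   # (-1, 1, 1, 1): i is not a root
print(p1(k))   # nonzero as well
```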
## Friday, June 15, 2012 ### System of Linear Equations I published my new book Linear Algebra over Division Ring: System of Linear Equations. You can find this book on amazon.com and in the CreateSpace store. ## Friday, July 8, 2011 ### Linear mapping of quaternion algebra A linear automorphism of the quaternion algebra is a mapping $$f:H\rightarrow H$$, linear over the real field, such that $$f(ab)=f(a)f(b)$$ Evidently, the identity mapping $$E(x)=x$$ is a linear automorphism. In the quaternion algebra there are nontrivial linear automorphisms, for instance $$E_1(x)=x^0 +x^2i +x^3j +x^1k$$ and $$E_3(x)=x^0 +x^2i +x^1j -x^3k$$, where $$x=x^0 +x^1i +x^2j +x^3k$$. Similarly, the mapping $$I(x)=x^0 -x^1i -x^2j -x^3k$$ is an antilinear automorphism because $$I(ab)=I(b)I(a)$$ In the paper eprint arXiv:1107.1139, Linear Mappings of Quaternion Algebra, I proved the following statement. For any function $f$ linear over the real field there is a unique expansion $$f(x)=a_0E(x)+a_1E_1(x)+a_2E_2(x)+a_3I(x)$$ ## Friday, January 21, 2011 ### Representation of Universal Algebra I published my new book: Representation Theory: Representation of Universal Algebra. In this book I consider morphisms of representations, the concept of a generating set and a basis of a representation. This allows me to consider the basis manifold of a representation, active and passive transformations, and the concept of a geometrical object in a representation of a universal algebra. In a similar way I consider a tower of representations. ## Sunday, September 26, 2010 ### My first published book This year I published my first book Linear Mappings of Free Algebra: First Steps in Noncommutative Linear Algebra. The book was published by Lambert Academic Publishing. ## Tuesday, March 9, 2010 Recently, playing with complex numbers, I discovered that the transformation given by the matrix $$\left( \begin{array}{cc} a_0 & -a_1 \\ a_1 & a_0 \end{array} \right)$$ is equivalent to multiplication by the complex number $z=a_0+a_1i$. The Cauchy-Riemann equations follow from this matrix. Initially I assumed a similar transformation for quaternions. However, the corresponding matrix for a quaternion looks too restrictive. I assumed a less restrictive condition for the derivative of a quaternion function: $$\frac {\partial y^0} {\partial x^0} = \frac{\partial y^1}{\partial x^1} = \frac{\partial y^2}{\partial x^2} = \frac{\partial y^3}{\partial x^3}$$ $\frac{\partial y^i}{\partial x^j} =- \frac{\partial y^j}{\partial x^i} \quad i\ne j$ I see that functions like $y=ax$, $y=xa$, $y=x^2$ satisfy these equations. At the same time, conjugation does not satisfy them. A lot of questions need to be answered to understand whether this is the set of functions that generalizes the set of complex functions to the case of quaternions. You can find details in my paper eprint arXiv:0909.0855 Quaternion Rhapsody, 2010
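The claim just above — that $y=x^2$ satisfies the two derivative conditions while conjugation does not — is easy to verify symbolically. Below is a small sympy check (again only a verification sketch): the components of $y=x^2$ are written out from the Hamilton product, and the diagonal and off-diagonal conditions both reduce to zero.

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)
X = [x0, x1, x2, x3]

# components of y = x**2 for x = x0 + x1*i + x2*j + x3*k
Y = [x0**2 - x1**2 - x2**2 - x3**2, 2*x0*x1, 2*x0*x2, 2*x0*x3]

diagonal = {sp.simplify(sp.diff(Y[n], X[n]) - sp.diff(Y[0], X[0])) for n in range(4)}
off_diagonal = {sp.simplify(sp.diff(Y[a], X[b]) + sp.diff(Y[b], X[a]))
                for a in range(4) for b in range(4) if a != b}

print(diagonal)       # {0}: all four diagonal partial derivatives agree
print(off_diagonal)   # {0}: dy^a/dx^b = -dy^b/dx^a for a != b

# conjugation I(x) = x0 - x1*i - x2*j - x3*k fails the diagonal condition:
Yc = [x0, -x1, -x2, -x3]
print({sp.diff(Yc[n], X[n]) for n in range(4)})   # {1, -1}
```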
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9494214057922363, "perplexity": 1183.2176262837172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774894.154/warc/CC-MAIN-20141217075254-00082-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?t=278717
# Moment of inertia of a triangle Tags: inertia, moment, triangle P: 15 1. The problem statement, all variables and given/known data Calculate the moment of inertia of a uniform triangular lamina of mass m in the shape of an isosceles triangle with base 2b and height h, about its axis of symmetry. 3. The attempt at a solution I've tried various things for this and never get the correct answer, 1/2*m*b^2. I'm beginning to think this may involve a double integral. Thanks. HW Helper P: 5,346 I don't think ½mb² is the right result. Here is a similar example I did earlier: http://www.physicsforums.com/showthread.php?t=278184 In this case I think you would attack the sum of the x²*dm by observing that you can construct m in terms of x as something like h*(1-x/b) so that you arrive at an integral over an expression something like (hx² -x³/b)*dx. At the end you will be able to note that the area of the lamina triangle times the implied density ρ yields you the total mass M in the product that defines your moment. P: 619 I have coded this problem as a double integral in Maple. > x(y):=b*(1-y/h); > rho:=M/(b*h); > dJ:=int(rho*z^2,z=0..x(y)); > J:=2*int(dJ,y=0..h); In the first line, the right boundary is defined. In the second line, the mass density is expressed. In the third line, the integration in the x-direction is performed from the axis of symmetry to the right edge. In the fourth line, the integration is performed in the y-direction from bottom to top. The result is M*b^2/6. It is reasonable that h should not be in the result. The altitude should not affect this function, only the base width, which describes how far the mass is distributed off the axis of rotation. HW Helper P: 5,346 ## Moment of inertia of a triangle Happily, algebraic methods arrive at the same result. P: 15 I did actually mean to put mb^2 / 6 in my first post. Thanks for the replies. Last night I managed to get it myself as well after spotting errors in my work. Thanks. P: 10 How about the product of inertia for this problem?
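The Maple double integral above is easy to reproduce in sympy as a cross-check (same integrand and limits, so necessarily the same result, M b²/6):

```python
import sympy as sp

y, z, b, h, M = sp.symbols('y z b h M', positive=True)

rho = M / (b * h)                 # surface density of the lamina (half-base b, height h)
x_of_y = b * (1 - y / h)          # right boundary of the triangle at height y

dJ = sp.integrate(rho * z**2, (z, 0, x_of_y))   # integrate across the right half
J = 2 * sp.integrate(dJ, (y, 0, h))             # double it for the symmetric left half

print(sp.simplify(J))             # M*b**2/6
```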
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343382120132446, "perplexity": 454.58062249954673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/force-between-2-parallel-magnetic-dipole-moments.894367/
# Homework Help: Force between 2 parallel magnetic dipole moments 1. Nov 22, 2016 ### 1v1Dota2RightMeow 1. The problem statement, all variables and given/known data Find the force of attraction between 2 magnetic dipoles a distance r apart. Both dipoles point to the right. 2. Relevant equations 3. The attempt at a solution All I need help with is figuring out how to determine if the force is attractive or repulsive between the 2 dipole moments. From the question, it seems as though I can conclude that 2 magnetic dipoles pointing in the same direction attract each other. But I need a more fundamental way to figure this out. If I'm given 2 dipoles a distance r apart (where r is not huge) and with some orientation (relative to each other), how do I determine whether there is an attractive force or a repulsive force? 2. Nov 22, 2016 Hello again. $U=- m \cdot B$ where $B$ is the field from the other dipole (magnetic moment), and $F=- \nabla U$. (One thing that isn't completely clear from the statement of the problem: presumably the dipoles are pointing along the x-axis and are a distance r apart on the x-axis.) The magnetic field from both magnetic moments points from left to right (surrounding the magnetic moment), and both magnetic moments will thereby be aligned with the field from the other magnetic moment, making the energy negative for each. The energy becomes even more negative if the dipoles get closer together, because the field each feels from the other dipole will be stronger. The system will tend to go to the state of lower energy; thereby the force is attractive. (It should be noted that the reason $U=-m \cdot B$ (with a $\cos(\theta)$) is because the torque $\tau=m \times B$ (with a $\sin(\theta)$) and $U=\int \tau \, d \theta$.)
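Following the hint above ($U=-m\cdot B$, $F=-\nabla U$) for the presumed geometry — both moments on the x-axis and pointing along it — the on-axis dipole field gives the force in closed form. A short sympy sketch (the $2m_1/(4\pi r^3)$ on-axis field is the standard result; the negative sign of $F$ means the force points toward smaller $r$, i.e. attraction):

```python
import sympy as sp

mu0, m1, m2, r = sp.symbols('mu0 m1 m2 r', positive=True)

B = mu0 * 2 * m1 / (4 * sp.pi * r**3)   # on-axis field of dipole 1 at distance r
U = -m2 * B                             # moments parallel to the field, so U = -m2*B
F = -sp.diff(U, r)                      # radial force component

print(sp.simplify(F))                   # -3*mu0*m1*m2/(2*pi*r**4): negative, i.e. attractive
```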
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8911844491958618, "perplexity": 287.79379586562374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596542.97/warc/CC-MAIN-20180723145409-20180723165409-00504.warc.gz"}
https://thiscondensedlife.wordpress.com/category/statistical-mechanics/
# Category Archives: Statistical Mechanics ## Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel like it has been somewhat of an injustice that I have yet to commit a post to this topic. A specialized form of the theorem was first formulated by Einstein in a paper about Brownian motion in 1905. It was then extended to electrical circuits by Nyquist and then generalized by several authors including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is a particularly superlative paper not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property). In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation, using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies energy dissipated. I will also avoid the use of Green’s functions in these posts, which for some reason often tend to get thrown in when teaching linear response theory, but are absolutely unnecessary to understand the basic concepts. Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance): $\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}$ One way to solve this equation is to assume that the displacement, $x(t)$, responds linearly to the applied force, $F(t)$ in the following way: $x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'$ Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as: $\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)$ and one can solve this equation (as done in a previous post) to give: $\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2 )^{-1}$ It is useful to think about the response function, $\chi$, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation: $\chi(t-t') = \delta x(t)/\delta F(t')$ Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance. Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator.
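As a quick numerical aside before the dissipation calculation, the response function $\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2)^{-1}$ derived above can be evaluated directly (a sketch with arbitrary parameters, not taken from the post): at resonance the displacement lags the drive by $\pi/2$, so the velocity is in phase with the force and absorption is maximal, which is the phase-lag picture invoked below.

```python
import numpy as np

w0, b = 1.0, 0.2                            # natural frequency and damping, arbitrary units
w = np.linspace(0.01, 3.0, 3000)
chi = 1.0 / (w0**2 - w**2 + 1j * b * w)     # response function from the post

phase = np.angle(chi)
at_res = np.argmin(np.abs(w - w0))
print(phase[at_res])                        # ~ -pi/2: displacement lags the drive by 90 degrees
print(w[np.argmax(np.abs(chi.imag))])       # ~ 1.0: |Im(chi)| peaks essentially at resonance
```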
In this system the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, and we can write this in the Fourier domain as follows: $\Delta E \sim \int \dot{x}F(t) dt = \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{i(\omega+\omega')t}$ One can write this more simply as: $\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2$ Noticing that the energy dissipated has to be a real function, and that $|\hat{F}(\omega)|^2$ is also a real function, it turns out that only the imaginary part of the response function can contribute to the dissipated energy so that we can write: $\Delta E \sim \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2$ Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. It turns out that only the imaginary part of the response function is related to energy dissipation. Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a $\pi/2$ phase lag. The real part, on the other hand, is in phase with the driving force and does not possess a phase lag (i.e. $\chi = \chi' +i \chi'' = \chi' +e^{i\pi/2}\chi''$). One can see from the plot below that damping (i.e. dissipation) is quantified by a $\pi/2$ phase lag. [Figure: damping is usually associated with a 90-degree phase lag.] Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations! ## Consistency in the Hierarchy When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other people’s opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them. Condensed matter physicists of the last couple generations have grown up ingrained with the idea that “More is Different”, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas tend to concentrate on the notion that there is a hierarchy of disciplines where each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start out at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena which distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, that one wouldn’t necessarily predict by just studying particle physics. While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather (almost trivial) fact is infrequently mentioned when these concepts are discussed. That is the role of consistency. While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).
This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm). In the case of the Bohr-van Leeuwen theorem, it shows that classical mechanics, when applied to the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into consideration particle indistinguishability (a quantum mechanical concept), is inconsistent with the idea the entropy must remain the same when dividing a gas tank into two equal partitions. Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out in the present time. It is imperative to know why it is possible for a C60 molecule (or a 10,000 amu molecule) to be described with a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, though there are some (but no consensus) frameworks, like decoherence, to get around this. I also can’t help but mention that non-locality, à la Bell, also seems totally at odds with one’s intuition on the macro-scale. What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them. ## Bohr-van Leeuwen Theorem and Micro/Macro Disconnect A couple weeks ago, I wrote a post about the Gibbs paradox and how it represented a case where, if particle indistinguishability was not taken into account, led to some bizarre consequences on the macroscopic scale. In particular, it suggested that entropy should increase when partitioning a monatomic gas into two volumes. This paradox therefore contained within it the seeds of quantum mechanics (through particle indistinguishability), unbeknownst to Gibbs and his contemporaries. Another historic case where a logical disconnect between the micro- and macroscale arose was in the context of the Bohr-van Leeuwen theorem. Colloquially, the theorem says that magnetism of any form (ferro-, dia-, paramagnetism, etc.) cannot exist within the realm of classical mechanics in equilibrium. It is quite easy to prove actually, so I’ll quickly sketch the main ideas. Firstly, the Hamiltonian with any electromagnetic field can be written in the form: $H = \sum_i \frac{1}{2m_i}(\textbf{p}_i - e\textbf{A}_i)^2 + U_i(\textbf{r}_i)$ Now, because the classical partition function is of the form: $Z \propto \int_{-\infty}^\infty d^3\textbf{r}_1...d^3\textbf{r}_N\int_{-\infty}^\infty d^3\textbf{p}_1...d^3\textbf{p}_N e^{-\beta\sum_i \frac{1}{2m_i}(\textbf{p}_i - e\textbf{A}_i)^2 + U_i(\textbf{r}_i)}$ we can just make the substitution: $\textbf{p}'_i = \textbf{p}_i - e\textbf{A}_i$ without having to change the limits of the integral. Therefore, with this substitution, the partition function ends up looking like one without the presence of the vector potential (i.e. 
the partition function is independent of the vector potential and therefore cannot exhibit any magnetism!). This theorem suggests, like in the Gibbs paradox case, that there is a logical inconsistency when one tries to apply macroscale physics (classical mechanics) to the microscale and attempts to build up from there (by applying statistical mechanics). The impressive thing about this kind of reasoning is that it requires little experimental input but nonetheless exhibits far-reaching consequences regarding a prevailing paradigm (in this case, classical mechanics). Since the quantum mechanical revolution, it seems like we have the opposite problem, however. Quantum mechanics resolves both the Gibbs paradox and the Bohr-van Leeuwen theorem, but presents us with issues when we try to apply the microscale ideas to the macroscale! What I mean is that while quantum mechanics is the rule of law on the microscale, we arrive at problems like the Schrodinger cat when we try to apply such reasoning on the macroscale. Furthermore, Bell’s theorem seems to disappear when we look at the world on the macroscale. One wonders whether such ideas, similar to the Gibbs paradox and the Bohr-van Leeuwen theorem, are subtle precursors suggesting where the limits of quantum mechanics may actually lie. ## An Interesting Research Avenue, an Update, and a Joke An Interesting Research Avenue: A couple months ago, Stephane Mangin of the Insitut Jean Lamour gave a talk on all-optical helicity-dependent magnetic switching (what a mouthful!) at Argonne, which was fascinating. I was reminded of the talk yesterday when a review article on the topic appeared on the arXiv. The basic phenomenon is that in certain materials, one is able to send in a femtosecond laser pulse onto a magnetic material and switch the direction of magnetization using circularly polarized light. This effect is reversible (in the sense that circularly polarized light in the opposite direction will result in a magnetization in the opposite direction) and is reproducible. During the talk, Mangin was able to show us some remarkable videos of the phenomenon, which unfortunately, I wasn’t able to find online. The initial study that sparked a lot of this work was this paper by Beaurepaire et al., which showed ultrafast demagnetization in nickel films in 1996, a whole 20 years ago! The more recent study that triggered most of the current work was this paper by Stanciu et al. in which it was shown that the magnetization direction could be switched with a circularly polarized 40-femtosecond laser pulse on ferromagnetic film alloys of GdFeCo. For a while, it was thought that this effect was specific to the GdFeCo material class, but it has since been shown that all-optical helicity-dependent magnetic switching is actually a more general phenomenon and has been observed now in many materials (see this paper by Mangin and co-workers for example). It will be interesting to see how this research plays out with respect to the magnetic storage industry. The ability to read and write on the femtosecond to picosecond timescale is definitely something to watch out for. Update: After my post on the Gibbs paradox last week, a few readers pointed out that there exists some controversy over the textbook explanation that I presented. I am grateful that they provided links to some articles discussing the subtleties involved in the paradox. Although one commenter suggested Appendix D of E. Atlee Jackson’s textbook, I was not able to get a hold of this. 
It looks like a promising textbook, so I may end up just buying it, however! The links that I found helpful about the Gibbs paradox were Jaynes’ article (pdf!) and this article by R. Swendsen. In particular, I found Jaynes’ discussion of Whifnium and Whoofnium interesting in the role that ignorance and knowledge plays our ability to extract work from a partitioned gases. Swendsen’s tries to redefine entropy classically (what he calls Boltzmann’s definition of entropy), which I have to think about a little more. But at the moment, I don’t think I buy his argument that this resolves the Gibbs paradox completely. A Joke: Q: What did Mrs. Cow say to Mr. Cow? A: Hubby, could you please mooo the lawn? Q: What did Mr. Cow say back to Mrs. Cow? A: But, sweetheart, then what am I going to eat? Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when “an increasing number of epicycles” arise, resulting in the untenability of a prevailing theory. Just in case you aren’t familiar, the “epicycles” are a reference to the Ptolemaic world-view with the earth at the center of the universe. To explain the trajectories of the other planets, Ptolemaic theory required that the planets circulate the earth in complicated trajectories called epicycles. These convoluted epicycles were no longer needed once the Copernican revolution took place, and it was realized that our solar system was heliocentric. This post is specifically about the Gibbs paradox, which provided one of the first examples of an “epicycle” in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, which are all seemingly related, but don’t quite all tell the same story. So instead of following Gibbs’ original arguments, I’ll just go by the version which is the easiest (in my mind) to follow. Imagine a large box that is partitioned in two, with volume V on either side, filled with helium gas of the same pressure, temperature, etc. and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is $S + S =2S$. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase? Now, from our perspective, this might seems like an almost silly question, but Gibbs had asked himself this question in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be “tagged” by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by $2Nk\textrm{ln}2$. This is totally at odds with one’s intuition (if one has any intuition when it comes to entropy!) and the extensive nature of entropy (that entropy scales with the system size). Since the size of the larger container of volume $2V$ containing identical gases (i.e. same pressure and temperature) does not change when removing the partition, neither should the entropy. And most damningly, if one were to place the partition back where it was before, one would naively think that the entropy would return to $2S$, suggesting that the entropy decreased when returning the partition. The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem. 
Little did he know that the seeds giving rise to this seemingly benign problem required the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that nowhere in the Gibbs paradox does it suggest what the next theory will look like – it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.
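The counting fix can be made quantitative with Stirling's approximation. In the sketch below (entropy in units of $k$, with the temperature-dependent part lumped into a constant $c$ since both halves hold the same gas at the same temperature), the "distinguishable" expression picks up the spurious $2N\ln 2$ on removing the partition, while the $1/N!$-corrected, Sackur–Tetrode-like form gives exactly zero:

```python
import sympy as sp

N, V, c = sp.symbols('N V c', positive=True)

S_distinguishable = N * sp.log(V) + c * N            # classical counting, no 1/N!
S_indistinguishable = N * sp.log(V / N) + c * N + N  # Stirling-corrected (Sackur-Tetrode form)

def mixing_entropy(S):
    """S(2N, 2V) - 2*S(N, V): entropy change on removing the partition, in units of k."""
    dS = S.subs({N: 2*N, V: 2*V}, simultaneous=True) - 2*S
    return sp.simplify(sp.expand_log(dS))

print(mixing_entropy(S_distinguishable))    # 2*N*log(2): the paradoxical increase
print(mixing_entropy(S_indistinguishable))  # 0: extensivity restored
```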
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.880476713180542, "perplexity": 514.2189965748038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805687.20/warc/CC-MAIN-20171119153219-20171119173219-00711.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=9710&L=LATEX-L&D=0&H=A&S=a&P=198492
## [email protected] Options: Use Forum View Use Monospaced Font Show Text Part by Default Condense Mail Headers Message: [<< First] [< Prev] [Next >] [Last >>] Topic: [<< First] [< Prev] [Next >] [Last >>] Author: [<< First] [< Prev] [Next >] [Last >>] ```Hallo! > > >> Is it widely used? > > > > Pass! There was an amazingly high hit rate on the web reference site > > when V1.1 was first announced, but as we have received no bug reports > > since then we cannot be sure whether e-TeX is bug-free or unused! > > i fear the answer is "mainly unused" and the reason is simply that > that for most people there is no use for it (right now) as they are > not programmers but users. I think, there are not many users right *now*. *But* there is the TeX-Live-CD version2, on which will be an e-TeX (or the announcement is a lie...) and the new version of teTeX (0.9) will have an e-TeX. There are quite a large number of people following *and* updating with this distribution. So I expect the number of people actually having e-TeX will be significantly rising in the *near future*. We can't help people only using LaTeX209 nowadays, so I don't care if they would or will switch... [support of e-TeX by L2e-team] > depends on what is the meaning of "support" here. > > if it means can one use LaTeX with e-tex and use any of the features > then surely one can. if it means does LaTeX use any features of e-tex > then surely no. But then, why did *you* (the L2e-team) ask for 256 \mark-register, not only 16? > it is very simple, if we would use any feature right now then 99.9% of > all LaTeX users would suddenly find that they can't use LaTeX any more > and would be forced to upgrade. So, yes: LaTeX must be compatible with 'normal TeX' for some time. Sure... > but they would not upgrade as there is no compelling reason for them > to do so as we can't produce any functionality right now that would be > considered by the majority of users a good reason to switch to the new > system. what we can do is produce better and simpler code as some > things do work much nicer with e-tex functionality but this is nothing > a user cares if the result is the same (or mostly the same on his/her > level). Oh, I don't know, if I should agree here. There are some users, who would be happy about more stability and less bugs in L2e. It may be a small number, but there *are* users who complain to me that one bug in the L2e database is suspended and not fixed. I don't see that this bug could be easily fixed and since there is a workaround, I told them: Be happy with that, the L2e-team has more important stuff to do and besides, you haven't paid for it, so don't complain about unfixed bugs... > so it doesn't make sense for us to switch 2e onto e-tex and if our > core is on tex then we can't do development that could be released as > packages using e-tex either. if we would do this then we would work > for nearly nobody and for a long time none of our developments would > be tested or used. That's silly: 'We don't waste our times for e-TeX-L2e-packages. So there *is* no reason to switch. So noone switches. So we don't waste our times for e-TeX-L2e-packages.' Sure, there must be two L2e-codes: the compatibility-code and the more stable e-TeX-code for features available in both versions... And there can be packages only working with e-TeX, because it is too difficult (or impossible) in normal TeX (this mainly for future versions of e-TeX). 
> in my opinion a combination of etex and omega (and pdf support) > however could be the answer at least it seems to me a very good case. Ok, I will tell you the problem with Omega: Mr. Plaice told it: He doesn't care for compatibility with TeX (at least, he didn't at EuroTeX 95). So, that's a reason, why I never would *switch* to Omega. But I will *switch* to e-TeX as soon as the TeX-Live-CD arrives here (will be next week). > Phil has asked what features i miss that omega has. i'm not sure that > this was a serious question (you should know what your competing > successor is capable of, shouldn't you? :-) but in any case here are Successor? For a programm, which does not care for compatibility??? Ok, I'm not a member of the NTS group, but I think, some ideas
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884973526000977, "perplexity": 4521.062475104075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572581.94/warc/CC-MAIN-20220816211628-20220817001628-00428.warc.gz"}
http://mathoverflow.net/revisions/107492/list
3 New answer to edited question. [This is now an answer to the edited question(s), with some details added. My answer to the original question is kept at the very end.]

Firstly: The question is a good one, and it is not easy to find references on this. I had spent too much time pondering about the failure of the double dual argument (see below) before I finally heard the argument given in the last section below, indirectly from Fantechi, via Faber. Assume $X$ is smooth projective.

Definition: An $S$-valued point in $M_I(X)$ is an $S$-flat coherent sheaf on $S\times X$, with stable fibres of rank one, and with determinant line bundle isomorphic to $\mathcal{O}_{S\times X}$, modulo isomorphism. (I do not know if this is what Bridgeland meant, but to me this is reasonably standard.)

Comment: Stability for rank one means torsion free.

Existence: Let $M(X)$ be the (Simpson) moduli space for stable rank one sheaves. Then $M_I(X)$ is the fibre over $\mathcal{O}_X$ for the determinant map $M(X) \to \mathrm{Pic}(X)$. This map sends a sheaf $I$ (stable rank one fibres) on $S\times X$ to the determinant line bundle $\det(I)$, and it is trivial as a point in $\mathrm{Pic}(X)$ if it is of the form $p^*L$ with $L\in\mathrm{Pic}(S)$. Then $I\otimes p^*L^{-1}$ is equivalent to $I$ in $M(X)(S)$, and it has trivial determinant. This shows that $M_I(X)$ indeed is a fibre of the determinant map. Of course the determinant of an ideal $I_Y\subset \mathcal{O}_X$ is nontrivial if $Y$ is a non principal divisor, so you cannot map such ideals to $M_I(X)$. In any case, the ideal of a divisor, without the embedding, would only remember the linear equivalence class. For brevity, let $\mathrm{Hilb}(X)$ be the part of the Hilbert scheme parametrizing subschemes $Y\subset X$ of codimension at least $2$. Then there is a natural map $F: \mathrm{Hilb}(X) \to M(X)$ that sends an ideal $I_Y\subset\mathcal{O}_{S\times X}$ to $I_Y$, forgetting the embedding. Since $Y$ is flat, so is $I_Y$, and its fibres are torsion free (by flatness again) of rank one. By the codimension assumption, the determinant of $I$ is trivial.

Theorem: $F$ is an isomorphism.

Comment: In the literature one sometimes finds the argument that if $I$ is a rank one torsion free sheaf with trivial determinant, then $I$ embeds into its double dual, which coincides with its determinant $\mathcal{O}_X$. This establishes bijectivity on points. (For Hilbert schemes of points on surfaces this is enough to conclude, since you can check independently that both $\mathrm{Hilb}(X)$ and $M_I(X)$ are smooth, and that the induced map on tangent spaces is an isomorphism.) I do not know how to make sense of this argument in families.

Sketch proof of theorem: The essential point is to show that every $I$ in $M_I(X)(S)$ has a canonical embedding into $\mathcal{O}_{S\times X}$ such that the quotient is $S$-flat. Let $U\subset S\times X$ be the open subset where $I$ is locally free. Its complement has codimension at least $2$ in all fibres. By the trivial determinant assumption, the restriction of $I$ to $U$ is trivial. By codimension $2$, the trivialization extends to a map $I\to \mathcal{O}_{S\times X}$. This map is injective, in fact injective in all fibres: The restriction to each fibre ${s}\times X$ is nonzero (as $U$ intersects all fibres) and hence an embedding ($I$ is torsion free in fibres). It follows that the quotient is flat. There are some details to check, but this is the main point, I think.
[End of new answer, here is the original one:] (I have not studied Bridgeland's paper, so I do not know the intended meaning there.) 2 Forgot: $I_Z$ is $S$-flat If we attempt to define $M_I(X)(S)$ as the set of $S$-flat ideals $I_Z$ in $\mathcal{O}_{S\times X}$, then that would not be functorial in $S$, as the inclusion $I_Z \subset \mathcal{O}_{S\times X}$ may not continue to be injective after base change (in the counter example in the other answer, restriction to the problematic fibre gives the zero map). We could impose "universal injectivity", but that is just another way of requiring the quotient $\mathcal{O}_Z$ to be $S$-flat, so then we have (re)defined the Hilbert scheme. Another common way of defining moduli of ideals is as the moduli space for rank one stable sheaves (i.e. torsion free) with trivial determinant line bundle. The resulting moduli space is isomorphic to the Hilbert scheme of subschemes of codimension at least 2. (I have not studied Bridgeland's paper, so I do not know the intended meaning there.) 1 If we attempt to define $M_I(X)(S)$ as the set of ideals $I_Z$ in $\mathcal{O}_{S\times X}$, then that would not be functorial in $S$, as the inclusion $I_Z \subset \mathcal{O}_{S\times X}$ may not continue to be injective after base change (in the counter example in the other answer, restriction to the problematic fibre gives the zero map). We could impose "universal injectivity", but that is just another way of requiring the quotient $\mathcal{O}_Z$ to be $S$-flat, so then we have (re)defined the Hilbert scheme. Another common way of defining moduli of ideals is as the moduli space for rank one stable sheaves (i.e. torsion free) with trivial determinant line bundle. The resulting moduli space is isomorphic to the Hilbert scheme of subschemes of codimension at least 2. (I have not studied Bridgeland's paper, so I do not know the intended meaning there.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626110792160034, "perplexity": 351.2781010879298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706469149/warc/CC-MAIN-20130516121429-00064-ip-10-60-113-184.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/99542/bounded-and-unbounded-operator
# Bounded and Unbounded Operator

Can someone explain with a concrete example how I can check whether a quantum mechanical operator is bounded or unbounded? EDIT: For example, I would like to check whether $\hat p=-i\hbar\frac{\partial}{\partial x}$ is bounded or not.

If you have a specific operator you want to examine, you should mention it. Otherwise, this is a very broad question and there is little to do except refer you to the definition and that of continuity of a linear operator. The class of bounded operators is so broad that any method other than the definition can only apply to a small fraction of it. – Emilio Pisanty Feb 17 '14 at 16:25

Comment to the question (v3): The momentum example proportional to $\frac{d}{dx}$ is mentioned on the Wikipedia page. – Qmechanic Feb 17 '14 at 19:44

A linear operator $A: D(A) \to {\cal H}$ with $D(A) \subset {\cal H}$ a subspace and ${\cal H}$ a Hilbert space (a normed space could be enough), is said to be bounded if: $$\sup_{\psi \in D(A)\:, ||\psi|| \neq 0} \frac{||A\psi||}{||\psi||} < +\infty\:.$$ In this case the LHS is indicated by $||A||$ and it is called the norm of $A$. Notice that, therefore, boundedness does not refer to the set of values $A\psi$, which is always unbounded if $A\neq 0$, as $||A\lambda\psi|| = |\lambda|\: ||A\psi||$ for $\psi \in D(A)$, and $\lambda$ can be chosen arbitrarily large while still satisfying $\lambda \psi \in D(A)$, since $D(A)$ is a subspace. It is possible to prove that $A: D(A) \to {\cal H}$ is bounded if and only if, for every $\psi_0 \in D(A)$: $$\lim_{\psi \to \psi_0} A\psi = A \psi_0\:.$$ Another remarkable result is that a self-adjoint operator is bounded if and only if its domain is the whole Hilbert space.

Regarding $A= \frac{d}{dx}$, first of all you should define its domain to discuss boundedness. An important domain is the space ${\cal S}(\mathbb R)$ of Schwartz functions since, if $-id/dx$ is defined thereon, it turns out to be Hermitian and it admits only one self-adjoint extension, which is nothing but the momentum operator. $d/dx$ on ${\cal S}(\mathbb R)$ is unbounded. The shortest way to prove it is passing to Fourier transform. Fourier transform is unitary, so it transforms (un)bounded operators into (un)bounded operators. ${\cal S}(\mathbb R)$ is invariant under Fourier transform, and $d/dx$ is transformed to the multiplicative operator $ik$, which I henceforth denote by $\hat A$. So we end up with studying boundedness of the operator: $$(\hat A \hat{\psi})(k) = ik \hat{\psi}(k)\:,\quad \hat\psi \in {\cal S}(\mathbb R)\:.$$ Fix $\hat\psi_0 \in {\cal S}(\mathbb R)$ with $||\hat\psi_0||=1$, assuming that $\hat\psi_{0}$ vanishes outside $[0,1]$ (there is always such a function as $C_0^\infty(\mathbb R) \subset {\cal S}(\mathbb R)$ and there is a function of the first space supported in every compact set in $\mathbb R$), and consider the class of functions $$\hat\psi_n(k):= \hat \psi_{0}(k- n)$$ Obviously, $\hat\psi_n \in {\cal S}(\mathbb R)$ and translational invariance of the integral implies $||\hat\psi_n||=||\hat\psi_0||=1$. Next, notice that: $$\frac{||\hat A\hat\psi_n||^2}{||\hat\psi_n||^2} = \int_{[n, n+1]} |k|^2 |\hat\psi_{0}(k-n)|^2 dk \geq \int_{[n, n+1]} n^2 |\hat\psi_{0}(k-n)|^2 dk$$ $$= n^2 \int_{[0,1]} |\hat\psi_{0}(k)|^2 dk = n^2\:.$$ We conclude that: $$\sup_{\hat{\psi} \in {\cal S}(\mathbb R)\:, ||\hat{\psi}||\neq 0} \frac{||\hat A\hat\psi||}{||\hat\psi||} \geq n \quad \forall n\in \mathbb N$$ So $\hat A$ is unbounded, and consequently so is $A$.
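For a numerical illustration of this argument (an added sketch, not part of the original answer): the script below discretizes shifted bump functions $\hat\psi_n(k)=\hat\psi_0(k-n)$ on a grid and evaluates $||\hat A\hat\psi_n||/||\hat\psi_n||$ for the multiplication operator $(\hat A\hat\psi)(k)=ik\hat\psi(k)$. The grid, the bump profile and the shift values are arbitrary choices; the ratio grows roughly like $n$, in line with the bound above.

```python
import numpy as np

# Smooth bump supported in (0, 1); the precise profile does not matter.
def bump(k):
    out = np.zeros_like(k)
    inside = (k > 0) & (k < 1)
    x = k[inside]
    out[inside] = np.exp(-1.0 / (x * (1.0 - x)))
    return out

k = np.linspace(-5, 60, 200001)        # grid covering all shifted supports
dk = k[1] - k[0]

for n in [1, 5, 10, 20, 40]:
    psin = bump(k - n)                 # shifted copy, supported in (n, n+1)
    psin /= np.sqrt(np.sum(np.abs(psin) ** 2) * dk)   # normalize: ||psi_n|| = 1
    Apsin = 1j * k * psin              # (A psi)(k) = i k psi(k)
    ratio = np.sqrt(np.sum(np.abs(Apsin) ** 2) * dk)  # = ||A psi_n|| since ||psi_n|| = 1
    print(n, ratio)                    # grows roughly like n (about n + 1/2 here)
```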
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865908622741699, "perplexity": 151.3281603571801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281363.50/warc/CC-MAIN-20160524002121-00061-ip-10-185-217-139.ec2.internal.warc.gz"}
http://comunidadwindows.org/standard-error/standard-error-deviation-confidence-interval.php
# Standard Error Deviation Confidence Interval

## Standard Deviation And Standard Error Formula

The standard error (SE) is the standard deviation of the sampling distribution of a statistic, most commonly of the mean; in other words, it is the standard deviation of the sampling distribution of the sample statistic. When you use the standard error to calculate a confidence interval, you are dealing with the distribution of the sample mean. If σ is not known, the standard error of the mean is estimated using the formula $s_{\bar{x}} = s/\sqrt{n}$, where $s$ is the sample standard deviation: we do not know the variation in the population, so we use the variation in the sample as an estimate of it (the sample standard deviation $s$ is an estimate of σ). For example, with $s = 9.27$ and $n = 16$ this gives $9.27/\sqrt{16} = 2.32$. Note that standard errors can be computed for almost any parameter you compute from data, not just the mean; the random variable used to estimate a parameter is called an estimator. In regression analysis, the term "standard error" is also used in the phrase standard error of the regression, to mean the ordinary least squares estimate of the standard deviation of the underlying errors.

Imagine taking repeated samples of the same size from the same population. Different samples drawn from that same population would in general have different values of the sample mean, so there is a distribution of sampled means (with its own mean and standard deviation). The standard deviation of the means of those samples is the standard error; the variation depends on the variation of the population and the size of the sample. The practical problem is that when conducting a study we have one sample (with multiple observations), e.g. $s_1$ with mean $m_1$ and standard deviation $sd_1$, but we do not observe the sampling distribution of the mean itself. [The figures in the original showed one hundred samples drawn from a population; the relation between the population mean, the sampling distribution of the means, and the mean and standard error in a sample; the sampling distribution of the mean for samples of size 4, 9, and 25; the distribution of 20,000 sample means superimposed on the distribution of ages of 9,732 women; and a second data set consisting of the age at first marriage of 5,534 US women who responded to the National Survey of Family Growth.]

## When To Use Standard Deviation Vs Standard Error

The standard deviation describes the dispersion of the observations, while the standard error (or a confidence interval built from it) describes the precision of an estimate. The SE/CI is a property of the estimation (for instance of the mean), so if you want to show the precision of the estimation then show the SE or the CI; the standard deviation can still be used as a measure of dispersion even for non-normally distributed data. Despite the small difference in the equations for the standard deviation and the standard error, this small difference changes the meaning of what is being reported: a description of the variation in the data versus the precision of an estimate. For instance, if a surgeon collects data for 20 patients with soft tissue sarcoma and the average tumor size in the sample is 7.4 cm, the average on its own does not convey how precise an estimate of the population mean it is. (See Altman DG, Bland JM. Standard deviations and standard errors. BMJ 2005;331(7521):903, doi:10.1136/bmj.331.7521.903; also Perspect Clin Res. 2012;3(3):113–116.)

## Confidence intervals

About 95% of observations of any roughly normal distribution fall within the 2 standard deviation limits, though those outside may all be at one end; this would give an empirical normal range. [Table 2 in the original listed, for a normal distribution, the probability of an observation at least a given number of standard deviations (z) from the mean, two sided.] Standard errors may be used to calculate confidence intervals: the standard error of a proportion and the standard error of the mean describe the possible variability of the estimated value based on the sample around the true proportion or true mean. It is important to check that the confidence interval is symmetrical about the mean (the distance between the lower limit and the mean is the same as the distance between the mean and the upper limit), and if the sample size is small (say less than 60 in each group) then confidence intervals should be calculated using a value from a t distribution. Examples: for a confidence interval for a proportion, in a survey of 120 people operated on for appendicitis, 37 were men; for an upcoming national election, 2000 voters are chosen at random and asked if they will vote for candidate A or candidate B, and the researchers report that candidate A is expected to receive 52% of the final vote, with a margin of error of 2% (data organizations often set reliability standards that their data must reach before publication). In one worked comparison, a researcher obtained a random sample of 72 printers and 48 farm workers and calculated the means and standard deviations, as shown in table 1 of the source. A correction for correlation in the sample applies when the n data points are not independent (sample bias coefficient ρ); the unbiased standard error corresponds to the ρ = 0 case. Of course, $T/n$ is the sample mean $\bar{x}$.
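As a concrete illustration of the formulas above (a sketch added here, using made-up data), the following computes the sample standard deviation, the standard error of the mean, and an approximate 95% confidence interval using the t distribution, as recommended for small samples:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 16 measurements (illustrative values only).
x = np.array([23.1, 25.4, 22.8, 24.9, 26.2, 23.7, 25.1, 24.3,
              22.5, 25.8, 24.0, 23.9, 26.5, 24.7, 23.3, 25.0])
n = len(x)

mean = x.mean()
sd = x.std(ddof=1)            # sample standard deviation s (describes scatter)
se = sd / np.sqrt(n)          # standard error of the mean s / sqrt(n) (describes precision)

t_crit = stats.t.ppf(0.975, df=n - 1)   # t distribution, since n is small
ci = (mean - t_crit * se, mean + t_crit * se)

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SE = {se:.2f}")
print(f"95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")
```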
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883526086807251, "perplexity": 824.6288867153683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812978.31/warc/CC-MAIN-20180220145713-20180220165713-00619.warc.gz"}
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535301
Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535301 Title: Bi-flagellate swimming dynamics Author: O'Malley, Stephen ISNI: 0000 0004 2705 6186 Awarding Body: University of Glasgow Current Institution: University of Glasgow Date of Award: 2011 Availability of Full Text: Access from EThOS: Access from Institution: Abstract: The propulsion of low Reynolds number swimmers has been widely studied, from the swimming sheet models of Taylor (1951), which were analogous to swimming spermatozoa, to more recent studies by Smith (2010) who coupled the boundary element method and method of regularised Stokeslets to look at cilia and flagella driven flow. While the majority of studies have investigated the propulsion and hydrodynamics of spermatozoa and bacteria, very little has been researched on bi-flagellate green algae. Employing an immersed boundary algorithm and a flexible beat pattern, Fauci (1993) constructed a model of a free-swimming algal cell. However, the two-dimensional representation tended to over-estimate the swimming speed of the cell. Jones et al. (1994) developed a three-dimensional model for an idealised bi-flagellate to study the gyrotactic behaviour of bottom-heavy swimmers. However, the unrealistic cell geometry and use of resistive force theory only offered order of magnitude accuracy. In this thesis we investigate the hydrodynamics of swimming bi-flagellates via the application of the method of regularised Stokeslets, and obtain improved estimates for swimming speed and behaviour. Furthermore, we consider three-dimensional models for bi-flagellate cells with realistic cell geometries and flagellar beats. We investigate the behaviour of force- and torque-free swimmers with bottom-heavy spheroidal bodies and two flagella located at the anterior end of the cell body, which beat in a breast stroke motion. The cells exhibit gravitactic and gyrotactic behaviour, which result in cells swimming upwards on average in an ambient fluid and also towards regions of locally down-welling fluid, respectively. In order to compare how important the intricacies of the flagellar beat are to a cell's swimming dynamics we consider various beat patterns taken from experimental observations of the green alga Chlamydomonas reinhardtii and idealised approaches from the literature. We present models for the bi-flagellate swimmers as mobility problems, which can be solved to obtain estimates for the instantaneous translational and angular velocities of the cell. The mobility problem is formulated by coupling the method of regularised Stokeslets with the conditions that there is no-slip on the surface of the body and flagella of the cell and that there exists a balance between external and fluid forces and torques. The method of regularised Stokeslets is an approach to computing Stokes flow, where the solutions of Stokes equations are desingularised. Furthermore, by modelling the cells as self-propelled spheroids we outline an approach to estimate the mean effective behaviour of cells in shear flows. We first investigate bi-flagellate swimming in a quiescent fluid to obtain estimates for the mean swimming speed of cells, and demonstrate that results for the three-dimensional model are consistent with estimates obtained from experimental observations.
Moreover, we explore the various mechanisms that cells may use to re-orientate and conclude that gyrotactic and gravitactic re-orientation is due to a combination of shape and mass asymmetry, with each being equally important and complementary. Next, we compare the flow fields generated by our simulations with some recent experimental observations of the velocity fields generated by free-swimming C. reinhardtii, highlighting that simulations capture the same characteristics of the flow found in the experimental work. We also present our own experiments for C. reinhardtii and Dunaliella salina, detailing the trajectories and instantaneous swimming speeds for free-swimming cells, and flow fields for trapped cells. Furthermore, we construct flagellar beats based upon experimental observations of D. salina and D. bioculata, which have different body shapes and flagellar beats than Chlamydomonas. We then compare the estimates for swimming speed and re-orientation time with C. reinhardtii, highlighting that, in general, Dunaliella achieve greater swimming speeds, but take longer to re-orientate. The behaviour of cells in a shear flow is also investigated, showing that for sufficiently large shear, vorticity dominates and cells simply tumble. Moreover, we obtain estimates for the effective cell eccentricity, which, contrary to previous hypotheses, shows that cells with realistic beat patterns swim as self-propelled spheres rather than self-propelled spheroids. We also present a technique for computing the effective eccentricity that reduces computational time and storage costs, as well as being applicable to unordered image data. Finally, we examine what effects interactions with boundaries, other cells, and obstacles have on a free-swimming cell. Here, we find that there are various factors which affect a cell's swimming speed, orientation and trajectory. The most important aspect is the distance between the interacting objects, but initial orientation and the flagella beat are also important. Free-swimming cells in an unbounded fluid typically behave as force-dipoles in the far field, and we find that for cell-to-cell and cell-to-obstacle interactions the far field behaviour is similar. However, swimming in the proximity of a boundary results in the flow field decaying faster. This implies that hydrodynamic interactions close to solid no-slip surfaces will be weaker than in an infinite fluid. Supervisor: Not available Sponsor: Not available Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral EThOS ID: uk.bl.ethos.535301 DOI: Not available Keywords: QA Mathematics
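For readers unfamiliar with the method of regularised Stokeslets mentioned in the abstract, here is a minimal sketch (not code from the thesis): it evaluates the velocity field of a single regularised point force in unbounded Stokes flow, using the standard blob of Cortez (2001); the regularisation parameter, viscosity and force below are arbitrary. The thesis couples many such elements with no-slip and force/torque-balance conditions to form a mobility problem.

```python
import numpy as np

def regularized_stokeslet_velocity(x, x0, f, eps, mu=1.0):
    """Velocity at points x (N,3) due to a regularised point force f at x0.

    Uses the standard Cortez (2001) blob; eps is the regularisation parameter,
    mu the dynamic viscosity.  Illustrative only.
    """
    r = x - x0                                  # (N, 3) displacement vectors
    r2 = np.sum(r * r, axis=1)                  # |r|^2
    denom = (r2 + eps ** 2) ** 1.5
    fdotr = r @ f                               # (N,)
    u = (f * ((r2 + 2 * eps ** 2) / denom)[:, None]
         + r * (fdotr / denom)[:, None]) / (8 * np.pi * mu)
    return u

# Example: velocity sampled along a line, force pointing in +z.
pts = np.column_stack([np.linspace(0.5, 5, 10), np.zeros(10), np.zeros(10)])
print(regularized_stokeslet_velocity(pts, x0=np.zeros(3),
                                     f=np.array([0.0, 0.0, 1.0]), eps=0.05))
```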
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8132846355438232, "perplexity": 2103.657841558388}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00093.warc.gz"}
https://chemistry.stackexchange.com/questions/49824/value-of-mixing-ratio-and-rate-constant-in-chapman-mechanisn
# Value of mixing ratio and rate constant in Chapman mechanism

In the Chapman mechanism (which describes the behaviour of ozone in the stratosphere), the lifetime ($\tau_{\ce{O}}$) of an oxygen atom $\ce{O}$ against conversion to ozone $\ce{O3}$ through a certain reaction is given as $$\tau_{\ce{O}} = \frac{1}{k_2 C_{\ce{O_2}}n_a^2},$$ where $k_2$ is the rate constant of a reaction involving consumption of $\ce{O2}$ to form $\ce{O3}$, $$\ce{O + O2 + M -> O3 + M},$$ $\ce{M}$ is a third body present during the reaction, $C_{\ce{O2}}$ is the mixing ratio of $\ce{O2}$ (which is given as $0.21~\mathrm{mol/mol}$), and $n_a$ is the air number density. Source: Daniel J. Jacob: Introduction to atmospheric Chemistry. 1999, To be published. Chapter 10. I am confused regarding this mixing ratio of $\ce{O2}$. Shouldn't this value in the stratosphere be different from that in the troposphere? Also, regarding $k_2$, nothing is mentioned about how and where this was measured. I mean, if the value of $k_2$ is derived through laboratory experiments, then how reliable is this value for the reactions happening in the stratosphere?

• 1. The calculations you refer to are only valid within the "low-pressure limit". 2. The steady state approximation fails, as it overestimates the density of the ozone layer by at least a factor of two. 3. It is explicitly stated that "in the lower stratosphere, a steady state solution [...] would not [...] be expected [...]". 4. The kinetics of certain reactions are probably not so much influenced by gravity as some of the other necessary simplifications, so conducting the experiments in a lab probably yields very reliable results as opposed to the theory. – Martin - マーチン Apr 22 '16 at 12:22
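To put rough numbers on the formula (an order-of-magnitude sketch; all values below are illustrative assumptions, not figures quoted from Jacob's text):

```python
# Order-of-magnitude estimate of tau_O = 1 / (k2 * C_O2 * n_a**2).
# All numbers below are illustrative assumptions.
k2 = 1.0e-33      # cm^6 molecule^-2 s^-1, rough low-pressure-limit rate constant
C_O2 = 0.21       # mol/mol; O2 is well mixed below ~100 km, so 0.21 also holds in the stratosphere
n_a = 3.7e17      # molecules cm^-3, air number density near 30 km altitude

tau_O = 1.0 / (k2 * C_O2 * n_a ** 2)
print(f"tau_O ~ {tau_O:.3f} s")   # ~0.03 s with these assumptions
```

With numbers of this order the O atom lifetime is a small fraction of a second, which is why O and O3 are usually lumped together as "odd oxygen" in this kind of analysis.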
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.927001953125, "perplexity": 534.4365226872319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593937.27/warc/CC-MAIN-20200118193018-20200118221018-00531.warc.gz"}
https://www.physicsforums.com/threads/put-some-light-on-my-concept-of-light.613884/
# Put some light on my concept of light. 1. Jun 14, 2012 ### Zubeen In the way we represent an electromagnetic wave (click on http://upload.wikimedia.org/wikipedia/commons/4/4c/Electromagneticwave3D.gif) we limit the magnetic field vector and electric field vector till the limit of the amplitude of the wave. But how is it possible ??? because, Magnetic and INDUCED electric field, both are continuous in nature..... and further even if we are limiting them ..... there can be only 1 such arrangement as like in capacitor for electric field....( parallel plates ). The light, as an electromagnetic wave really makes me !!!!!! because of its complications and such surprising results.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98912513256073, "perplexity": 1042.623336781441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646213.26/warc/CC-MAIN-20180319023123-20180319043123-00255.warc.gz"}
https://infoscience.epfl.ch/record/221109
Infoscience Journal article # Neoclassical tearing mode seeding by coupling with infernal modes in low-shear tokamaks A numerical and an analytical study of the triggering of resistive MHD modes in tokamak plasmas with a low-magnetic-shear core is presented. Flat q profiles give rise to fast growing pressure driven MHD modes, such as infernal modes. It has been shown that infernal modes drive fast growing islands on neighbouring rational surfaces. Numerical simulations of such instabilities in a MAST-like configuration are performed with the initial value stability code XTOR-2F in the resistive frame. The evolution of magnetic islands is computed from XTOR-2F simulations, and an analytical model is developed based on Rutherford's theory in combination with a model of resistive infernal modes. The parameter Delta' is extended from the linear phase to the non-linear phase. Additionally, the destabilizing contribution due to a helically perturbed bootstrap current is considered. Comparing the numerical XTOR-2F simulations to the model, we find that coupling has a strong destabilising effect on (neoclassical) tearing modes and is able to seed 2/1 magnetic islands in situations when the standard NTM theory predicts stability.
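As background for readers, the island-width evolution referred to here is usually written as a (modified) Rutherford equation. The toy integration below is only an illustrative sketch with made-up coefficients, not the XTOR-2F model or the authors' analytical model:

```python
# Toy modified Rutherford equation:
#   dw/dt = c_R * (eta / mu0) * ( Delta' + c_bs * w / (w**2 + w_d**2) )
# All coefficients here are illustrative assumptions.
eta_over_mu0 = 1.0     # normalized resistive diffusion coefficient
c_R = 1.22             # standard numerical factor from Rutherford's theory
delta_prime = -2.0     # classical stability index (negative: classically stable)
c_bs = 0.5             # strength of the destabilizing bootstrap term
w_d = 0.02             # small-island cutoff width

def dwdt(w):
    return c_R * eta_over_mu0 * (delta_prime + c_bs * w / (w ** 2 + w_d ** 2))

# Forward-Euler integration from a seed island width (e.g. driven by coupling
# to an infernal mode, as in the paper).
w, dt = 0.05, 1e-4
for _ in range(200000):
    w = max(w + dt * dwdt(w), 0.0)
print(f"saturated island width ~ {w:.3f}")
```

With these toy numbers a seed island above a small threshold width grows to saturation even though Delta' < 0, which is the metastable behaviour that makes seeding by a coupled mode relevant in the first place.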
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076152205467224, "perplexity": 2990.900105417374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00267-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/327736/copies-of-finite-sets-in-sets-of-positive-measure
# Copies of finite sets in sets of positive measure We say a set $A \subseteq \mathbb{R}^n$ contains the pattern of a finite set $B \subseteq \mathbb{R}^n$ if there exists a shift $t \in \mathbb{R}^n$ and scale $s > 0$ such that $t+sB \subseteq A$. I've read that if $A$ has positive measure, then for any choice of finite set $B$, $A$ contains the pattern of $B$, but proofs are never provided (they say it is "clear"), and I don't know how to show this rigorously. I was able to show this in the easier case when $B \subseteq \mathbb{R}$ is of the form $\{x,x+y,x+2y\}$ by using the Lebesgue density theorem and considering a sufficiently small ball around a density point intersected with $A$, say with measure at least 3/4 of that of the ball (anything strictly larger than 1/2 will do) and making a measure argument, but I needed the symmetry of the set $B$ around the center $x+y$ in order for my argument to work. How would one prove this fact in full generality? - Good description. But I don't see where you'd need the symmetry. If $A$ contains greater than a fraction $1-1/\#B$ of a ball $D$, then consider whether the intersection $\bigcap_{b\in B} ((A\cap D)-sb)$ can be empty. Here we take $s$ extremely small, so that $D-sb$ is almost the same as $D$. – Greg Martin Mar 11 '13 at 20:07
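For completeness, here is one way to write out the argument sketched in that comment (added here; the constants $\tfrac{1}{2n}$ are just a convenient choice). Let $n=\#B$ and pick a density point $x_0$ of $A$. By the Lebesgue density theorem there is a ball $D$ centred at $x_0$ with $$\mu(A\cap D) > \left(1-\tfrac{1}{2n}\right)\mu(D).$$ Choose $s>0$ small enough that $\mu\big((D+sb)\setminus D\big) < \tfrac{1}{2n}\,\mu(D)$ for every $b\in B$. Then, for each $b\in B$, $$\mu\big(D\setminus(A-sb)\big) = \mu\big((D+sb)\setminus A\big) \le \mu\big((D+sb)\setminus D\big) + \mu\big(D\setminus A\big) < \tfrac{1}{2n}\,\mu(D) + \tfrac{1}{2n}\,\mu(D).$$ Summing over the $n$ elements of $B$ gives a total less than $\mu(D)$, so $$\mu\Big(D\cap\bigcap_{b\in B}(A-sb)\Big) \ge \mu(D) - \sum_{b\in B}\mu\big(D\setminus(A-sb)\big) > 0,$$ and any $t$ in this intersection satisfies $t+sb\in A$ for every $b\in B$, i.e. $t+sB\subseteq A$.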
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573920369148254, "perplexity": 117.01816133212546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00063-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/energy-in-simple-harmonic-motion.228820/
# Energy in simple harmonic motion 1. Apr 14, 2008 ### kjartan 1. The problem statement, all variables and given/known data This is my first post here. Thanks for any and all replies! I apologize in advance if I haven't used the correct conventions, but I hope that this is legible. I will learn the correct conventions for future posts but was pressed for time here. Question: A harmonic oscillator has angular frequency w and amplitude A. (a) What are the magnitudes of the displacement and velocity when the elastic potential energy is equal to the kinetic energy? (assume that U = 0 at equilibrium) (b) (i) How often does this occur in each cycle? (ii) What is the time between occurrences? (c) At an instant when the displacement is equal to A/2, what fraction of the total energy of the system is kinetic and what fraction is potential? 2. Relevant equations E = (1/2)mv^2 + (1/2)kx^2 = (1/2)kA^2 3. The attempt at a solution (a) (1/2)mv^2 = (1/2)kx^2 --> x = +/- A/rad(2) (b) (i) 4 times (ii) x = Acos(wt + phi), choose phi = pi/2 so, x = Asin(wt) x/A= sin(wt) t = (1/w)arcsin(x/A) subst. from (a) gives us t = (1/w)arcsin(1/rad(2)) = (1/w)*(pi/4) So, change in t = pi/(2w) (c) E = K + U U = (1/2)kx^2 = (1/2)k(A/2)^2 = k(A^2/8) K = k(A^2/2) - k(A^2/8) = k(3A^2/8) so, K/E = 3/4 --> U/E = 1/4 2. Apr 14, 2008 ### Tom Mattson Staff Emeritus OK. Note that you also have 2 solutions for v: $v=\pm \sqrt{k/2m}A$ (click on the equation if you want to see the code that generates it). Then you take the magnitude of each, as the problem asked you to. Since this is a 1D problem, the magnitude of a vector is just the absolute value. Agree. I agree with your change in t, but I'm not so sure you want to use the arcsin function here. The arcsin function is too restrictive to capture all the solutions of the trigonometric equation, as its range is only $[-\pi /2,\pi /2]$. By the way, you don't need to choose a value of $\phi$. You could also do it like this. $$A\cos(\omega t+\phi )=\pm\frac{A}{\sqrt{2}}$$ $$\omega t+\phi=...\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}...$$ $$\omega t+\phi=\frac{(2n+1)\pi}{4}$$, $$n\in\mathbb{Z}$$ Now take the $t$ values for any two adjacent $n$, and subtract the smaller from the greater. $\phi$ will subtract off without you having to choose a value for it. That is correct. 3. Apr 16, 2008 ### kjartan Thanks for the tip about LaTex, too.
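A quick numerical cross-check of parts (a) and (b) (not asked for in the problem; the amplitude and angular frequency below are arbitrary):

```python
import numpy as np

A, w = 1.0, 2.0 * np.pi                 # arbitrary amplitude and angular frequency (T = 1)
t = np.linspace(0.0, 1.0, 200001)       # one full period
x = A * np.cos(w * t)
v = -A * w * np.sin(w * t)

# U = (1/2) k x^2 and K = (1/2) m v^2 with k = m w^2, so U = K exactly where x^2 = (v/w)^2.
diff = x ** 2 - (v / w) ** 2
idx = np.where(np.diff(np.sign(diff)) != 0)[0]   # sign changes of U - K

print(len(idx))                         # 4 crossings per cycle (up to grid effects)
print(np.diff(t[idx]) * w / np.pi)      # spacings ~ 0.5, i.e. Delta t = pi/(2w)
print(np.abs(x[idx]) * np.sqrt(2))      # ~ 1, i.e. |x| = A/sqrt(2) at the crossings
```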
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8942941427230835, "perplexity": 817.0022740323662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/113519/electric-field-generated-by-a-point-charge-moving-at-the-speed-of-light
# Electric field generated by a point charge moving at the speed of light As you see, this is the electric field generated by a point charge moving at constant speed v. I know that when $v \to 0$, $E$ is just the Coulomb law. But how do you interpret $E$ when $v \to c$? Can I just interpret it as the field of an electromagnetic wave, because it moves at the speed of light? • @BMS The question given to me is very vague. I don't think I need to do the Taylor expansion, right? – Lawerance May 20 '14 at 5:58 • The homework question asks about a limit, whereas the title of the question refers to a charge moving at c. These are two different things. It's not possible for a charge to move at exactly c. All charged particles have mass, and massive particles can't move at c. – Ben Crowell Jul 24 '14 at 18:30 • But it is easy to imagine e.g. a massless Dirac field with electric charge. – Robin Ekman Aug 28 '14 at 9:45 • When $v^2/c^2$ is small, the $\sin^2 \theta$ doesn't matter much, but when $v^2/c^2$ gets close to 1, the $\sin^2 \theta$ will become very important. You'll find that the field will be small ahead of and behind the particle and much larger along the sides, where $\sin^2 \theta$ is closer to 1. I'll leave the math to you, but the effect is that the fields get flattened in the plane perpendicular to the charge's travel. – krs013 May 20 '14 at 6:27 • I got your idea. Basically, because $\sin^2\theta$ is symmetric, we can just first consider 0-90 degrees. And the rest is just arguing the theta. But why is this somehow predicted as stated in the problem? Is it because of the contribution of the retarded position of the charge? – Lawerance May 20 '14 at 7:33
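The image referred to in the question is not reproduced here; the expression it presumably shows is the standard field of a charge in uniform motion, quoted below for reference (with $\theta$ the angle between $\mathbf v$ and the line from the charge's present position to the field point): $$\mathbf{E}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}\, \frac{1-v^2/c^2}{\bigl(1-(v^2/c^2)\sin^2\theta\bigr)^{3/2}}\, \frac{\hat{\mathbf{r}}}{r^2}, \qquad \mathbf{B}=\frac{\mathbf{v}\times\mathbf{E}}{c^2}.$$ The angular prefactor is suppressed at $\theta=0,\pi$ and strongly enhanced near $\theta=\pi/2$ as $v\to c$, which is exactly the "flattening into the plane perpendicular to the motion" described in the comments.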
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8156284093856812, "perplexity": 212.12027855521052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573735.34/warc/CC-MAIN-20190919204548-20190919230548-00381.warc.gz"}
http://mathhelpforum.com/geometry/141483-angle-between-curves-parametric-form.html
# Thread: Angle between curves in parametric form 1. ## Angle between curves in parametric form Hi, In a problem i have to find the angle between two curves that are in parametric form. Those curves are: (e^t*cos(t),e^t*sin(t)) and (R*cos(s),R*sin(s)) where s,t are in [0,2*Pi] and R>0 I can't even find their intersection point. I tried something but i had to exclude two values of s and t which are Pi/2 and 3*Pi/2 so i could obtain s=t 2. Hello, DBS! I've made a little progress . . . I have to find the angle between two parametric curves: . . $\begin{Bmatrix}x &=& e^t\cos t \\ y &=& e^t\sin t \end{Bmatrix} \qquad \begin{Bmatrix}x &=& R\cos s \\ y &=& R\sin s\end{Bmatrix}\quad \text{ where }s,t \in [0,\,2\pi]\:\text{ and }\:R>0$ I can't even find their intersection point. Equating $x$'s and $y$'s: . . $\begin{array}{ccccccccc} e^t\cos t &=& R\cos s & \Rightarrow & e^t &=& R\,\dfrac{\cos s}{\cos t} & [1] \\ \\[-3mm] e^t\sin t &=& R\sin s & \Rightarrow & e^t &=& R\,\dfrac{\sin s}{\sin t} & [2]\end{array}$ Equate [1] and [2]: . $R\,\frac{\cos s}{\cos t} \:=\:R\,\frac{\sin s}{\sin t} \quad\Rightarrow\quad \sin s\cos t - \cos s\sin t \:=\:0$ We have: . $\sin(s-t) \:=\:0 \quad\Rightarrow\quad s-t \:=\:\begin{Bmatrix}0 \\ \pi \end{Bmatrix}$ . . Hence: . $s \;=\;\begin{Bmatrix}t \\ t+\pi \end{Bmatrix}$ It turns out that $s \:=\:t +\pi$ is an extraneous root. Hence, the intersection occurs at $s \,=\,t$ Then [1] becomes: . $e^s \:=\:R\,\frac{\cos s}{\cos s} \quad\Rightarrow\quad e^s \:=\:R \quad\Rightarrow\quad s \:=\:\ln R$ Therefore, the curves intersect at: . $\bigg(R\cos(\ln R),\;R\sin(\ln R)\bigg)$ But check my work . . . please! . 3. Thanks very much! Actually what i find weird is that i got to s=t with a method but the problem is for that i got to tan(t)=tan(s) but i had to exclude the values that i told earlier...but your method seems correct enough without excluding values. So again, thank you. 4. Where can i learn to write like you did?...is it with latex by any chance? thx
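To push the calculation one step further (an added sketch using SymPy, not part of the original posts), one can compare tangent vectors at the intersection $s=t=\ln R$; the angle comes out as $\pi/4$ for every $R$, which is the expected constant angle between the equiangular spiral $r=e^\theta$ and circles centred at the origin.

```python
import sympy as sp

t = sp.symbols('t', real=True)
R = sp.symbols('R', positive=True)

c1 = sp.Matrix([sp.exp(t) * sp.cos(t), sp.exp(t) * sp.sin(t)])   # spiral
c2 = sp.Matrix([R * sp.cos(t), R * sp.sin(t)])                   # circle (same parameter at the intersection)
d1, d2 = c1.diff(t), c2.diff(t)                                  # tangent vectors

num = sp.simplify(d1.dot(d2))                                     # R*exp(t)
den = sp.sqrt(sp.simplify(d1.dot(d1)) * sp.simplify(d2.dot(d2)))  # sqrt(2)*R*exp(t)
cos_angle = sp.simplify(num / den)

print(cos_angle)               # should reduce to sqrt(2)/2, independent of t and R
print(sp.acos(cos_angle))      # pi/4
```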
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366353750228882, "perplexity": 1017.6337922755354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.71/warc/CC-MAIN-20170423031202-00564-ip-10-145-167-34.ec2.internal.warc.gz"}
https://experts.syr.edu/en/publications/physics-based-modeling-of-experimental-data-encountered-in-cellul
# Physics-Based Modeling of Experimental Data Encountered in Cellular Wireless Communication Tapan K. Sarkar, Heng Chen, Magdalena Salazar-Palma, Ming Da Zhu Research output: Contribution to journal › Article › peer-review 2 Scopus citations ## Abstract This paper presents a physics-based macro model that can predict with a high degree of accuracy various experimental data available for the propagation path loss of radio waves in a cellular wireless environment. A theoretical macro model based on the classical Sommerfeld formulation can duplicate various experimental data including that of Okumura et al. carried out in 1968. It is important to point out that there are also many statistical models but they do not conform to the results of the available experimental data. Specifically, there are separate path loss propagation models available in the literature for waves traveling in urban, suburban, rural environments, and the like. However, no such distinction is made in the results obtained from the theoretical analysis and measured experimental data. Based on the analysis using the macro model developed after Sommerfeld's classic century-old analytical formulation, one can also explain the origin of slow fading which is due to the interference between the direct wave from the base station antenna and the ground wave emanating from the reflections of the direct wave and occurs only in the near field of the transmitting antenna. The so-called height gain occurs in the far field of a base station antenna deployment which falls generally outside the cell of interest, while in the near field, within the cell, there is a height loss of the field strength for observation points near the ground. A physical realization of the propagation mechanism is illustrated through van der Pol's exact transformation of the Sommerfeld integrals for the potential to a spatial semiinfinite volume integral and thus illustrates why buildings, trees, and the like have little effects on the propagation mechanism. Original language: English (US); Article number: 8510858; Pages: 6673-6682; Number of pages: 10; Journal: IEEE Transactions on Antennas and Propagation; Volume: 66; Issue: 12; DOI: https://doi.org/10.1109/TAP.2018.2878366; State: Published - Dec 2018 ## Keywords • Analysis of Wire Antennas and Scatterers (AWAS) • Schelkunoff formulation • Sommerfeld formulation
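For orientation only (this is not the authors' Sommerfeld-based macro model), the classical two-ray ground-reflection picture already exhibits the direct-wave/ground-wave interference and the near/far-field behaviour discussed in the abstract. A minimal sketch with arbitrary antenna heights and carrier frequency:

```python
import numpy as np

# Two-ray ground-reflection model: direct path plus a ground-reflected path
# with reflection coefficient ~ -1.  All parameters are arbitrary examples.
f = 900e6                 # carrier frequency, Hz
c = 3e8
lam = c / f
h_t, h_r = 30.0, 1.5      # base-station and mobile antenna heights, m

d = np.logspace(1, 4, 2000)            # horizontal distance, 10 m .. 10 km
d_direct = np.hypot(d, h_t - h_r)
d_reflect = np.hypot(d, h_t + h_r)

k = 2 * np.pi / lam
field = np.exp(-1j * k * d_direct) / d_direct - np.exp(-1j * k * d_reflect) / d_reflect
path_gain_db = 20 * np.log10(np.abs(field) * lam / (4 * np.pi))

# Near the transmitter the two rays interfere (fading ripples); beyond the
# breakpoint distance ~ 4*h_t*h_r/lambda the gain falls off roughly 40 dB/decade.
print("breakpoint distance ~", 4 * h_t * h_r / lam, "m")
print("gain at ~100 m and ~5 km (dB):",
      path_gain_db[np.searchsorted(d, 100)], path_gain_db[np.searchsorted(d, 5000)])
```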
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9041872024536133, "perplexity": 833.31038295649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00148.warc.gz"}
http://xrpp.iucr.org/Bb/ch1o5v0001/app1o5o1/
International Tables for Crystallography Volume B Reciprocal space Edited by U. Shmueli International Tables for Crystallography (2010). Vol. B, ch. 1.5, p. 192   | 1 | 2 | doi: 10.1107/97809553602060000762 ## Appendix A1.5.1. Reciprocal-space groups M. I. Aroyoa* and H. Wondratschekb aDepartamento de Fisíca de la Materia Condensada, Facultad de Cienca y Technología, Universidad del País Vasco, Apartado 644, 48080 Bilbao, Spain , and bInstitut für Kristallographie, Universität, D-76128 Karlsruhe, Germany Correspondence e-mail:  [email protected] This table is based on Table 1 of Wintgen (1941). In order to obtain the Hermann–Mauguin symbol of from that of , one replaces any screw rotations by rotations and any glide reflections by reflections. The result is the symmorphic space group . For most space groups , the reciprocal-space group is isomorphic to , i.e. and belong to the same arithmetic crystal class. In the following cases is isomorphic to a symmorphic space group which is different from . Thus the arithmetic crystal classes of and are different, i.e. can not be obtained in this simple way: (1) If the lattice symbol of is F or I, it has to be replaced by I or F, e.g. is isomorphic to Imm2 for the arithmetic crystal class . The tetragonal space groups form an exception to this rule; for these the symbol I persists. (2) The other exceptions are listed in the following table (for the symbols of the arithmetic crystal classes see IT A, Section 8.2.3 ): ### References Wintgen, G. (1941). Zur Darstellungstheorie der Raumgruppen. Math. Ann. 118, 195–215.
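The rule quoted above lends itself to a toy illustration (a rough sketch added here; it handles only simple cases, ignores the exceptions listed in the table referred to in the text, and expects screw axes written with underscores, e.g. "P2_1/c"):

```python
import re

def reciprocal_space_group_symbol(hm):
    """Toy mapping of a Hermann-Mauguin symbol to its symmorphic counterpart.

    Drops screw-axis subscripts, turns glide letters into mirrors, and swaps
    the lattice letter F <-> I except for tetragonal groups.  Illustrative only.
    """
    s = re.sub(r'_\d', '', hm)             # screw rotations -> rotations
    s = re.sub(r'[abcden]', 'm', s)        # glide reflections -> reflections
    if not s[1:].startswith(('4', '-4')):  # crude tetragonal check
        s = {'F': 'I', 'I': 'F'}.get(s[0], s[0]) + s[1:]
    return s

for g in ('P2_1/c', 'Pnma', 'I4_1/a', 'Fdd2'):
    print(g, '->', reciprocal_space_group_symbol(g))
# P2_1/c -> P2/m, Pnma -> Pmmm, I4_1/a -> I4/m, Fdd2 -> Imm2
```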
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006383419036865, "perplexity": 3530.9328237618042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593438.33/warc/CC-MAIN-20180722174538-20180722194538-00065.warc.gz"}
https://eccc.weizmann.ac.il/report/2011/004/
### Paper: TR11-004 | 10th January 2011 15:54 #### On the complexity of computational problems regarding distributions (a survey) TR11-004 Publication: 10th January 2011 15:54 Abstract: We consider two basic computational problems regarding discrete probability distributions: (1) approximating the statistical difference (aka variation distance) between two given distributions, and (2) approximating the entropy of a given distribution. Both problems are considered in two different settings. In the first setting the approximation algorithm is only given samples from the distributions in question, whereas in the second setting the algorithm is given the ``code'' of a sampling device (for the distributions in question). We survey the known results regarding both settings, noting that they are fundamentally different: The first setting is concerned with the number of samples required for determining the quantity in question, and is thus essentially information theoretic. In the second setting the quantities in question are determined by the input, and the question is merely one of computational complexity. The focus of this survey is actually on the latter setting. In particular, the survey includes proof sketches of three central results regarding the latter setting, where one of these proofs has only appeared before in the second author's PhD Thesis. ### Comment(s): Comment #1 to TR11-004 | 10th March 2019 12:32 #### Errata re a too strong statement of Thm 1 Authors: Oded Goldreich Accepted on: 10th March 2019 12:32 Comment: As pointed out by Itay Berman, Akshay Degwekar, Ron Rothblum and Prashant Vasudevan, the proof (sketched in Sec 5.1) for the general part of Thm 1 holds only for constant $c$ and $f$ such that $c$ is smaller than $f^2$. Nevertheless, they were able to prove the stated generalization using a more complex argument [see TR19-038]. As for the original proof, it calls for setting $t$ so that $(f(n)^2/c(n))^t/2 \geq 8n$ while assuming that $c(n)^t \geq 1/\mathrm{poly}(n)$. For $c(n)$ that is upper-bounded by a constant smaller than one, this assumption holds only if $t=O(\log n)$, which in turn implies that $f(n)^2 /c(n)$ must be lower-bounded by a constant greater than one.
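For readers unfamiliar with the two quantities, the following small example (unrelated to the survey's algorithmic settings, in which the distributions are given only via samples or via a sampling circuit) computes them for explicitly listed distributions:

```python
import numpy as np

def statistical_difference(p, q):
    """Statistical difference (total variation distance) between two discrete distributions."""
    return 0.5 * np.sum(np.abs(np.asarray(p) - np.asarray(q)))

def entropy(p):
    """Shannon entropy in bits."""
    p = np.asarray(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p = [0.5, 0.25, 0.125, 0.125]
q = [0.25, 0.25, 0.25, 0.25]
print(statistical_difference(p, q))   # 0.25
print(entropy(p), entropy(q))         # 1.75 and 2.0 bits
```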
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398847818374634, "perplexity": 3094.361266008616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667177.24/warc/CC-MAIN-20191113090217-20191113114217-00492.warc.gz"}
http://math.stackexchange.com/users/113323/l
# L__

Reputation 6; member for 11 months; last seen Nov 18 at 6:39; 29 profile views; age 25.

Top answers:
3 Set up but do not evaluate the integral representing the volume of the region $ax^2 + by^2 = r^2$
2 what are the spherical coordinates
2 Radius of Curvature
2 Uncertainty in parallel resistors.
2 Proof by Induction - Sequence of integers

# 380 Reputation
+25 Set up but do not evaluate the integral representing the volume of the region $ax^2 + by^2 = r^2$
+10 Uncertainty in parallel resistors.
-2 Vector expression for intersection of a reflected ray from a cylinder
+10 Radius of Curvature

# 0 Questions
This user has not asked any questions

# 33 Tags
7 integration × 5, 6 calculus × 7, 3 multivariable-calculus × 2, 3 volume, 2 differential-geometry × 2, 2 algebra-precalculus, 2 discrete-mathematics, 2 induction, 2 statistics, 1 geometry × 3

# 7 Accounts
Mathematics 380 rep, TeX - LaTeX 159 rep, Stack Overflow 131 rep, Ask Ubuntu 101 rep, Unix & Linux 101 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238273501396179, "perplexity": 3052.7190311706604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405292026.28/warc/CC-MAIN-20141119135452-00039-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/break-frequencies-for-directly-coupled-transistors.190929/
# Break frequencies for directly coupled transistors

1. Oct 13, 2007

### programmer

1. The problem statement, all variables and given/known data
The problem asks to determine the break frequency at the base of Q2 (which is fine, I have no problem with that), but the whole assignment never asks for the break frequency at the COLLECTOR of Q1, then later asks to produce a Bode plot of the transfer function. See the image I've linked (pardon the sloppy paint job).

2. Relevant equations
f(break) = 1/(2*pi*Ceq*Rth)

3. The attempt at a solution
Well, I thought since the capacitor equivalent at collector 1 (Cmiller(output) + Cce) wasn't actually in parallel (due to the non-bypassed emitter resistors) with the capacitor equivalent at base 2 (Cmiller(input) + Cbe), I would have to treat them separately and produce two different break frequencies. Is this correct?

2. Oct 17, 2007

### programmer

Still nothing?? Well, I've consulted with my colleague, and he told me that my interpretation of the Miller-effect capacitance was incorrect. He told me that they always go to ground (can anyone verify this??)... and he also told me to ignore the capacitances Cbe and Cce (ignore? never!! mwhahaha)...

So looking at this NEW image: you can see now that the "break frequency due to the base of Q2" probably means the two Miller capacitances added together, since now they are truly in parallel. BUT, to be 100% correct, wouldn't I now need a break frequency due to Cmiller(combined), Cce, AND Cbe (3 total)... not to mention the capacitors due to the rest of the circuit.

If I've confused anyone... I'm talking about break frequencies on Bode plots of the gain vs. frequency response where the slope changes (20 dB/decade, 40 dB/decade, and so on), or the -3 dB point on the actual transfer function (at least the first break frequency on the midband gain). Can anyone help me understand this???

3. Oct 31, 2007

### programmer

Well, after extensive research (PSpice and a graphing calculator), I've concluded that adding the Miller effect to the parasitic capacitances (treating them as "parallel") gives nearly the same transfer function results as treating them separately. I guess it's just another one of those electronic approximations that are far too common.
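This is not a solution to the original circuit (whose component values are only in the linked image), but a minimal Python sketch of the arithmetic being discussed: reflect the base-collector capacitance to the input with Miller's theorem, lump it with Cbe, and plug the total into f = 1/(2*pi*Ceq*Rth). All numbers below are made up for illustration.

```python
import math

# Hypothetical illustrative values, not from the original problem statement.
A_v  = -50.0    # midband voltage gain from base to collector
C_mu = 4e-12    # F, base-collector capacitance (Cbc)
C_pi = 15e-12   # F, base-emitter capacitance (Cbe)
R_th = 2.2e3    # ohm, Thevenin resistance seen by the node of interest

# Miller's theorem: Cbc reflected to the input is multiplied by (1 - Av),
# and to the output by (1 - 1/Av).
C_miller_in  = C_mu * (1 - A_v)
C_miller_out = C_mu * (1 - 1 / A_v)

C_eq = C_pi + C_miller_in                 # total capacitance at the base node
f_break = 1 / (2 * math.pi * C_eq * R_th)

print(f"C_eq = {C_eq * 1e12:.1f} pF, f_break = {f_break / 1e3:.0f} kHz")
```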
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993135690689087, "perplexity": 2068.671660061105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814002.69/warc/CC-MAIN-20180222041853-20180222061853-00684.warc.gz"}
https://www.pveducation.org/zh-hans/biblio?f%5Bauthor%5D=296
# Biblio

Export 3 results. Filters: Author is Stefan W. Glunz

2007: A review and comparison of different methods to determine the series resistance of solar cells, Solar Energy Materials and Solar Cells, vol. 91, pp. 1698-1706, 2007.

2001: Degradation of carrier lifetime in Cz silicon solar cells, Solar Energy Materials and Solar Cells, vol. 65, pp. 219-229, 2001.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709121942520142, "perplexity": 1814.9287345952898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00039.warc.gz"}
https://ian-r-rose.github.io/bus-bunching-1.html
### Buses are bosons, or How I learned to stop worrying and love AC Transit (Part one) If you have spent any amount of time using mass transit, you know the frustration of waiting for the better part of an hour for a bus to arrive, only to see two or three of them roll up in quick succession. This phenomenon is a common enough problem that it has a name: "bus bunching": In this two-part series, we'll investigate bus bunching by making a mathematical model of a bus route. In this first part, we'll construct the model and find it's equilibrium solution. In the second part, we'll demonstrate the inevitability for that model to bunch. Let's start by constructing a model for the speed of a single bus. We will assume that the bus has a fixed route, on which it travels all day. That route may be on any number of different streets, go wending through different neighborhoods, and generally make very little sense (like my beloved 12 line). However, if it travels back and forth on this same route, we can model it as a loop, and its position on that loop can be mapped to an angle on a circle $$\theta$$. We can then identify the speed of the bus with the time derivative of $$\theta$$. The simplest model for $$d\theta/ dt$$ is for the bus to travel at a constant speed $$v_0$$: $$\frac{d \theta}{d t} = v_0$$ Or, expressed in a simulation: Now, this model isn't very interesting. There is only a single bus, and it is traveling at a fixed speed, so it has no hope of exhibiting the kind of bunching behavior that we want to explain. A typical mass transit route operates multiple buses with a given headway, the distance (or time) between successive vehicles on the route. If everything is operating according to the plan, the headway from one bus to the next should be approximately constant, with possible scheduled variations depending on the time of day (such as increased frequency during rush hour, or decreased frequency at night). We can increase the complexity of our model adding some more buses so that there are $$N$$ on the route: $$\frac{d \theta_n}{d t} = v_0 \label{constant}$$ In this equation the subscript indicates the $$n$$th bus out of $$N$$, so a simulation with five buses looks like this: Okay, so this is starting to more closely resemble a bus route. However, the buses are still moving at a constant speed, and have no effect on each other. In order for our model to exhibit the richer characteristics of a system that can bunch, there must be some way for their speed to be a function of conditions on the road. There are many factors that can control the speed of a bus traveling through town, including traffic, construction, scheduled layovers, and the number of passengers. In order to keep the model simple, we will focus on the last factor: the number of passengers who board and exit the bus. A traveling bus constantly picks up and drops off passengers as it makes its way around its route. This process takes time (as anyone who has watched a passenger fumble with change upon boarding knows). A bus that boards and deposits more passengers will, in general, make slower progress along its route. Many things affect the number of passengers boarding a given bus, including time-of-day, scheduling, and population density. In order to keep the model as simple as possible, we will ignore those and concentrate on a single factor: the amount of time since the previous bus. We will assume that as more time passes, more passengers arrive at a bus stop for pickup. 
If a bus falls behind schedule, more people will have arrived at each stop, meaning that it will be further slowed down by the excess passengers. In the following analysis, we will use the distance between buses as a proxy for the number of passengers that need to be picked up. We need to augment our model to account for this slowing-down behavior. The expression for speed in equation \eqref{constant} is a constant, so the next-simplest expression is to make it linear in the distance between buses (our proxy for the number of passengers): $$\frac{d\theta_n}{d t} = v_0 \left[ 1 - \gamma (\theta_{n+1} - \theta_n) \right] \label{evolution}$$ In this equation, a bus picking up no passengers travels at $$v_0$$ (which happens if there has been no time for them to accumulate since the previous bus). As the distance between a bus and the one ahead of it increases, the speed of the bus slows down, reflecting the additional time spent boarding and disembarking. The dimensionless parameter $$\gamma$$ determines how sensitive the bus speeds are to differences in headway. Equation \eqref{evolution} is a set of ordinary differential equations (one equation for each of the $$N$$ buses). It will be the primary evolution equation for our system of buses, which we will analyze by answering the following two questions. 1. Is there an equilibrium solution to these equations? That is to say, is there a solution that does not evolve in time? 2. If there is an equilibrium solution, is it stable? A stable solution, when perturbed, will return to the equilibrium. An unstable one will get further and further from equilibrium until the buses are bunched. Strictly speaking, an equilibrium solution does not exist for the system of equations as described: as long as the buses have a nonzero velocity, their positions will evolve in time. However, with a slight reframing of the question it makes sense to talk about an equilibrium: is there a configuration for which the bus velocities are constant, and that the distance between them (headways) are not changing? In a coordinate system traveling with the buses at equilibrium speed the solution to the system would then look like this: It seems intuitive that an equilibrium solution, if it exists, should have the buses equally spaced, so let's start looking for a solution of that form. Let's further guess that the equilibrium velocity is the base bus speed $$v_0$$. A change of coordinates makes this system a bit easier to reason about. Let's boost ourselves into moving a coordinate system $$\psi$$, defined by: $$\psi_n \equiv \theta_n - v_0 t$$ From this we can also get the relations $$\frac{d\theta_n}{d t} = \frac{d \psi_n}{d t} + v_0$$ $$\theta_n = \psi_n + v_0 t$$ Substituting these into equation \eqref{evolution}, we get the governing equations in terms of $$\psi_n$$: $$\frac{d \psi_n}{d t} + v_0 = v_0 \left[ 1 - \gamma (\psi_{n+1} - \psi_n) \right]$$ $$\frac{d \psi_n}{d t} = v_0 \gamma (\psi_{n} - \psi_{n+1})$$ When the buses are equally spaced around the loop, then the distance between them is the whole loop length divided between the number of buses, or $$\psi_{n+1} - \psi_n = 2 \pi/N$$, which makes the governing equations in the $$\psi$$ coordinates $$\frac{d \psi_n}{d t} = \frac{ 2 \pi v_0 \gamma }{N}$$ Unless the interaction term $$\gamma$$ is zero, the time evolution of $$\psi_n$$ is nonzero, making this configuration a non-equilibrium solution to the system. Therefore, $$v_0$$ is not the equilibrium velocity. 
This should make sense, as we defined $$v_0$$ to be the speed of the bus in the absence of any delays due to loading and unloading of passengers. When we include that delay, the buses will be slower. Instead, let's construct a speed for buses that takes into account the delay due to passengers. Again we presume that the buses are equally spaced, such that the distance between them is $$2 \pi/N$$. Then, given the evolution equation \eqref{evolution}, we can calculate the speed $$v_e$$: $$v_e = \frac{d \theta_n}{d t} = v_0 \left[ 1 - \frac{2 \pi \gamma}{N} \right]$$ Let's boost into a new coordinate system $$\phi$$, defined by $$\phi_n \equiv \theta_n - v_e t$$ Substituting this into equation \eqref{evolution}, we find $$\frac{d \phi_n}{d t} + v_e = v_0 \left[ 1 - \gamma \left(\phi_{n+1} - \phi_n \right) \right]$$ As before, when the buses are equally spaced, $$\phi_{n+1} - \phi_n = 2 \pi/N$$: $$\frac{d \phi_n}{d t} + v_e = v_0 \left[ 1 - \frac{2 \pi \gamma}{N} \right]$$ The right-hand side is exactly $$v_e$$, so we can subtract it from both sides to get $$\frac{d \phi_n}{d t} = 0$$ This is exactly what we wanted: in the $$\phi$$ coordinate system, the positions of the buses are constant in time, so equally-spaced buses are all in equilibrium. In order to get a feel for the equilibrium solution, you can experiment with this interactive simulation, which shows the buses traveling at their equilibrium speed and spacing (in the $$\phi$$ coordinate system that moves with them). You can see that as the number of buses increases and the headway between the buses gets smaller, the equilibrium speed increases, reflecting the decreased number of passengers each has to pick up. At this point we have answered the first of the two above questions: there is an equilibrium solution. In the next installment of this series, we are going to answer the second question: is this equilibrium solution stable? (Spoiler: it's not.)
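A quick numerical check of the equilibrium claim, separate from the interactive simulations embedded in the post: integrate equation \eqref{evolution} with forward Euler from an equally spaced start and compare the observed speed with $$v_e = v_0\left(1 - 2\pi\gamma/N\right)$$. The step size and parameter values below are arbitrary, and the gap bookkeeping assumes the buses keep their initial ordering around the loop.

```python
import numpy as np

# Forward-Euler integration of d(theta_n)/dt = v0 * (1 - gamma * (theta_{n+1} - theta_n))
N, v0, gamma = 5, 1.0, 0.1
dt, steps = 0.001, 20000

theta0 = 2 * np.pi * np.arange(N) / N          # equally spaced around the loop
theta = theta0.copy()
for _ in range(steps):
    gap = np.roll(theta, -1) - theta           # theta_{n+1} - theta_n
    gap[-1] += 2 * np.pi                       # the last bus follows the first, one lap ahead
    theta = theta + dt * v0 * (1 - gamma * gap)

v_e = v0 * (1 - 2 * np.pi * gamma / N)         # predicted equilibrium speed
print("observed mean speed:", (theta - theta0).mean() / (dt * steps))
print("predicted v_e      :", v_e)
print("gaps still equal   :", np.allclose(np.diff(np.sort(theta % (2 * np.pi))), 2 * np.pi / N))
```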
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 14, "x-ck12": 0, "texerror": 0, "math_score": 0.8002095222473145, "perplexity": 371.5364216940911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210559.6/warc/CC-MAIN-20180816074040-20180816094040-00427.warc.gz"}
http://mathhelpforum.com/algebra/146971-indices-question.html
1. ## Indices question!

Given that $10^{2n} \times 5^{4-3n} = 2^x \times 5^y$, express y in terms of n.

Answer: $x = 2n$, $y = 4 - n$

I don't know how to begin; how do I get two answers from one given expression? Sorry if I sound dumb, but any help will be greatly appreciated! Thanks in advance.

2. $10^{2n} \cdot 5^{4-3n} = 2^x \cdot 5^y$ $(2 \cdot 5)^{2n} \cdot 5^{4-3n} = 2^x \cdot 5^y$ $2^{2n} \cdot 5^{2n} \cdot 5^{4-3n} = 2^x \cdot 5^y$ $2^{2n} \cdot 5^{4-n} = 2^x \cdot 5^y$ finish up by equating exponents

3. Oh thank you very much skeeter! I haven't really done this sort of sum for a while, so I completely forgot about that. Oops.

4. Note that if x and y could be any numbers, there would be an infinite number of solutions. This problem is assuming that x and y are integers.

5. Hi, I'm stuck on a question, can anyone help? Simplify 9^(-1/2) x 8^(2/3).

6. (Removed as author of post #5 started a new thread.)
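A quick way to sanity-check the equating-exponents answer is exact rational arithmetic (a small Python sketch, not part of the thread):

```python
from fractions import Fraction

def lhs(n):
    # 10^(2n) * 5^(4-3n), computed exactly
    return Fraction(10) ** (2 * n) * Fraction(5) ** (4 - 3 * n)

def rhs(n):
    # 2^x * 5^y with x = 2n and y = 4 - n
    return Fraction(2) ** (2 * n) * Fraction(5) ** (4 - n)

for n in range(1, 10):
    assert lhs(n) == rhs(n)
print("x = 2n, y = 4 - n checks out for n = 1..9")
```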
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485939741134644, "perplexity": 523.9293991254287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189903.83/warc/CC-MAIN-20170322212949-00115-ip-10-233-31-227.ec2.internal.warc.gz"}
http://stats.stackexchange.com/questions/44769/understanding-p-value/44859
# Understanding p-value I know that there are lots of materials explaining p-value. However the concept is not easy to grasp firmly without further clarification. Here is the definition of p-value from Wikipedia: The p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. (http://en.wikipedia.org/wiki/P-value) My first question pertains to the expression "at least as extreme as the one that was actually observed." My understanding of the logic underlying the use of p-value is the following: If the p-value is small, it's unlikely that the observation occurred assuming the null hypothesis and we may need an alternative hypothesis to explain the observation. If the p-value is not so small, it is likely that the observation occurred only assuming the null hypothesis and the alternative hypothesis is not necessary to explain the observation. So if someone wants to insist on a hypothesis he/she has to show that the p-value of the null hypothesis is very small. With this view in mind, my understanding of the ambiguous expression is that p-value is $\min[P(X<x),P(x<X)]$, if the PDF of the statistic is unimodal, where $X$ is the test statistic and $x$ is its value obtained from the observation. Is this right? If it is right, is it still applicable to use the bimodal PDF of the statistic? If two peaks of the PDF are separated well and the observed value is somewhere in the low probability density region between the two peaks, which interval does the p-value give the probability of? The second question is about another definition of p-value from Wolfram MathWorld: The probability that a variate would assume a value greater than or equal to the observed value strictly by chance. (http://mathworld.wolfram.com/P-Value.html) I understood that the phrase "strictly by chance" should be interpreted as "assuming a null hypothesis". Is that right? The third question regards the use of "null hypothesis". Let's assume that someone wants to insist that a coin is fair. He expresses the hypothesis as that relative frequency of heads is 0.5. Then the null hypothesis is "relative frequency of heads is not 0.5." In this case, whereas calculating the p-value of the null hypothesis is difficult, the calculation is easy for the alternative hypothesis. Of course the problem can be resolved by interchanging the role of the two hypotheses. My question is that rejection or acceptance based directly on the p-value of the original alternative hypothesis (without introducing the null hypothesis) is whether it is OK or not. If it is not OK, what is usual workaround for such difficulties when calculating the p-value of a null hypothesis? I posted a new question that is more clarified based on the discussion in this thread. - Of possible interest: Is there an error in the one-sided binomial test in R? –  user10525 Nov 30 '12 at 13:17 You have caught a subtlety that often goes unrecognized: "more extreme" needs to be measured in terms of relative likelihood of the alternative hypothesis rather than in the obvious (but not generally correct) sense of being further out in the tail of the null sampling distribution. This is explicit in the formulation of the Neyman-Pearson Lemma, which is used to justify many hypothesis tests and to determine their critical regions (and whence their p-values). Thinking this through will help answer your first question. 
–  whuber Nov 30 '12 at 14:33 As I recall, the Neyman-Pearson Lemma is optimal for simple vs. simple hypothesis tests (Ho: mu=mu_0, Ha: mu=mu_a). For composite tests (Ho: mu=mu_0, Ha: mu>mu_a) there is an alternative test. –  RobertF Dec 3 '12 at 15:22 In the immortal words of David W. Hogg, "Holy shit, p-values are confusing!" –  abaumann May 28 '13 at 16:39 You have to think at the concept of extreme in terms of probability of the test statistics, not in terms of its value or the value of the random variable being tested. I report the following example from Christensen, R. (2005). Testing Fisher, Neyman, Pearson, and Bayes. The American Statistician, 59(2), 121–126 $$\phantom{(r\;|\;\theta=0}r\; | \quad 1 \quad \quad 2 \quad \quad 3 \quad \quad 4\\ p(r\;|\;\theta=0) \; |\; 0.980\;0.005\; 0.005\; 0.010\\ \quad p\;\mathrm{value} \; \; | \;\; 1.0 \quad 0.01 \quad 0.01 \;\; 0.02$$ Here $r$ are the observations, the second line is the probability to observe a given observation under the null hypothesis $\theta=0$, that is used here as test statistics, the third line is the $p$ value. We are here in the framework of Fisherian test: there is one hypothesis ($H_0$, in this case $\theta=0$) under which we want to see whether the data are weird or not. The observations with the smallest probability are 2 and 3 with 0.5% each. If you obtain 2, for example, the probability to observe something as likely or less likely ($r=2$ and $r=3$) is 1%. The observation $r=4$ does not contribute to the $p$ value, although it's further away (if an order relation exists), because it has higher probability to be observed. This definition works in general, as it accommodates both categorical and multidimensional variables, where an order relation is not defined. In the case of a ingle quantitative variable, where you observe some bias from the most likely result, it might make sense to compute the single tailed $p$ value, and consider only the observations that are on one side of the test statistics distribution. I disagree entirely with this definition from Mathworld. I have to say that I'm not completely sure I understood your question, but I'll try to give a few observations that might help you. In the simplest context of Fisherian testing, where you only have the null hypothesis, this should be the status quo. This is because Fisherian testing works essentially by contradiction. So, in the case of the coin, unless you have reasons to think differently, you would assume it is fair, $H_0: \theta=0.5$. Then you compute the $p$ value for your data under $H_0$ and, if your $p$ value is below a predefined threshold, you reject the hypothesis (proof by contradiction). You never compute the probability of the null hypothesis. With the Neyman-Pearson tests you specify two alternative hypotheses and, based on their relative likelihood and the dimensionality of the parameter vectors, you favour one or another. This can be seen, for example, in testing the hypothesis of biased vs. unbiased coin. Unbiased means fixing the parameter to $\theta=0.5$ (the dimensionality of this parameter space is zero), while biased can be any value $\theta \neq 0.5$ (dimensionality equal to one). This solves the problem of trying to contradict the hypothesis of bias by contradiction, which would be impossible, as explained by another user. Fisher and NP give similar results when the sample is large, but they are not exactly equivalent. Here below a simple code in R for a biased coin. 
n <- 100                      # trials
p_bias <- 0.45                # the coin is biased
k <- as.integer(p_bias * n)   # successes

# value obtained by plugging in the MLE of p, i.e. k/n = p_bias
lambda <- 2 * n * log(2) + 2 * k * log(p_bias) + 2 * (n-k) * log(1. - p_bias)

p_value_F <- 2 * pbinom(k, size=n, prob=0.5)   # p-value under Fisher test
p_value_NP <- 1 - pchisq(q=lambda, df=1)       # p-value under Neyman-Pearson

binom.test(c(k, n-k))   # equivalent to Fisher

-

+1 for pointing out a great article I didn't know about. (Also for some much needed skepticism about the utility of Mathworld's view of statistics). –  conjugateprior Dec 1 '12 at 12:44

Thank you very much! So the p-value is $\int_{x : f(x) \le k} f$, where $f$ is the PDF of a test statistic and $k$ is the observed value of the statistic. Thank you again. –  JDL Dec 1 '12 at 13:01

Regarding the third answer, what is proved in your answer is unfairness of the coin, because the fairness assumption is rejected. On the contrary, to prove fairness of the coin by contradiction, I have to assume unfairness $\theta \neq 0.5$ and calculate the p-value of my data. How can I do it? My point is that the difficulty originates from the $\neq$ sign of the unfairness assumption. Do I have to introduce some tolerance level for fairness, say $0.4 < \theta < 0.6$, and calculate the p-value in terms of $\theta$ and integrate it over $0 < \theta < 0.4$ and $0.6 < \theta < 1$? –  JDL Dec 1 '12 at 13:02

One more question. This link explains "one-sided" p-value. It says the one-sided p-value answers questions like "null hypothesis, that two populations really are the same ... what is the chance that randomly selected samples would have means as far apart as (or further than) observed in this experiment with the specified group having the larger mean?" Is it an appropriate use of one-sided p-value? I think the null hypothesis itself should be expressed as an inequality in this case (instead of equality and a one-sided test). –  JDL Dec 1 '12 at 15:05

@Zag, I disagree rather with this answer: you don't have to think of the concept of extreme in terms of probability. Better to say that in this example the probability under the null is being used as the test statistic - but that's not mandatory. For example, if the likelihood ratio, as mentioned by whuber, is used as a test statistic, it will not in general put possible samples in the same order as will probability under the null. Other statistics are chosen for maximum power against a specified alternative, or all alternatives, or for high power against a vaguely defined set. –  Scortchi Dec 1 '12 at 16:20
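For what it's worth, the four-outcome table in the answer above can be checked mechanically. A small Python sketch (the names are ours) of the rule "sum the null probabilities of all outcomes at most as likely as the observed one":

```python
# Toy check of the discrete p-value rule used in the example above.
p_null = {1: 0.980, 2: 0.005, 3: 0.005, 4: 0.010}

def p_value(r, probs=p_null):
    # Total null probability of outcomes no more likely than the observed r
    return sum(p for p in probs.values() if p <= probs[r])

print({r: p_value(r) for r in p_null})
# p-values: 1.0, 0.01, 0.01, 0.02 (up to floating-point rounding)
```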
There are situations (like Zag's example) where any other way would seem perverse (without more information about what $r$ measures, what kinds of discrepancies with $H_0$ are of most interest, &c.), but often other criteria are used. So you could have a bimodal PDF for the test statistic & still test $H_0$ using the formula above. (2) Yes, they mean under $H_0$. (3) A null hypothesis like "The frequency of heads is not 0.5" is no use because you would never be able to reject it. It's a composite null including "the frequency of heads is 0.49999999", or as close as you like. Whether you think beforehand the coin's fair or not, you pick a useful null hypothesis that bears on the problem. Perhaps more useful after the experiment is to calculate a confidence interval for the frequency of heads that shows you either it's clearly not a fair coin, or it's close enough to fair, or you need to do more trials to find out. An illustration for (1): Suppose you're testing the fairness of a coin with 10 tosses. There are $2^{10}$ possible results. Here are three of them: $\mathsf{HHHHHHHHHH}\\ \mathsf{HTHTHTHTHT}\\ \mathsf{HHTHHHTTTH}$ You'll probably agree with me that the first two look a bit suspicious. Yet the probabilities under the null are equal: $\mathrm{Pr}(\mathsf{HHHHHHHHHH}) = \frac{1}{1024}\\ \mathrm{Pr}(\mathsf{HTHTHTHTHT}) = \frac{1}{1024}\\ \mathrm{Pr}(\mathsf{HHTHHHTTTH}) = \frac{1}{1024}$ To get anywhere you need to consider what types of alternative to the null you want to test. If you're prepared to assume independence of each toss under both null & alternative (& in real situations this often means working very hard to ensure experimental trials are independent), you can use the total count of heads as a test statistic without losing information. (Partitioning the sample space in this way is another important job that statistics do.) So you have a count between 0 and 10 t<-c(0:10) Its distribution under the null is p.null<-dbinom(t,10,0.5) Under the version of the alternative that best fits the data, if you see (say) 3 out of 10 heads the probability of heads is $\frac{3}{10}$, so p.alt<-dbinom(t,10,t/10) Take the ratio of the probability under the null to the probability under the alternative (called the likelihood ratio): lr<-p.alt/p.null Compare with plot(log(lr),p.null) So for this null, the two statistics order samples the same way. If you repeat with a null of 0.85 (i.e. testing that the long-run frequency of heads is 85%), they don't. p.null<-dbinom(t,10,0.85) plot(log(lr),p.null) To see why plot(t,p.alt) Some values of $t$ are less probable under the alternative, & the likelihood ratio test statistic takes this into account. NB this test statistic will not be extreme for $\mathsf{HTHTHTHTHT}$ And that's fine - every sample can be considered extreme from some point of view. You choose the test statistic according to what kind of discrepancy to the null you want to be able to detect. ... Continuing this train of thought, you can define a statistic that partitions the sample space differently to test the same null against the alternative that one coin toss influences the next one. Call the number of runs $r$, so that $\mathsf{HHTHHHTTTH}$ has $r=6$: $\mathsf{HH}\ \mathsf{T}\ \mathsf{HHH}\ \mathsf{TTT}\ \mathsf{H}$ The suspicious sequence $\mathsf{HTHTHTHTHT}$ has $r=10$. So does $\mathsf{THTHTHTHTH}$ while at the other extreme $\mathsf{HHHHHHHHHH}\\ \mathsf{TTTTTTTTTT}$ have $r=1$. 
Using probability under the null as the test statistic (the way you like) you can say that the p-value of the sample $\mathsf{HTHTHTHTHT}$ is therefore $\frac{4}{1024}=\frac{1}{256}$. What's worthy of note, comparing this test to the previous, is that even if you stick strictly to the ordering given by probability under the null, the way in which you define your test statistic to partition the sample space is dependent on consideration of alternatives. - You say that the definition Pr(T \ge t; H_0) can be applicable to any multimodal (of course, including bimodal) PDF of a test statistic. Then, you and Zag give different p-values for multimodal PDF of a test statistic. IMHO, Zag's definition is more resonable because the role of p-value is to quantify how likely (or weird) the observation is under the null hypothesis, as he pointed. What is your rationale for the definition Pr(T \ge t; H_0) ? –  JDL Dec 2 '12 at 14:41 @JDL, that just is the definition of a p-value. The question then becomes how to find a 'good' test statistic (& how to define 'good'). Sometimes the probability under the null (or any function of the data that gives the same ordering) is used as the test statistic. Sometimes there are good reasons to choose others, which fill up a lot of space in books on theoretical statistics. I think it's fair to say they involve explicit or implicit consideration of alternatives. ... –  Scortchi Dec 3 '12 at 14:25 @JDL, ... And if a particular observation has low probability under both null & alternative it seems reasonable not to regard it as extreme. –  Scortchi Dec 3 '12 at 14:27 Thank you for your answers, @Scortchi. I posted a new question and have seen your comments just now after the posting. Anyway, I'm still not clear about the definition. Thank you again for your kindly answers. –  JDL Dec 3 '12 at 14:47 I added an illustration –  Scortchi Dec 3 '12 at 15:54
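The runs example at the end can be verified by brute force over all $2^{10}$ sequences; a short Python sketch (not part of the original answer):

```python
from itertools import product
from fractions import Fraction

def runs(seq):
    # Number of maximal blocks of identical symbols
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

# Distribution of the runs statistic r over all 2^10 equally likely sequences
counts = {}
for seq in product("HT", repeat=10):
    counts[runs(seq)] = counts.get(runs(seq), 0) + 1
p_r = {r: Fraction(c, 2 ** 10) for r, c in counts.items()}

r_obs = runs("HTHTHTHTHT")                       # r = 10
p_val = sum(p for p in p_r.values() if p <= p_r[r_obs])
print(r_obs, p_val)                              # 10 1/256
```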
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381672739982605, "perplexity": 524.0715975494113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00135-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.ijert.org/comparison-of-various-thresholding-techniques-of-image-denoising
Comparison of Various Thresholding Techniques of Image Denoising

DOI: 10.17577/IJERTV2IS90812

Shivani Mupparaju, B Naga Venkata Satya Durga Jahnavi, Department of ECE, VNR VJIET, Hyderabad, A.P, India

Abstract. Denoising using wavelets attempts to remove the noise present in a signal while preserving the signal characteristics, regardless of its frequency content. It is handled in three steps: a linear forward wavelet transform, a nonlinear thresholding step and a linear inverse wavelet transform. Wavelet denoising differs from smoothing: smoothing removes the high frequencies and retains the lower ones, whereas wavelet shrinkage is a non-linear process, which distinguishes it from purely linear denoising techniques. Its efficacy depends on the choice of a thresholding parameter and on how that threshold is determined, and so far there is no universally best threshold-determination technique. We therefore compare the thresholding techniques SureShrink, BayesShrink and VisuShrink to determine the best one for image denoising.

1. Introduction

During acquisition and transmission, images are corrupted by additive noise; image denoising aims to remove this noise while keeping the important signal features. In recent years, wavelet thresholding and threshold selection for signal denoising have gained interest because the wavelet transform provides an appropriate basis for separating the noise from the image signal. The motivation is that the wavelet transform compacts energy well: small coefficients are more likely due to noise, while large coefficients carry important signal features. The small coefficients can be thresholded without affecting the main features of the image. Thresholding is a simple non-linear technique which operates on one wavelet coefficient at a time: each coefficient is compared against a threshold; if it is smaller than the threshold it is set to zero, otherwise it is kept or modified. Replacing the small coefficients by zero and applying the inverse wavelet transform to the result may lead to a reconstruction that retains the essential signal characteristics with less noise.

2. Objectives and Tools Employed

2.1 Objective of the project: study the thresholding techniques SureShrink, VisuShrink and BayesShrink and determine the best one for image denoising.

2.2 Tools used: Software: MATLAB.

3. Types of Noise

3.1 Gaussian Noise. Because of its mathematical tractability in both the spatial and frequency domains, Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability is so convenient that it often results in Gaussian models being used in situations in which they are marginally applicable at best. The probability density function of a Gaussian random variable z is
$$p(z)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(z-\bar z)^2/2\sigma^2},$$ (1.1)
where z represents intensity, $\bar z$ is the mean (average) value of z, and $\sigma$ is its standard deviation. The standard deviation squared, $\sigma^2$, is called the variance of z.

3.2 Rayleigh Noise. The probability density function of Rayleigh noise is
$$p(z)=\frac{2}{b}(z-a)\,e^{-(z-a)^2/b}\ \text{for } z\ge a,\qquad p(z)=0\ \text{for } z<a.$$ (1.2)
The mean and variance of this density are given by
$$\bar z=a+\sqrt{\pi b/4}$$ (1.3)
and
$$\sigma^2=\frac{b(4-\pi)}{4}.$$ (1.4)
3.3 Erlang (Gamma) Noise. The probability density function of Erlang noise is
$$p(z)=\frac{a^b z^{b-1}}{(b-1)!}\,e^{-az}\ \text{for } z\ge 0,\qquad p(z)=0\ \text{for } z<0,$$ (1.5)
where the parameters are such that a > 0 and b is a positive integer. The mean and variance of this density are given by
$$\bar z=\frac{b}{a}$$ (1.6)
and
$$\sigma^2=\frac{b}{a^2}.$$ (1.7)

3.4 Exponential Noise. The probability density function of exponential noise is
$$p(z)=a\,e^{-az}\ \text{for } z\ge 0,\qquad p(z)=0\ \text{for } z<0,$$ (1.8)
where a > 0. The mean and variance of this density function are
$$\bar z=\frac{1}{a}$$ (1.9)
and
$$\sigma^2=\frac{1}{a^2}.$$ (1.10)

3.5 Uniform Noise. The probability density function of uniform noise is
$$p(z)=\frac{1}{b-a}\ \text{if } a\le z\le b,\qquad p(z)=0\ \text{otherwise}.$$ (1.11)
The mean of this density function is given by
$$\bar z=\frac{a+b}{2}$$ (1.12)
and its variance by
$$\sigma^2=\frac{(b-a)^2}{12}.$$ (1.13)

4. Denoising

In many cases, additive noise is evenly distributed over the frequency domain (i.e., white noise), whereas an image contains mostly low-frequency information. The noise is dominant at high frequencies, so its effects can be reduced using a low-pass filter, implemented either as a frequency-domain filter or as a spatial filter. Often a spatial filter is preferred, as it is computationally less expensive than a frequency filter. Denoising can be done in various domains and by various methods: 1. spatial domain, 2. frequency domain, 3. wavelet domain and 4. curvelet domain.

5. Thresholding

1. Motivation for wavelet thresholding. The plot of wavelet coefficients suggests that small coefficients are dominated by noise, while coefficients with a large absolute value carry more signal information. Replacing noisy coefficients (small coefficients below a certain threshold value) by zero and taking an inverse wavelet transform may lead to a reconstruction that has less noise. Stated more precisely, this thresholding idea is based on the following assumptions: 1. The decorrelating property of a wavelet transform creates a sparse signal: most of the coefficients are zero or close to zero when they are left untouched. 2. Noise is spread out equally over all coefficients. 3. The noise level is not too high, so that we can distinguish the signal wavelet coefficients from the noisy ones. As it turns out, this method is indeed effective, and thresholding is a simple and efficient method for noise reduction. Further, inserting zeros creates more sparsity in the wavelet domain, and here we see a link between wavelet denoising and compression.

2. Hard and soft thresholding. Hard and soft thresholding with threshold $\lambda$ are defined as follows. The hard thresholding operator is defined as
$$D(U,\lambda)=U\ \text{for all } |U|>\lambda,\qquad D(U,\lambda)=0\ \text{otherwise}.$$ (2.1)
The soft thresholding operator is defined as
$$D(U,\lambda)=\mathrm{sgn}(U)\,\max(0,|U|-\lambda).$$ (2.2)
Hard thresholding is a keep-or-kill procedure and is more intuitively appealing. The alternative, soft thresholding, shrinks coefficients above the threshold in absolute value. While hard thresholding may seem good, the continuity of soft thresholding has some advantages. Moreover, hard thresholding does not even work with some algorithms such as the GCV procedure. At times, pure noise coefficients may pass the hard threshold and appear as annoying blips in the output. Soft thresholding shrinks these false structures.
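For concreteness, the two operators in (2.1) and (2.2) can be written as a short NumPy sketch (illustrative only; the function names are not from any library):

```python
import numpy as np

def hard_threshold(u, lam):
    # D(U, lambda) = U if |U| > lambda, else 0          (keep-or-kill)
    return np.where(np.abs(u) > lam, u, 0.0)

def soft_threshold(u, lam):
    # D(U, lambda) = sgn(U) * max(|U| - lambda, 0)      (shrink towards zero)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([-3.0, -0.4, 0.2, 1.5, 4.0])
print(hard_threshold(u, 1.0))   # -> [-3., 0., 0., 1.5, 4.]
print(soft_threshold(u, 1.0))   # -> [-2., 0., 0., 0.5, 3.]
```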
3. Threshold determination. Threshold determination is an important issue in image denoising. A small threshold gives a result close to the input, but the result may still contain noise, whereas a large threshold produces a signal with a large number of zero coefficients. This results in a smooth, noiseless signal; the smoothness, however, destroys details and in image processing may cause blur and artifacts. To investigate the effect of threshold selection, we performed wavelet denoising using hard and soft thresholds on signals popular in the wavelet literature, Blocks and Doppler. The setup is as follows: 1. The original signals have length 2048. 2. We step through the thresholds from 0 to 5 in steps of 0.2, and at each step denoise the noisy signals by both hard and soft thresholding with that threshold. 3. For each threshold, the MSE of the denoised signal is calculated. 4. The above steps are repeated for different orthogonal bases, namely Haar and Daubechies 2, 4 and 8. The results are tabulated in Table 1, which reports the best thresholds for Blocks and Doppler under both hard and soft thresholding for the different filters.

4. Comparison with the universal threshold. The threshold $\lambda_{UNIV}=\sigma\sqrt{2\ln N}$ (N being the signal length, $\sigma^2$ being the noise variance) is well known in the wavelet literature as the universal threshold. It is the optimal threshold in the sense of minimizing the cost function of the difference between the function and its soft-thresholded version in the $L_2$ norm. In our case, N = 2048 and $\sigma = 1$, therefore theoretically $\lambda_{UNIV}=\sqrt{2\ln(2048)}\cdot 1\approx 3.905$. As seen from the table, the best empirical thresholds for both hard and soft thresholding are much lower than this value, independent of the wavelet used. It therefore seems that the universal threshold alone is not useful to determine a threshold. However, it is useful for obtaining a starting value when nothing is known of the signal condition. One can surmise that the universal threshold may give a better estimate for the soft threshold if the number of samples is larger.

5. Image denoising using thresholding. An image is often corrupted by noise in its acquisition or transmission. The underlying concept of denoising in images is similar to the 1-D case: the goal is to remove the noise while retaining the important signal features as much as possible. The noisy image is represented as a two-dimensional matrix $\{x_{ij}\}$, $i,j=1,\dots,N$, and its noisy version is modeled as
$$y_{ij}=x_{ij}+n_{ij},\qquad i,j=1,\dots,N,$$ (2.3)
where the $\{n_{ij}\}$ are i.i.d. $N(0,\sigma^2)$. We can use the same principles of thresholding and shrinkage to achieve denoising as for 1-D signals. The problem again boils down to finding an optimal threshold such that the mean squared error between the signal and its estimate is minimized. The wavelet decomposition of an image is done as follows: in the first level of decomposition, the image is split into 4 sub-bands, namely the HH, HL, LH and LL sub-bands. The HH sub-band gives the diagonal details of the image, the HL sub-band gives the horizontal features, while the LH sub-band represents the vertical structures. The LL sub-band is the low-resolution residual consisting of low-frequency components, and it is this sub-band which is further split at higher levels of decomposition. The different methods for denoising that we investigate differ only in the selection of the threshold. The basic procedure remains the same: 1. Calculate the DWT of the image. 2. Threshold the wavelet coefficients. 3. Compute the IDWT to get the denoised estimate. Soft thresholding is used throughout, since it is found to yield visually more pleasing images, whereas hard thresholding introduces artifacts in the recovered images. We now describe the three thresholding techniques (VisuShrink, SureShrink and BayesShrink) and investigate their performance for denoising various standard images.

1. VisuShrink: VisuShrink was introduced by Donoho.
It can be defined as $T=\sigma\sqrt{2\ln I}$, where $\sigma^2$ is the noise variance and I is the number of pixels in the image. This threshold exceeds the maximum of I independent $N(0,\sigma^2)$ values with probability approaching 1 as the number of pixels in the image increases; therefore, with high probability, a pure noise signal is estimated as being identically zero. However, for denoising images, VisuShrink is found to yield an overly smoothed estimate. This is because the universal threshold is derived under the constraint that, with high probability, the estimate should be at least as smooth as the signal. The universal threshold is high for large values of I, killing many signal coefficients along with the noise. Thus, the threshold does not perform well at discontinuities in the signal.

2. SureShrink: Let $\mu=(\mu_i : i=1,\dots,d)$ be a length-d vector, and let $x=\{x_i\}$ (with $x_i$ distributed as $N(\mu_i,1)$) be multivariate normal observations with mean vector $\mu$. Let $\hat\mu=\hat\mu(x)$ be a fixed estimate based on the observations x. SURE (Stein's Unbiased Risk Estimator) is a method for estimating the loss $\|\hat\mu-\mu\|^2$ in an unbiased fashion; here $\hat\mu$ is the soft-threshold estimator, and Stein's result is applied to get an unbiased estimate of the risk. For an observed vector x, which is the set of noisy wavelet coefficients in a sub-band, we find the threshold $t^S$ that minimizes SURE(t; x), i.e.
$$t^S=\arg\min_{0\le t\le\sqrt{2\ln d}}\ \mathrm{SURE}(t;x),$$ (2.4)
where
$$\mathrm{SURE}(t;x)=d-2\cdot\#\{i:|x_i|\le t\}+\sum_{i=1}^{d}\min(|x_i|,t)^2$$ (2.5)
and soft thresholding, $\hat\mu_i=\eta_t(x_i)=\mathrm{sgn}(x_i)\max(|x_i|-t,0)$, is the thresholding operator. The above optimization problem is computationally straightforward. Without loss of generality, we can reorder x in order of increasing $|x_i|$; then on intervals of t that lie between two values of $|x_i|$, SURE(t) is strictly increasing. Therefore the minimizing value $t^S$ is one of the data values $|x_i|$, of which there are only d, and the threshold can be obtained by direct search.

3. Threshold selection in sparse cases: A drawback of SURE appears in situations of extreme sparsity of the wavelet coefficients. In such cases the noise contributed to the SURE profile by the many coordinates at which the signal is zero swamps the information contributed to the SURE profile by the few coordinates where the signal is nonzero. Consequently, SureShrink uses a hybrid scheme. The idea behind this hybrid scheme is that the loss incurred by the universal threshold tends to be larger than that of the SURE threshold in dense situations, but much smaller in sparse cases. So the threshold is set to the universal threshold $t^F$ in sparse situations and to the SURE threshold $t^S$ otherwise, with sparsity judged from the energy of the coefficients in the sub-band.

4. SURE applied to image denoising: The wavelet decomposition of the noisy image is obtained, and the SURE threshold is determined for each sub-band using the above equations. We choose between this threshold and the universal threshold using the hybrid rule. The expressions above, given for $\sigma=1$, have to be suitably modified according to the noise variance and the variance of the coefficients in the sub-band. The results obtained for the image Lena (512×512 pixels) using SureShrink are shown in the results; the `db4' wavelet was used with 4 levels of decomposition. Clearly, the results are much better than VisuShrink: the sharp features of the image are retained and the MSE is considerably lower. This is because SureShrink is sub-band adaptive: a separate threshold is computed for each detail sub-band.
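A minimal NumPy sketch of the SURE threshold search of (2.4)-(2.5), assuming the coefficients have already been rescaled to unit noise variance; the candidate-threshold shortcut follows the observation above that the minimizer is one of the $|x_i|$:

```python
import numpy as np

def sure(t, x):
    # SURE(t; x) = d - 2*#{i: |x_i| <= t} + sum_i min(|x_i|, t)^2, assuming sigma = 1
    d = x.size
    return d - 2 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(np.abs(x), t) ** 2)

def sure_threshold(x):
    # Candidates are 0 and the |x_i| up to the universal bound sqrt(2 ln d)
    d = x.size
    cands = np.abs(x[np.abs(x) <= np.sqrt(2 * np.log(d))])
    cands = np.concatenate(([0.0], cands))
    return min(cands, key=lambda t: sure(t, x))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5, 1, 20), rng.normal(0, 1, 1000)])  # sparse means + unit noise
print(sure_threshold(x))
```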
5. BayesShrink: In BayesShrink we determine the threshold for each sub-band assuming a generalized Gaussian distribution (GGD). The GGD is given by
$$GG_{\sigma_X,\beta}(x)=C(\sigma_X,\beta)\,\exp\{-[\alpha(\sigma_X,\beta)\,|x|]^{\beta}\},\qquad -\infty<x<\infty,\ \sigma_X>0,\ \beta>0,$$ (2.6)
where
$$\alpha(\sigma_X,\beta)=\sigma_X^{-1}\left[\frac{\Gamma(3/\beta)}{\Gamma(1/\beta)}\right]^{1/2},$$ (2.7)
$$C(\sigma_X,\beta)=\frac{\beta\,\alpha(\sigma_X,\beta)}{2\,\Gamma(1/\beta)}$$ (2.8)
and
$$\Gamma(t)=\int_0^{\infty}e^{-u}u^{t-1}\,du.$$ (2.9)
The parameter $\sigma_X$ is the standard deviation and $\beta$ is the shape parameter. It has been observed that a shape parameter ranging from 0.5 to 1 describes the distribution of coefficients in a sub-band for a large set of natural images. Assuming such a distribution for the wavelet coefficients, we empirically estimate $\beta$ and $\sigma_X$ for each sub-band and try to find the threshold T which minimizes the Bayesian risk, i.e. the expected value of the mean square error, $\tau(T)=E(\hat X-X)^2$, where $\hat X=\eta_T(Y)$, $Y|X\sim N(X,\sigma^2)$ and $X\sim GG_{\sigma_X,\beta}$. The optimal threshold T* is then given by $T^*(\sigma_X,\beta)=\arg\min_T\tau(T)$. It is a function of the parameters $\sigma_X$ and $\beta$. Since there is no closed-form solution for T*, numerical calculation is used to find its value. It is observed that the threshold value set by
$$T_B(\sigma_X)=\frac{\sigma^2}{\sigma_X}$$ (2.10)
is very close to T*.

6. Parameter estimation to determine the threshold: The GGD parameters $\sigma_X$ and $\beta$ need to be estimated to compute $T_B(\sigma_X)$. The noise variance $\sigma^2$ is estimated from the sub-band HH1 by the robust median estimator
$$\hat\sigma=\frac{\mathrm{median}(|Y_{ij}|)}{0.6745},\qquad Y_{ij}\in\text{sub-band }HH_1.$$ (2.11)
The parameter $\beta$ does not explicitly enter into the expression of $T_B(\sigma_X)$; therefore it suffices to estimate directly the signal standard deviation $\sigma_X$. The observation gives Y = X + V, with X and V independent of each other, hence
$$\sigma_Y^2=\sigma_X^2+\sigma^2,$$ (2.12)
where $\sigma_Y^2$ is the variance of Y. Since Y is modeled as zero-mean, $\sigma_Y^2$ can be found empirically by
$$\hat\sigma_Y^2=\frac{1}{n^2}\sum_{i,j=1}^{n}Y_{ij}^2,$$ (2.13)
where n×n is the size of the sub-band under consideration. Thus
$$T_B(\hat\sigma_X)=\frac{\hat\sigma^2}{\hat\sigma_X},$$ (2.14)
where
$$\hat\sigma_X=\sqrt{\max(\hat\sigma_Y^2-\hat\sigma^2,\ 0)}.$$ (2.15)
In the case that $\hat\sigma^2\ge\hat\sigma_Y^2$, $\hat\sigma_X$ is taken to be zero, i.e. $T_B(\hat\sigma_X)$ is taken as $\max|Y_{ij}|$, and all coefficients are set to zero.

The estimated threshold $T_B=\hat\sigma^2/\hat\sigma_X$ is not only nearly optimal but also has an intuitive appeal. The normalized threshold $T_B/\sigma$ is inversely proportional to $\sigma_X$, the standard deviation of X, and proportional to $\sigma$, the noise standard deviation. When $\sigma/\sigma_X\ll 1$, the signal is much stronger than the noise and $T_B/\sigma$ is chosen to be small in order to preserve most of the signal and remove some of the noise; when $\sigma/\sigma_X\gg 1$, the noise dominates and the normalized threshold is chosen to be large to remove the noise, which has overwhelmed the signal. Thus, this threshold choice adapts to both the signal and the noise characteristics as reflected in the parameters $\sigma$ and $\sigma_X$. To summarize, BayesShrink performs soft thresholding with a sub-band-dependent threshold. The reconstruction using BayesShrink is smoother and more visually appealing than the one obtained using SureShrink. This not only validates the approximation of the wavelet coefficients by the GGD but also justifies approximating the threshold by a value independent of $\beta$.

7. Image reconstruction: Image reconstruction is carried out by the following procedure: 1. Upsample the data by a factor of two in all four sub-bands at the coarsest scale. 2. Filter the sub-bands in each dimension. 3. Sum the four filtered sub-bands to reach the low-low sub-band at the next finer scale. This process is repeated until the image is fully reconstructed.
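A compact sketch of the BayesShrink procedure of equations (2.11)-(2.15), written in Python with the PyWavelets package rather than MATLAB; it is meant only to mirror the equations above, not to reproduce the paper's experiments, and the parameter choices in the usage snippet are arbitrary.

```python
import numpy as np
import pywt

def bayes_shrink(noisy, wavelet="db4", level=4):
    """Sub-band adaptive soft thresholding with T = sigma^2 / sigma_x."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Robust noise estimate from the finest diagonal sub-band (HH1): median(|Y|)/0.6745
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                        # keep the low-resolution residual untouched
    for subbands in coeffs[1:]:              # detail sub-bands at each scale, coarse to fine
        new = []
        for Y in subbands:
            sigma_y2 = np.mean(Y ** 2)                         # empirical variance of Y
            sigma_x = np.sqrt(max(sigma_y2 - sigma ** 2, 0))   # signal std, eq. (2.15)
            T = sigma ** 2 / sigma_x if sigma_x > 0 else np.max(np.abs(Y))
            new.append(pywt.threshold(Y, T, mode="soft"))
        out.append(tuple(new))
    return pywt.waverec2(out, wavelet)

# Toy usage on a synthetic image (values are arbitrary):
x = np.linspace(0, 1, 256)
clean = np.outer(np.sin(8 * x), np.cos(8 * x))
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=clean.shape)
denoised = bayes_shrink(noisy)
```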
1. Results

Figure 2: Noisy image. Figure 8: SNR of SureShrink.

2. Conclusion

We have seen that wavelet thresholding is an effective method of denoising noisy signals. We then investigated several soft and hard thresholding schemes, using VisuShrink, SureShrink and BayesShrink, for denoising images. It was found that sub-band adaptive thresholding performs better than universal thresholding. Among these, BayesShrink gave the best results. This validates the assumption that the generalized Gaussian distribution (GGD) is a very good model for the wavelet coefficient distribution in a sub-band. An important point to be noted is that although SureShrink performed worse than BayesShrink, it adapts well to sharp discontinuities in the signal. This was not evident in the natural images that were used for testing, especially when comparing the performance of these algorithms on artificial images with discontinuities (such as medical images). Image denoising using thresholding by wavelets provides an extension for research into fast and robust multi-frame super resolution, exemplar-based inpainting and image feature extraction. The wavelet transform can also be used in the analysis and synthesis of multi-scale models of stochastic processes. The concept of the wavelet transform finds important applications in speech coding, communications, radar, sonar, denoising, edge detection and feature detection.

3. References

1. Digital Image Processing, 3rd edition, by Rafael C. Gonzalez, Richard E. Woods, Pearson Publications.
2. Digital Image Processing using MATLAB, 2nd edition, by Rafael C. Gonzalez.
3. Martin Vetterli, S. Grace Chang, Bin Yu. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing, 9(9):1532-1546, Sep 2000.
4. David L. Donoho. De-noising by soft thresholding. IEEE Transactions on Information Theory, 41(3):613-627, May 1995.
5. Iain M. Johnstone, David L. Donoho. Adapting to unknown smoothness via wavelet shrinkage. Journal of the American Statistical Association, 90(432):1200-1224, Dec 1995.
6. David L. Donoho. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425-455, August 1994.
7. Maarten Jansen. Noise Reduction by Wavelet Thresholding, volume 161. Springer Verlag, United States of America, 1st edition, 2001.
8. Carl Taswell. The what, how and why of wavelet shrinkage denoising. Computing in Science and Engineering, pages 12-19, May/June 2000.
9. Sachin D. Ruikar and Dharmpal D. Doye. Wavelet based image denoising technique. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 3, March 2011.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837519884109497, "perplexity": 1515.2991473903314}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00016.warc.gz"}
https://www.physicsforums.com/threads/inductive-proof-4-n-n-4.547576/
# Inductive proof 4^n ≥ n^4

1. Nov 5, 2011

### IntroAnalysis

1. The problem statement, all variables and given/known data
n is an element of the natural numbers and n ≥ 5; show that 4^n ≥ n^4.

2. Relevant equations
Base case: n = 5: 4^5 = 1024 ≥ 5^4 = 625, as required.

3. The attempt at a solution
Inductive step: assume k is an element of the natural numbers, k ≥ 5, and 4^k ≥ k^4. Then we must show 4^(k+1) ≥ (k+1)^4. What's the trick to showing this?

2. Nov 5, 2011

### SammyS

Staff Emeritus
What have you tried?

3. Nov 5, 2011

### IntroAnalysis

I know that 4^(k+1) = 4(4^k), which helps since we know 4^k ≥ k^4. But (k+1)^4 = k^4 + 4k^3 + 6k^2 + 4k + 1, and I don't know how to break this up to show 4(4^k) ≥ k^4 + 4k^3 + 6k^2 + 4k + 1. Any hints?

4. Nov 5, 2011

### shaon0

RTP: 4^n ≥ n^4 for all n ≥ 5. I'll skip the trivial n = 5 case.
S(k): Assume 4^k ≥ k^4 for some k ≥ 5.
S(k+1): RTP: 4^(k+1) ≥ (k+1)^4.
LHS = 4^(k+1) = 4^k · 4 ≥ k^4 · 4 = 4k^4.
Now consider y = 4x^4 − (x+1)^4; differentiate this and see what you can deduce for x ≥ 5. Remember to restrict y after this to use in the induction; in restricting, take the domain and co-domain to be the integers.

Last edited: Nov 5, 2011

5. Nov 5, 2011

### SammyS

Staff Emeritus
shaon0, do you realize that you are not supposed to give complete solutions?

6. Nov 5, 2011

### shaon0

Yeah, sorry. Uhm, just told by a mod. Deleting the post. Sorry. He's not online, so hopefully he hasn't seen it. I've deleted it and left a hint.

7. Nov 5, 2011

### IntroAnalysis

Actually, I think I found an easier solution. Can someone comment? We previously proved that for all n ≥ 5, 2^n > n^2; in particular, 2^(n+1) > (n+1)^2. Now since 2 > 0 and n + 1 > 0 (since n ≥ 5), we can square both sides and the inequality still holds, so:
(2^(n+1))^2 = 2^(2n+2) = 2^[2(n+1)] = 4^(n+1) > ((n+1)^2)^2 = (n+1)^4.
Hence 4^(n+1) > (n+1)^4, which is what we wanted to prove.

8. Nov 5, 2011

### shaon0

Yes, that's true, but how would you have proven 2^n > n^2? As an extension: k^n > n^k for n, k ∈ Z+.

9. Nov 7, 2011

### IntroAnalysis

Base case n = 5: 2^n (= 32) > n^2 (= 25) is true, as required. Now we assume that the statement is true for some n in the natural numbers, n ≥ 5, and show that it must be true for n + 1. We know n^2 < 2^n and therefore 2n^2 < 2^(n+1). We are now going to use the fact that 2n + 1 < n^2, which is true for all n ≥ 5: 2n + 1 < n^2 implies n^2 + 2n + 1 < 2n^2 < 2^(n+1), and therefore (n+1)^2 < 2^(n+1). We are therefore done if we can demonstrate that 2n + 1 < n^2 for all n ≥ 5. This inequality is equivalent to 2 < (n − 1)^2. The right side is a strictly increasing function of n for n > 1, thus (n − 1)^2 ≥ (5 − 1)^2 = 16 > 2 for all n ≥ 5. This completes the proof of the original inequality.

By the way, what does RTP stand for?

10. Nov 7, 2011

### IntroAnalysis

"Now consider y = 4x^4 − (x+1)^4; differentiate this and see what you can deduce for x ≥ 5."
So y = 4x^4 − (x+1)^4 = 4x^4 − (x^4 + 4x^3 + 6x^2 + 4x + 1).
Then dy/dx = 12x^3 − 12x^2 − 12x − 4 = 12x(x^2 − x − 1) − 4.
For x ≥ 5 (actually for x ≥ 2), dy/dx is positive, and y(5) = 2500 − 1296 > 0; thus 4x^4 must be greater than (x+1)^4 for all x ≥ 5. Is that it?
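For completeness, here is one way to finish the direct induction step that post #3 was aiming at; it consolidates the hints above (a simple bound on (1 + 1/k)^4 plays the role of the derivative argument) rather than quoting any single post:

```latex
% Direct induction step for 4^n >= n^4, n >= 5.
% Assume 4^k >= k^4 for some k >= 5. Then
\[
4^{k+1} \;=\; 4\cdot 4^{k} \;\ge\; 4\,k^{4}
\;\ge\; \Bigl(1+\tfrac{1}{k}\Bigr)^{4} k^{4}
\;=\; (k+1)^{4},
\]
% since (1 + 1/k)^4 <= (1 + 1/5)^4 = 2.0736 < 4 for every k >= 5.
```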
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9244725108146667, "perplexity": 2290.090798754135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804976.22/warc/CC-MAIN-20171118151819-20171118171819-00301.warc.gz"}
http://mathhelpforum.com/geometry/43977-3d-vector-geometry.html
## 3D Vector Geometry My first post, so I would be very grateful if anyone could help... I have a problem concerning the calculation of the intercept points of lines on the surface of a cube. First, consider a set of parallel lines heading towards a cube and travelling in a direction parallel to the x-axis. The y and z separations of the intercept points on the y-z plane (x = 0) of the cube are given by s. I now need to consider the same set of lines, but travelling in an arbitrary direction and intercepting any face of the cube. For a given intercept point, I need to find the (dx, dy, dz) distance, on the surface of the cube, to the intercepts of the adjacent lines. Does anyone have any suggestions?
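The thread does not get an answer, but its first step (finding where each line meets a face of the cube) is a standard line-plane intersection, sketched below in NumPy. The grid of starting points, the cube placement and the variable names are illustrative assumptions rather than anything specified in the post.

```python
import numpy as np

def face_intercept(p0, d, face_axis, face_value):
    """Intersection of the line p0 + t*d with the plane x[face_axis] == face_value.
    Returns None if the line is parallel to that face."""
    if abs(d[face_axis]) < 1e-12:
        return None
    t = (face_value - p0[face_axis]) / d[face_axis]
    return p0 + t * d

# A bundle of parallel lines with spacing s, defined by their start points on a plane
# transverse to the original x-direction, now travelling along an arbitrary unit vector d.
s = 1.0
d = np.array([2.0, 1.0, 0.5])
d = d / np.linalg.norm(d)
starts = np.array([[-10.0, j * s, k * s] for j in range(3) for k in range(3)])

# Intercepts on the face x = 0 of a cube assumed to occupy [0, L]^3.
hits = np.array([face_intercept(p, d, face_axis=0, face_value=0.0) for p in starts])

# Separation vector between a line's intercept and an adjacent line's intercept:
print(hits[1] - hits[0])   # (dx, dy, dz) on that face; dx is ~0 on the x = 0 face
```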
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375634551048279, "perplexity": 249.40691889743866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00158-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quantum-something-or-other.233704/
# Quantum something or other 1. May 7, 2008 ### Dragonfall Suppose $$\mathbf{U}_f(\left| x\right>\left| y\right> )=\left| x\right>\left| y\oplus f(x)\right>$$ denotes the unitary transformation corresponding to some 1-bit function f. I'm guessing here that x is the input register, and y the output register. Now suppose f(0)=0 and f(1)=0. How is it that $$\mathbf{U}_f=\mathbf{1}$$ the 2-Qbit unit operator? Last edited: May 7, 2008 2. May 7, 2008 ### Dragonfall In general, is $$\left| x\right>\left| y\right>$$ a tensor product? 3. May 10, 2008 ### mrandersdk don't know about your first question but in general $$\left| x\right>\left| y\right>$$ means tensor product, we are just too lazy to write it. 4. May 10, 2008 ### HallsofIvy It might be better to post this under "quantum mechanics".
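To answer the opening question directly: when f(0) = f(1) = 0 we have y ⊕ f(x) = y for every basis state, so U_f maps each of the four basis vectors |x⟩|y⟩ to itself and is therefore the 2-qubit identity. Below is a small NumPy sketch of that check; the matrix construction is my own illustration, not something from the thread.

```python
import numpy as np

def u_f(f):
    """4x4 matrix of U_f |x>|y> = |x>|y XOR f(x)> in the basis |00>, |01>, |10>, |11>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            col = 2 * x + y              # input basis index |x>|y>
            row = 2 * x + (y ^ f(x))     # output basis index |x>|y XOR f(x)>
            U[row, col] = 1.0
    return U

print(np.array_equal(u_f(lambda x: 0), np.eye(4)))   # True: f == 0 gives the 2-qubit identity
print(u_f(lambda x: x))                              # CNOT, for comparison
```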
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542967677116394, "perplexity": 4984.835015981558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590711.36/warc/CC-MAIN-20180719070814-20180719090814-00403.warc.gz"}
https://www.physicsforums.com/threads/word-problem-using-derivatives-struggling-with-it.275575/
# Word problem using derivatives - struggling with it

1. Nov 28, 2008

### meredith

1. The problem statement, all variables and given/known data
The equation is PV = c; P = pressure, V = volume, c = constant (also known as Boyle's law).
The question: if volume is decreasing at a rate of 10 cm^3/minute, how fast is the pressure increasing when the pressure is 100 g/cm^2 and the volume is 20 cm^3?

2. Relevant equations
none

3. The attempt at a solution
dV/dt = -10 cm^3/min
dP/dt = ?
equation: PV = c
derivative: dP/dt x dV/dt = 0
dP/dt = -dV/dt
but I know what I'm doing isn't right. Can anyone help me? THANKS!

2. Nov 28, 2008

### Staff: Mentor

The above isn't right. It is not true that d/dt(PV) = dP/dt * dV/dt. You need to use the product rule. After you differentiate PV, solve algebraically for dP/dt, and then substitute the values you have.
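Following the mentor's hint through to the end (the numbers below are just the values given in the problem statement):

```latex
% Differentiate PV = c implicitly with respect to t (product rule):
\[
\frac{dP}{dt}\,V + P\,\frac{dV}{dt} = 0
\quad\Longrightarrow\quad
\frac{dP}{dt} = -\frac{P}{V}\,\frac{dV}{dt}
= -\frac{100\ \mathrm{g/cm^2}}{20\ \mathrm{cm^3}}\,\bigl(-10\ \mathrm{cm^3/min}\bigr)
= 50\ \mathrm{g\,cm^{-2}\,min^{-1}} .
\]
```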
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720760345458984, "perplexity": 2651.1007866763293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612553.95/warc/CC-MAIN-20170529203855-20170529223855-00104.warc.gz"}
http://mathhelpforum.com/math-topics/66092-chemistry-question-what-pressure.html
# Math Help - (Chemistry Question) What is pressure? 1. ## (Chemistry Question) What is pressure? Is pressure the velocity of the particles when they collide in their environment? (self-made definiton to see if I understand) 2. Originally Posted by s3a Is pressure the velocity of the particles when they collide in their environment? (self-made definiton to see if I understand) Pressure is the average force per unit area due to the kinetic energy of the particles causing them to collide off the boundaries of their environment. The more kinetic energy, the more collisions per unit time, and the more forceful the collisions are. 3. yes in physics you learn that $P= F/A$ or pressure equals force divided by area and kinetic energy which is a force $= .5 mv^2.$ 4. Originally Posted by fredricktator [snip] and kinetic energy which is a force $= .5 mv^2.$ Kinetic energy is NOT a force. 5. LOL More specifically, kinetic energy is a nonnegative scalar quantity. Conversely, force is a vector, with the direction of the acceleration vector and norm of the acceleration vector scaled by mass. You can find the kinetic energy by integrating force over a path...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948925793170929, "perplexity": 778.8657301708477}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162938.42/warc/CC-MAIN-20160205193922-00248-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/23056-derivative-proof.html
1. ## Derivative proof Problem. Let $g(x)=x^{2}\sin( \frac {1}{x} )$; find g'(0) and prove that g'(x) is not continuous at x = 0. My work: $g'(0) = \lim _{x \rightarrow 0 } \frac {g(x)-g(0)}{x-0} = \lim _{x \rightarrow 0 } \frac {x^{2}\sin( \frac {1}{x} )-g(0)}{x}$, but g(0) is undefined, so how do you find the derivative? Thanks. 2. No, it is defined: g(0) = 0. 3. Well, then, I have $\lim _{x \rightarrow 0 } \frac {x^{2}\sin ( \frac {1}{x} )}{x} = \lim _{x \rightarrow 0 } x\sin( \frac {1}{x} )$. So g'(0) = 0? 4. Yes, because $\left| x\sin \frac{1}{x} \right| \leq |x|$; now use the squeeze theorem. 5. Now to prove g'(x) is not continuous at x = 0. Pick a sequence $\{ x_{n} \}$ that converges to 0. Consider $\lim _{n \rightarrow \infty } | g'( x_{n}) - 0 | = \lim _{n \rightarrow \infty } | \lim _{x \rightarrow x_{n}} \frac {g(x) - g(x_{n})}{x-x_{n}} |$. I think I'm stuck... 6. To show that $\sin \frac{1}{x}$ is not continuous at zero (for example), pick a sequence, say $x_n = \frac{1}{\pi n}$, for which $\sin\frac{1}{x_n} = 0$. And pick another sequence like $x_n = \frac{1}{\frac{\pi}{2}+\pi n}$, for which $\sin\frac{1}{x_n} = (-1)^n$. So it is not continuous.
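The thread stops short of the full argument, so here is the standard way to finish it, consistent with the hints in posts 4 and 6 but not taken verbatim from them:

```latex
% For x != 0 the derivative exists by the product and chain rules:
\[
g'(x) = 2x\sin\frac{1}{x} - \cos\frac{1}{x}, \qquad g'(0) = 0 .
\]
% Along x_n = 1/(2*pi*n) we get g'(x_n) = (1/(pi*n)) sin(2*pi*n) - cos(2*pi*n) = -1 for every n,
% so g'(x_n) -> -1 != 0 = g'(0) even though x_n -> 0.
% Hence g' is not continuous at 0.
```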
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9938575625419617, "perplexity": 818.7133310084818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00506-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.geektonight.com/boolean-algebra-and-logic-gates/
# Boolean Algebra and Logic Gates

## What is Boolean Algebra?

In 1854, George Boole, a 19th-century English mathematician, invented Boolean (also called logical or binary) algebra, by which reasoning can be expressed mathematically. Boolean algebra is the branch of algebra that performs operations on variables that can take the values of the binary numbers, i.e., 0 (OFF/False) or 1 (ON/True), and is used to analyze, simplify and represent the logical levels of digital/logical circuits. By convention 0 < 1, i.e., the logical symbol 1 is greater than the logical symbol 0.

### Boolean Algebra Formula

The following are the operations of Boolean algebra:

#### OR Operation

The symbol ‘+’ denotes the OR operator. To perform this operation we need a minimum of 2 input variables that can take the values of binary numbers, i.e., 0 or 1, to get an output with one binary value (0/1). The OR operation is defined for A OR B, written A+B, as: if A = B = 0 then A+B = 0, or else A+B = 1. The result of an OR operation is equal to the input variable with the greatest value. The following are the possible outputs with a minimum of 2 input combinations:

• 0 + 0 = 0
• 0 + 1 = 1
• 1 + 0 = 1
• 1 + 1 = 1

#### AND Operation

The symbol ‘.’ denotes the AND operator. To perform this operation we need a minimum of 2 input variables that can take the values of binary numbers, i.e., 0 or 1, to get an output with one binary value (0/1). The AND operation is defined for A AND B, written A.B, as: if A = B = 1 then A.B = 1, or else A.B = 0. The result of an AND operation is equal to the input variable with the lowest value. The following are the possible outputs with a minimum of 2 input combinations:

• 0 . 0 = 0
• 0 . 1 = 0
• 1 . 0 = 0
• 1 . 1 = 1

#### NOT Operation

The NOT operation is also known as the complement operation. This is a special operation denoted by a prime (’); the complement of A is represented as A’. To perform this operation we need a minimum of 1 input variable that can take the values of binary numbers, i.e., 0 or 1, to get an output with one binary value (0/1). The NOT operation is defined for A’ (NOT A) as: if A = 1, then A’ = 0, or else A’ = 1. The result of a NOT operation is the inverse of the input value. The following are the possible outputs with a minimum of 1 input value:

• (1)’ = 0
• (0)’ = 1

### Boolean Algebra Rules

The following are the important rules followed in Boolean algebra.

• Input variables used in Boolean algebra can take the values of binary numbers, i.e., 0 or 1. Binary 1 is for HIGH and binary 0 is for LOW.
• The complement/negation/inverse of a variable is represented by a prime (’). Thus, the complement of variable A is represented as A’: if A = 1, then A’ = 0, or else A’ = 1.
• OR-ing of the variables is represented by a ‘+’ sign between them. For example, OR-ing of A and B is represented as A + B.
• Logical AND-ing of two or more variables is represented by a ‘.’ sign between them, such as A.B. Sometimes the ‘.’ may be omitted, as in AB.

### Boolean Laws

#### Commutative law

A binary operation satisfying either of the following expressions is said to be commutative:

• A.B = B.A
• A+B = B+A

The commutative law states that changing the sequence of the variables does not have any effect on the output of a logic circuit.

#### Associative law

The associative law states that the way the operations are grouped is irrelevant, as their effect is the same:

• (A.B).C = A.(B.C)
• (A+B)+C = A+(B+C)

#### Distributive law

The distributive law states the following conditions:

1. A.(B+C) = A.B + A.C
2. A+(B.C) = (A+B).(A+C)

#### AND law

The AND law states the following identities, which use the AND operation:

A.0 = 0
A.1 = A
A.A = A
A.A’ = 0

#### OR law

The OR law states the following identities, which use the OR operation:

A+0 = A
A+A = A
A+1 = 1
A+A’ = 1

### Complement law

This law uses the NOT operation. The law of complement, also known as inversion/negation, states that double inversion of a variable results in the original variable itself:

• (A’)’ = A

## Logic Gates

Digital systems are built using logic gates. A logic gate is an electronic circuit or logic circuit which takes one or more inputs and produces exactly one output. A particular logic is the relationship between the inputs and the output of a logic gate.

## Types of Logic gates

### AND Gate

This logic gate uses the AND operation; its output is denoted by A.B.

### OR Gate

This logic gate uses the OR operation; its output is denoted by A+B.

### NOT Gate

This logic gate uses the NOT operation; its output is denoted by A’. It is also known as an Inverter.

### NAND Gate

A NOT-AND operation is known as a NAND operation, and a logic gate using this NAND operation logic is called a NAND gate. Here the output of the AND gate is the input of the NOT gate, and the output of this combination of NOT gate and AND gate is the output of the NAND gate.

NAND Gate Diagram

### NOR Gate

A NOT-OR operation is known as a NOR operation, and a logic gate using this NOR operation logic is called a NOR gate. Here the output of the OR gate is the input of the NOT gate, and the output of this combination of NOT gate and OR gate is the output of the NOR gate.

NOR Gate Diagram

### XOR Gate

XOR, EXOR or Exclusive-OR is a special type of gate or circuit which gives a high output if an odd number of its inputs are high, or else it gives a low output. The algebraic expressions A ⊕ B and A’.B + A.B’ both represent the XOR gate with inputs A and B. The operation of this gate is denoted by ‘⊕’.

XOR Gate Logic

D = A XOR B
D = A ⊕ B
D = A’.B + A.B’

### XNOR Gate

XNOR, EX-NOR or Exclusive-NOR gate is a special type of gate or circuit that gives a high output if an even number of its inputs are high, or else it gives a low output. It is the opposite of the XOR gate. The operation of this gate is denoted by ‘Ɵ’.

D = A Ɵ B
D = A’.B’ + A.B
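A short Python sketch that encodes the gates above as 0/1 functions and exhaustively checks several of the stated laws over every input combination (the helper names are mine, not part of the article):

```python
from itertools import product

# Basic gates on 0/1 values
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))
XOR  = lambda a, b: a ^ b            # high iff an odd number of inputs are high
XNOR = lambda a, b: NOT(XOR(a, b))   # high iff an even number of inputs are high

# Exhaustively verify some of the laws from the text
for a, b, c in product((0, 1), repeat=3):
    assert OR(a, b) == OR(b, a) and AND(a, b) == AND(b, a)        # commutative
    assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))           # distributive (1)
    assert OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c))            # distributive (2)
    assert OR(a, NOT(a)) == 1 and AND(a, NOT(a)) == 0             # complement identities
    assert NOT(NOT(a)) == a                                       # double inversion
    assert XOR(a, b) == OR(AND(NOT(a), b), AND(a, NOT(b)))        # A'.B + A.B'
    assert XNOR(a, b) == OR(AND(NOT(a), NOT(b)), AND(a, b))       # A'.B' + A.B

print("All checked laws hold for every 0/1 assignment.")
```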
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8623766899108887, "perplexity": 1405.050575996785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00368.warc.gz"}
https://www.physicsforums.com/threads/awkard-question-with-logarithms.124471/
# Homework Help: Awkard question with logarithms 1. Jun 23, 2006 ### Byrgg $b^(log_b x) = x$ a while back, and recieved help in understanding multiple ways to prove it. But I also asked a math teacher prior to requesting help here, he suggested 2 methods: Substitution(also suggested by my physics teacher): Let $log_b x = y$, which is the same as writing $b^y = x$, right? but we already said y = $log_b x = y$, therefore, by substituting we get: $b^(log_b x) = x$ That method was simple enough, I was also shown by the people here that by applying that $log_b x$ and $b^x$ were inverses, you could achieve the same final result: let $log_b x = f(x)$, and $b^x = g(x)$, by composing the inverses you get: g(f(x)) = x = $b^f^(x)$ = $b^(log_b x)$ And after much work, I finally understood this method as well. But another way the math teacher suggested, was what he called the 'intuitive' way, simply by 'looking' at the problem, you could see the relationship to be true... my guess is that he meant you could see the first way mentioned here simply by 'looking'. Does anyone have an idea of what he may have been suggesting? Thanks in advance. Last edited: Jun 23, 2006 2. Jun 23, 2006 ### Byrgg Woops sorry, the imaging didn't work out so well, look at the equation I was trying to prove as b^(log_b x) = x. 3. Jun 23, 2006 ### Byrgg Argh, the message editor isn't working well, also, when I composed the inverses read as b^(f(x)) (with f(x) being entierely in the exponent). 4. Jun 23, 2006 ### Byrgg I'll try and clear up any misconceptions here with my poor latex skills, just to go over everything... I'm trying to prove x = b^(log_b x), which I know how to do, but I'm just curious as to what the 'intuitive' way mentioned by the math teacher is... When I composed the inverses, it should be read as: g(f(x)) = b^(f(x)) = b^(log_b x) 5. Jun 23, 2006 ### nrqed I guess you meant $b^{log_b (x)} = x$ (you must use curly braces). By definition, what does log_b(x) represent? Using only words? It represents the exponent to which b must be raised to give x!!! Right? Therefore if you calculate $b^{log_b(x)}$ you of course get back x! You see what I mean? patrick 6. Jun 23, 2006 ### arunbg I think you mean $$b^{log_b(x)}=x$$ Think about the basic definition of logarithm. How are logarithmic and exponential functions related ? Edit: Lately my typing has been on the slower side. 7. Jun 23, 2006 ### nrqed :rofl: :rofl: It looks as if we are "chasing" each other around the boards!! :tongue2: And we are posting the same comments! (hey, maybe we are twins from different universes, "a la Many World Interpretation" of QM! I am logging off now so I won't annoy you by "cutting" you off! Best regards Patrick 8. Jun 23, 2006 ### Byrgg Yes, thanks for providing that code reference for me, $$b^{log_b(x)}=x$$ indeed that is what I was trying to write. Ok, so let's see here, the exponent to which must be raised to reach x, this is the same as log_b(x), right? So by rasing b to this exponent, you must get x? Right? Wow, I feel like an idiot now >_>, I'm pretty sure that's exactly what the math teacher said too, I've gotta pay more attention, it's hard when you're stressing about exams and the like though, thanks for your help you two. If my little summary there was right, then I don't think there's any more further questions. 9. Jun 23, 2006 ### Byrgg Oh, also I guess it's worth bringing back(from my old thread), about log_b (x) and b^x being inverses... 
It was said that the equation $$b^{log_b(x)}=x$$ was precisely saying log_b (x) and b^x were inverses... You would prove that statement by using the method I showed before right? Or is there something else I forgot? Here's the method... let y = $log_b (x)$, and y = $b^x$ by composing the inverses you get: g(f(x)) = x = $b^{log_b(x)}$ Right? Basically, I'm curious as to why exactly you say they are inverses in the first place, is it because of the definition? 10. Jun 23, 2006 ### 0rthodontist You can just take the log base b of both sides. That will save you the trouble of thinking about it, unless you're trying to work from first principles. 11. Jun 23, 2006 ### Byrgg Yeah I already know that way, and yes, I was trying to work from first principles, sort of... Was I right, or does something need correcting? 12. Jun 23, 2006 ### Hootenanny Staff Emeritus Basically yes, they are inverses because they are defined as such. This is like asking why is arcsine the inverse of sine. 13. Jun 23, 2006 ### Byrgg Ok, and from that knowledge, you can use the method I showed(and learned here in the first place) to prove the equation(composing the inverses)? 14. Jun 24, 2006 ### arunbg Yes Byrgg, I think you have got it . Also it is worthwhile to note that $$log_b(b^x)=x$$ This highlights the property of invertible functions. 15. Jun 24, 2006 ### Byrgg Yeah I noticed that too, and I guess it's also worth mentioning how easy it is to see the compostion of inverses here... b^x = f(x) log_b (x) = g(x) so obviously those two relationships simply show the subbing of inverses into each other, which also equals x, right? 16. Jun 24, 2006 ### Hootenanny Staff Emeritus Yes, they are both functions of x and inverses of each other, so if one forms a composite function, either fg(x) or gf(x), then the result in this case simply x. 17. Jun 24, 2006 ### nrqed Yes. If we use the notation $f^{-1}$ for the inverse of a function, then, by definition, $$f ( f^{-1}(x)) = f^{-1}(f(x)) = x$$ It's actually a bit tricky because one must be very careful about the domains and images of the function and inverse functions and that may mess up things sometimes (for example, $\ln(e^{-2}) =-2$ but $e^{\ln(-2)}$ is clearly not defined. Or taking an inverse sin has multiple solutions). 18. Jun 24, 2006 ### Byrgg I don't know, the one thing I think that still bugs me is exactly how you can look at $b^{log_b (x)} = x$ and simply automatically say that $log_b$ and $b^x$ are inverses... Since you can't really tell what the exponent of b was prior to substitution... 19. Jun 24, 2006 ### arildno An exponential function is a strictly increasing function. With that, you may prove that there exists an inverse function. 20. Jun 24, 2006 ### Hootenanny Staff Emeritus Let me ask you a question how can you just look at ArcCos(Cos(x)) = x and simply automatically say that ArcCos and Cos are inverses?
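A quick numerical sanity check of the inverse pair being discussed (any base b > 0, b ≠ 1, and x > 0 would do just as well):

```python
import math

b, x = 2.0, 8.0
print(b ** math.log(x, b))   # ~8.0, i.e. b^(log_b x) = x
print(math.log(b ** x, b))   # ~8.0, i.e. log_b(b^x) = x
```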
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839594125747681, "perplexity": 1437.7035272973621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945459.17/warc/CC-MAIN-20180421223015-20180422003015-00179.warc.gz"}
https://deepai.org/publication/synthesizing-robust-plans-under-incomplete-domain-models
# Synthesizing Robust Plans under Incomplete Domain Models

Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach.

## 1 Introduction

In the past several years, significant strides have been made in scaling up plan synthesis techniques. We now have technology to routinely generate plans with hundreds of actions. All this work, however, makes a crucial assumption–that a complete model of the domain is specified in advance. While there are domains where knowledge-engineering such detailed models is necessary and feasible (e.g., mission planning domains in NASA and factory-floor planning), it is increasingly recognized (c.f. [Hoffmann, Weber, and Kraft2010, Kambhampati2007]) that there are also many scenarios where insistence on correct and complete models renders the current planning technology unusable. What we need to handle such cases is a planning technology that can get by with partially specified domain models, and yet generate plans that are “robust” in the sense that they are likely to execute successfully in the real world. This paper addresses the problem of formalizing the notion of plan robustness with respect to an incomplete domain model, and connects the problem of generating a robust plan under such model to conformant probabilistic planning [Kushmerick, Hanks, and Weld1995, Hyafil and Bacchus2003, Bryce, Kambhampati, and Smith2006, Domshlak and Hoffmann2007].
Following Garland & Lesh (2002), we shall assume that although the domain modelers cannot provide complete models, often they are able to provide annotations on the partial model circumscribing the places where it is incomplete. In our framework, these annotations consist of allowing actions to have possible preconditions and effects (in addition to the standard necessary preconditions and effects). As an example, consider a variation of the Gripper domain, a well-known planning benchmark domain. The robot has one gripper that can be used to pick up balls, which are of two types, light and heavy, from one room and move them to another room. The modeler suspects that the gripper may have an internal problem, but this cannot be confirmed until the robot actually executes the plan. If it actually has the problem, the execution of the pick-up action succeeds only with balls that are not heavy, but if it has no problem, it can always pick up all types of balls. The modeler can express this partial knowledge about the domain by annotating the action with a statement representing the possible precondition that balls should be light. Incomplete domain models with such possible preconditions/effects implicitly define an exponential set of complete domain models, with the semantics that the real domain model is guaranteed to be one of these. The robustness of a plan can now be formalized in terms of the cumulative probability mass of the complete domain models under which it succeeds. We propose an approach that compiles the problem of finding robust plans into the conformant probabilistic planning problem. We present experimental results showing scenarios where the approach works well, and also discuss aspects of the compilation that cause scalability issues.

## 2 Related Work

Although there has been some work on reducing the “faults” in plan execution (e.g., the work on k-fault plans for non-deterministic planning [Jensen, Veloso, and Bryant2004]), it is based in the context of stochastic/non-deterministic actions rather than incompletely specified ones. The semantics of the possible preconditions/effects in our incomplete domain models differ fundamentally from non-deterministic and stochastic effects. Executing different instances of the same pick-up action in the Gripper example above would either all fail or all succeed, since there is no uncertainty, but the information is unknown at the time the model is built. In contrast, if the pick-up action’s effects are stochastic, then trying the same picking action multiple times increases the chances of success. Garland & Lesh (2002) share the same objective with us on generating robust plans under incomplete domain models. However, their notion of robustness, which is defined in terms of four different types of risks, only has tenuous heuristic connections with the likelihood of successful execution of plans. Robertson & Bryce (2009) focus on plan generation in the Garland & Lesh model, but their approach still relies on the same unsatisfactory formulation of robustness. The work by Fox et al. (2006) also explores robustness of plans, but their focus is on temporal plans under unforeseen execution-time variations rather than on incompletely specified domains.
Our work can also be categorized as one particular instance of the general model-lite planning problem, as defined in [Kambhampati2007], in which the author points out a large class of applications where handling incomplete models is unavoidable due to the difficulty in getting a complete model. ## 3 Problem Formulation We define an incomplete domain model as , where is a set of propositions, is a set of actions that might be incompletely specified. We denote and as the true and false truth values of propositions. A state is a set of propositions. In addition to proposition sets that are known as its preconditions , add effects and delete effects , each action also contains: • Possible precondition set contains propositions that action might need as its preconditions. • Possible add (delete) effect set () contains propositions that the action might add (delete, respectively) after its execution. In addition, each possible precondition, add and delete effect of the action is associated with a weight , and () representing the domain modeler’s assessment of the likelihood that will actually be realized as a precondition, add and delete effect of (respectively) during plan execution. Possible preconditions and effects whose likelihood of realization is not given are assumed to have weights of . Given an incomplete domain model , we define its completion set as the set of complete domain models whose actions have all the necessary preconditions, adds and deletes, and a subset of the possible preconditions, possible adds and possible deletes. Since any subset of , and can be realized as preconditions and effects of action , there are exponentially large number of possible complete domain models , where . For each complete model , we denote the corresponding sets of realized preconditions and effects for each action as , and ; equivalently, its complete sets of preconditions and effects are , and . The projection of a sequence of actions from an initial state according to an incomplete domain model is defined in terms of the projections of from according to each complete domain model : γ(π,I,˜D)=⋃Di∈⟨⟨˜D⟩⟩γ(π,I,Di) (1) where the projection over complete models is defined in the usual STRIPS way, with one important difference. The result of applying an action in a state where the preconditions of are not satisfied is taken to be (rather than as an undefined state).111We shall see that this change is necessary so that we can talk about increasing the robustness of a plan by adding additional actions. A planning problem with incomplete domain is where is the set of propositions that are true in the initial state, and is the set of goal propositions. An action sequence is considered a valid plan for if solves the problem in at least one completion of . Specifically, . Modeling Issues in Annotating Incompleteness: From the modeling point of view, the possible precondition and effect sets can be modeled at either the grounded action or action schema level (and thus applicable to all grounded actions sharing the same action schema). From a practical point of view, however, incompleteness annotations at ground level hugely increase the burden on the domain modeler. To offer a flexible way in modeling the domain incompleteness, we allow annotations that are restricted to either specific variables or value assignment to variables of an action schema. 
In particular: • Restriction on value assignment to variables: Given variables with domains , one can indicate that is a possible precondition/effect of an action schema when some variables have values (). Those possible preconditions/effects can be specified with the annotation for the action schema . More generally, we allow the domain writer to express a constraint on the variables in the construct. The annotation means that is a possible precondition/effect of an instantiated action () if and only if the assignment satisfies the constraint . This syntax subsumes both the annotations at the ground level when , and at the schema level if (or the construct is not specified). • Restriction on variables: Instead of constraints on explicit values of variables, we also allow the possible preconditions/effects of an action schema to be dependent on some specific variables without any knowledge of their restricted values. This annotation essentially requires less amount of knowledge of the domain incompleteness from the domain writer. Semantically, the possible precondition/effect of an action schema means that (1) there is at least one instantiated action () having as its precondition, and (2) for any two assignments such that (), either both and are preconditions of the corresponding actions, or they are not. Similar to the above, the construct also subsumes the annotations at the ground level when , and at the schema level if (or the field is not specified). Another interesting modeling issue is the correlation among the possible preconditions and effects across actions. In particular, the domain writer might want to say that two actions (or action schemas) will have specific possible preconditions and effects in tandem. For example, we might say that the second action will have a particular possible precondition whenever the first one has a particular possible effect. We note that annotations at the lifted level introduce correlations among possible preconditions and effects at the ground level. Although our notion of plan robustness and approach to generating robust plans (see below) can be adapted to allow such flexible annotations and correlated incompleteness, for ease of exposition we limit our discussion to uncorrelated possible precondition and effect annotations specified at the schema level (i.e. without using the and constructs). ## 4 A Robustness Measure for Plans Given an incomplete domain planning problem , a valid plan (by our definition above) need only to succeed in at least one completion of . Given that can be exponentially large in terms of possible preconditions and effects, validity is too weak to guarantee on the quality of the plan. What we need is a notion that succeeds in most of the highly likely completions of . We do this in terms of a robustness measure. The robustness of a plan for the problem is defined as the cumulative probability mass of the completions of under which succeeds (in achieving the goals). More formally, let be the probability distribution representing the modeler’s estimate of the probability that a given model in is the real model of the world (such that ). The robustness of is defined as follows: R(π,˜P:⟨˜D,I,G⟩)def≡∑Di∈⟨⟨˜D⟩⟩,γ(π,I,Di)⊨GPr(Di) (2) It is easy to see that if , then is a valid plan for . 
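To make the robustness measure of Equation 2 concrete, here is a brute-force Python sketch that enumerates every completion of a tiny annotated model, projects the plan under the semantics described in Section 3 (an action whose realized preconditions do not hold leaves the state unchanged), and sums the probabilities of the completions that achieve the goal. The class, its field names and the Gripper-flavoured example are my own illustration rather than the paper's notation; each annotation of each ground action is treated as an independent event, which corresponds to the uncorrelated, ground-level case, and a completion's probability is the product of the realization weights, as discussed in the note that follows.

```python
from itertools import product

class Action:
    """One action with certain preconditions/effects plus weighted possible ones."""
    def __init__(self, pre, add, dele, poss=()):
        self.pre, self.add, self.dele = set(pre), set(add), set(dele)
        self.poss = list(poss)   # items: (kind, prop, weight), kind in {'pre', 'add', 'del'}

def robustness(plan, init, goal):
    """Probability mass of completions under which the plan achieves the goal (cf. Eq. 2)."""
    # One realization choice per annotation of each distinct action in the plan.
    annotations = [(a, k, p, w) for a in dict.fromkeys(plan) for (k, p, w) in a.poss]
    total = 0.0
    for choice in product([True, False], repeat=len(annotations)):
        prob, realized = 1.0, {}
        for (a, k, p, w), on in zip(annotations, choice):
            prob *= w if on else (1.0 - w)
            realized[(id(a), k, p)] = on
        state = set(init)
        for a in plan:
            pre = a.pre | {p for (k, p, _) in a.poss if k == 'pre' and realized[(id(a), 'pre', p)]}
            if pre <= state:     # otherwise the action leaves the state unchanged
                add = a.add | {p for (k, p, _) in a.poss if k == 'add' and realized[(id(a), 'add', p)]}
                dele = a.dele | {p for (k, p, _) in a.poss if k == 'del' and realized[(id(a), 'del', p)]}
                state = (state - dele) | add
        if set(goal) <= state:
            total += prob
    return total

# Gripper-flavoured illustration: picking up a heavy ball with a gripper that
# possibly (weight 0.7) requires the ball to be light.
pick = Action(pre={'at-ball'}, add={'holding'}, dele=set(),
              poss=[('pre', 'light', 0.7)])
print(robustness([pick], init={'at-ball'}, goal={'holding'}))   # -> 0.3
```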
Note that given the uncorrelated incompleteness assumption, the probability for a model can be computed as the product of the weights , , and for all and its possible preconditions/effects if is realized in the model (or the product of their “complement” , , and if is not realized). Example: Figure 1 shows an example with an incomplete domain model with and and a solution plan for the problem . The incomplete model is: , , , , , ; , , , , , . Given that the total number of possible preconditions and effects is 3, the total number of completions () is , for each of which the plan may succeed or fail to achieve , as shown in the table. The robustness value of the plan is if is the uniform distribution. However, if the domain writer thinks that is very likely to be a precondition of and provides , the robustness of decreases to (as intutively, the last four models with which succeeds are very unlikely to be the real one). Note that under the STRIPS model where action failure causes plan failure, the plan would considered failing to achieve in the first two complete models, since is prevented from execution. ### 4.1 A Spectrum of Robust Planning Problems Given this set up, we can now talk about a spectrum of problems related to planning under incomplete domain models: Robustness Assessment (RA): Given a plan for the problem , assess the robustness of . Maximally Robust Plan Generation (RG): Given a problem , generate the maximally robust plan . Generating Plan with Desired Level of Robustness (RG): Given a problem and a robustness threshold (), generate a plan with robustness greater than or equal to . Cost-sensitive Robust Plan Generation (RG): Given a problem and a cost bound , generate a plan of maximal robustness subject to cost bound (where the cost of a plan is defined as the cumulative costs of the actions in ). Incremental Robustification (RI): Given a plan for the problem , improve the robustness of , subject to a cost budget . The problem of assessing robustness of plans, RA, can be tackled by compiling it into a weighted model-counting problem. For plan synthesis problems, we can talk about either generating a maximally robust plan, RG, or finding a plan with a robustness value above the given threshold, RG. A related issue is that of the interaction between plan cost and robustness. Often, increasing robustness involves using additional (or costlier) actions to support the desired goals, and thus comes at the expense of increased plan cost. We can also talk about cost-constrained robust plan generation problem RG. Finally, in practice, we are often interested in increasing the robustness of a given plan (either during iterative search, or during mixed-initiative planning). We thus also have the incremental variant RI. In this paper, we will focus on RG, the problem of synthesizing plan with at least a robustness value of . ## 5 Compilation to Conformant Probabilistic Planning In this section, we will show that the problem of generating plan with at least robustness, RG, can be compiled into an equivalent conformant probabilistic planning problem. The most robust plan can then be found with a sequence of increasing threshold values. ### 5.1 Conformant Probabilistic Planning Following the formalism in [Domshlak and Hoffmann2007], a domain in conformant probabilistic planning (CPP) is a tuple , where and are the sets of propositions and probabilistic actions, respectively. A belief state is a distribution of states (we denote if ). 
Each action is specified by a set of preconditions and conditional effects . For each , is the condition set and determines the set of outcomes that will add and delete proposition sets , into and from the resulting state with the probability ( , ). All condition sets of the effects in are assumed to be mutually exclusive and exhaustive. The action is applicable in a belief state if for all , and the probability of a state in the resulting belief state is , where is the conditional effect such that , and is the set of outcomes such that . Given the domain , a problem is a quadruple , where is an initial belief state, is a set of goal propositions and is the acceptable goal satisfaction probability. A sequence of actions is a solution plan for if is applicable in the belief state (assuming ), which results in (), and it achieves all goal propositions with at least probability. ### 5.2 Compilation Given an incomplete domain model and a planning problem , we now describe a compilation that translates the problem of synthesizing a solution plan for such that to a CPP problem . At a high level, the realization of possible preconditions and effects , of an action can be understood as being determined by the truth values of hidden propositions , and that are certain (i.e. unchanged in any world state) but unknown. Specifically, the applicability of the action in a state depends on possible preconditions that are realized (i.e. ), and their truth values in . Similarly, the values of and are affected by in the resulting state only if they are realized as add and delete effects of the action (i.e., , ). There are totally realizations of the action , and all of them should be considered simultaneously in checking the applicability of the action and in defining corresponding resulting states. With those observations, we use multiple conditional effects to compile away incomplete knowledge on preconditions and effects of the action . Each conditional effect corresponds to one realization of the action, and can be fired only if whenever , and adding (removing) an effect () into (from) the resulting state depending on the values of (, respectively) in the realization. While the partial knowledge can be removed, the hidden propositions introduce uncertainty into the initial state, and therefore making it a belief state. Since the action may be applicable in some but rarely all states of a belief state, certain preconditions should be modeled as conditions of all conditional effects. We are now ready to formally specify the resulting domain and problem . For each action , we introduce new propositions , , and their negations , , for each , and to determine whether they are realized as preconditions and effects of in the real domain.222These propositions are introduced once, and re-used for all actions sharing the same schema with . Let be the set of those new propositions, then is the proposition set of . Each action is made from one action such that , and consists of conditional effects . For each conditional effect : • is the union of the following sets: • the certain preconditions , • the set of possible preconditions of that are realized, and hidden propositions representing their realization: , • the set of hidden propositions corresponding to the realization of possible add (delete) effects of : (, respectively); • the single outcome of is defined as , , and , where , and represent the sets of realized preconditions and effects of the action. 
In other words, we create a conditional effect for each subset of the union of the possible precondition and effect sets of the action . Note that the inclusion of new propositions derived from , , and their “complement” sets , , makes all condition sets of the action mutually exclusive. As for other cases (including those in which some precondition in is excluded), the action has no effect on the resulting state, they can be ignored. The condition sets, therefore, are also exhaustive. The initial belief state consists of states such that iff (), each represents a complete domain model and with the probability . The goal is , and the acceptable goal satisfaction probability is . ###### Theorem 1. Given a plan for the problem , and where is the compiled version of () in . Then iff achieves all goals with at least probability in . ###### Proof (sketch). According to the compilation, there is one-to-one mapping between each complete model in and a (complete) state in . Moreover, if has a probability of to be the real model, then also has a probability of in the belief state of . Given our projection over complete model , executing from the state with respect to results in a sequence of complete state . On the other hand, executing from in results in a sequence of belief states . With the note that iff (), by induction it can be shown that iff (). Therefore, iff . Since all actions are deterministic and has a probability of in the belief state of , the probability that achieves is , which is equal to as defined in Equation 2. This proves the theorem. ∎ Example: Consider the action pick-up(?b - ball,?r - room) in the Gripper domain as described above. In addition to the possible precondition (light ?b) on the weight of the ball ?b, we also assume that since the modeler is unsure if the gripper has been cleaned or not, she models it with a possible add effect (dirty ?b) indicating that the action might make the ball dirty. Figure 2 shows both the original and the compiled specification of the action. ## 6 Experimental Results We tested the compilation with Probabilistic-FF (PFF), a state-of-the-art planner, on a range of domains in the International Planning Competition.We first discuss the results on the variants of the Logistics and Satellite domains, where domain incompleteness is deliberately modeled on the preconditions and effects of actions (respectively). Our purpose here is to observe how generated plans are robustified to satisfy a given robustness threshold, and how the amount of incompleteness in the domains affects the plan generation phase. We then describe the second experimental setting in which we randomly introduce incompleteness into IPC domains, and discuss the feasibility of our approach in this setting.333The experiments were conducted using an Intel Core2 Duo 3.16GHz machine with 4Gb of RAM, and the time limit is 15 minutes. Domains with deliberate incompleteness Logistics: In this domain, each of the two cities and has an airport and a downtown area. The transportation between the two distant cities can only be done by two airplanes and . In the downtown area of (), there are three heavy containers that can be moved to the airport by a truck . Loading those containers onto the truck in the city , however, requires moving a team of robots (), initially located in the airport, to the downtown area. 
The source of incompleteness in this domain comes from the assumption that each pair of robots and () are made by the same manufacturer , both therefore might fail to load a heavy container.444The uncorrelated incompleteness assumption applies for possible preconditions of action schemas specified for different manufacturers. It should not be confused here that robots and of the same manufacturer can independently have fault. The actions loading containers onto trucks using robots made by a particular manufacturer (e.g., the action schema load-truck-with-robots-of-M1 using robots of manufacturer ), therefore, have a possible precondition requiring that containers should not be heavy. To simplify discussion (see below), we assume that robots of different manufacturers may fail to load heavy containers, though independently, with the same probability of . The goal is to transport all three containers in the city to , and vice versa. For this domain, a plan to ship a container to another city involves a step of loading it onto the truck, which can be done by a robot (after moving it from the airport to the downtown). Plans can be made more robust by using additional robots of different manufacturer after moving them into the downtown areas, with the cost of increasing plan length. Satellite: In this domain, there are two satellites and orbiting the planet Earth, on each of which there are instruments (, ) used to take images of interested modes at some direction in the space. For each , the lenses of instruments ’s were made from a type of material , which might have an error affecting the quality of images that they take. If the material actually has error, all instruments ’s produce mangled images. The knowledge of this incompleteness is modeled as a possible add effect of the action taking images using instruments made from (for instance, the action schema take-image-with-instruments-M1 using instruments of type ) with a probability of , asserting that images taken might be in a bad condition. A typical plan to take an image using an instrument, e.g. of type on the satellite , is first to switch on , turning the satellite to a ground direction from which can be calibrated, and then taking image. Plans can be made more robust by using additional instruments, which might be on a different satellite, but should be of different type of materials and can also take an image of the interested mode at the same direction. Table 1 and 2 shows respectively the results in the Logistics and Satellite domains with and . The number of complete domain models in the two domains is . For Satellite domain, the probabilities ’s range from , ,… to when increases from , , … to . For each specific value of and , we report where is the length of plan and is the running time (in seconds). Cases in which no plan is found within the time limit are denoted by “–”, and those where it is provable that no plan with the desired robustness exists are denoted by “”. Observations on fixed value of : In both domains, for a fixed value of we observe that the solution plans tend to be longer with higher robustness threshold , and the time to synthesize plans is also larger. For instance, in Logistics with , the plan returned has actions if , whereas -length plan is needed if increases to . Since loading containers using the same robot multiple times does not increase the chance of success, more robots of different manufacturers need to move into the downtown area for loading containers, which causes an increase in plan length. 
In the Satellite domain with , similarly, the returned plan has actions when , but requires actions if —more actions are needed to calibrate instruments of different material types in order to increase the chance of having a good image of the mode of interest in the same direction. Since the cost of actions is currently ignored in the compilation approach, we also observe that more actions than needed are used in many solution plans. In the Logistics domain, specifically, it is easy to see that the probability of successfully loading a container onto a truck using robots of () different manufacturers is . As an example, however, robots of all five manufacturers are used in a plan when , whereas using those of three manufacturers is enough.

Observations on a fixed value of : In both domains, we observe that the maximal robustness value of plans that can be returned increases with a higher number of manufacturers (though the higher the value of , the higher the number of complete models). For instance, when , no plan is returned with at least in the Logistics domain, and with in the Satellite domain. Intuitively, more robots of different manufacturers offer a higher probability of successfully loading a container in the Logistics domain (and similarly for instruments of different materials in the Satellite domain). Finally, it may take longer to synthesize plans of the same length when is higher—in other words, the increasing amount of incompleteness of the domain makes the plan generation phase harder. As an example, in the Satellite domain, with it takes seconds to synthesize a -length plan when there are possible add effects at the schema level of the domain, whereas the search time is only seconds when . With , no plan is found within the time limit when , although a plan with robustness of exists in the solution space. It is the increase in the branching factor and the time spent on satisfiability testing and weighted model counting used inside the planner that affects the search efficiency.

Domains with random incompleteness: We built a program to generate an incomplete domain model from a deterministic one by introducing new propositions into each domain (all are initially ). Some of those new propositions were randomly added into the sets of possible preconditions/effects of actions. Some of them were also randomly made certain add/delete effects of actions. With this strategy, each solution plan in an original deterministic domain is also a valid plan, as defined earlier, in the corresponding incomplete domain. Our experiments with the Depots, Driverlog, Satellite and ZenoTravel domains indicate that because the annotations are random, there are often fewer opportunities for the PFF planner to increase the robustness of a plan prefix during the search. This makes it hard to generate plans with a desired level of robustness under a given time constraint. In summary, our experiments on the two settings above suggest that the compilation approach based on the PFF planner would be a reasonable method for generating robust plans in domains and problems where there are chances for robustifying existing action sequences in the search space.

## 7 Conclusion and Future Work

In this paper, we motivated the need for synthesizing robust plans under incomplete domain models.
We introduced annotations for expressing domain incompleteness, formalized the notion of plan robustness, and showed an approach to compile the problem of generating robust plans into conformant probabilistic planning. We presented empirical results showing the promise of our approach. For future work, we are developing a planning approach that directly takes the incompleteness annotations into account during the search, and compare it with our current compilation method. We also plan to consider the problem of robustifying a given plan subject to a provided cost bound. Acknowledgement: This research is supported in part by ONR grants N00014-09-1-0017 and N00014-07-1-1049, the NSF grant IIS-0905672, and by DARPA and the U.S. Army Research Laboratory under contract W911NF-11-C-0037. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We thank William Cushing for several helpful discussions. ## References • [Bryce, Kambhampati, and Smith2006] Bryce, D.; Kambhampati, S.; and Smith, D. 2006. Sequential monte carlo in probabilistic planning reachability heuristics. Proceedings of ICAPS’06. • [Domshlak and Hoffmann2007] Domshlak, C., and Hoffmann, J. 2007. Probabilistic planning via heuristic forward search and weighted model counting. JAIR 30(1):565–620. • [Fox, Howey, and Long2006] Fox, M.; Howey, R.; and Long, D. 2006. Exploration of the robustness of plans. In AAAI. • [Garland and Lesh2002] Garland, A., and Lesh, N. 2002. Plan evaluation with incomplete action descriptions. In AAAI. • [Hoffmann, Weber, and Kraft2010] Hoffmann, J.; Weber, I.; and Kraft, F. 2010. SAP Speaks PDDL. AAAI. • [Hyafil and Bacchus2003] Hyafil, N., and Bacchus, F. 2003. Conformant probabilistic planning via CSPs. In Proceedings of the Thirteenth International Conference on Automated Planning and Scheduling, 205–214. • [Jensen, Veloso, and Bryant2004] Jensen, R.; Veloso, M.; and Bryant, R. 2004. Fault tolerant planning: Toward probabilistic uncertainty models in symbolic non-deterministic planning. In ICAPS. • [Kambhampati2007] Kambhampati, S. 2007. Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain theories. In AAAI. • [Kushmerick, Hanks, and Weld1995] Kushmerick, N.; Hanks, S.; and Weld, D. 1995. An algorithm for probabilistic planning. Artificial Intelligence 76(1-2):239–286. • [Robertson and Bryce2009] Robertson, J., and Bryce, D. 2009. Reachability heuristics for planning in incomplete domains. In ICAPS’09 Workshop on Heuristics for Domain Independent Planning.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8476926684379578, "perplexity": 1144.9798426752557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00266.warc.gz"}
https://www.elitedigitalstudy.com/920/you-have-two-solutions-a-and-b-the-ph-of-solution-a-is-6-and-the-ph-of-solution-b-is-8
You have two solutions, A and B. The pH of solution A is 6 and the pH of solution B is 8. Which solution has the higher hydrogen ion concentration? Which of these is acidic and which one is basic?

Asked by Shivani Kumari | 1 year ago

##### Solution

pH is the negative logarithm of the hydrogen ion concentration, so the lower the pH of a solution, the higher its hydrogen ion concentration. Since solution A has the lower pH (6 versus 8), solution A has the higher hydrogen ion concentration. A pH below 7 is acidic and a pH above 7 is basic, so solution A is acidic and solution B is basic.

Answered by Vishal kumar | 1 year ago
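The same conclusion can be checked numerically from the definition of pH; this is a small illustrative sketch, not part of the original answer:

```python
# pH = -log10[H+], so [H+] = 10**(-pH) in mol/L.
for name, ph in [("A", 6), ("B", 8)]:
    h_conc = 10.0 ** (-ph)
    kind = "acidic" if ph < 7 else "basic" if ph > 7 else "neutral"
    print(f"Solution {name}: pH {ph} -> [H+] = {h_conc:.0e} mol/L ({kind})")
# A: 1e-06 mol/L, B: 1e-08 mol/L -> A has 100 times the H+ concentration of B.
```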
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418863415718079, "perplexity": 2810.3153555082727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00233.warc.gz"}
https://cassidyslangscam.wordpress.com/tag/tintriocht/
# Tantrum Nobody knows where the word tantrum comes from, though it has been around for at least three hundred years.  Some sources say that it originally had the meaning of penis. How it came to mean a fit of temper is unknown. Daniel Cassidy claims that the word derives from the Irish teintrighim, which is defined as ‘I flash forth’. This is a really bizarre claim and there is absolutely no chance that it is correct. For one thing, tintrím (modern spelling) is not a verb which is given in Ó Dónaill, though it is given as teintrighim in Dinneen. While teintrighim is given as a headword in Dinneen (who tends to give the first person form of verbs), it is hard to see how ‘I flash forth’ would really be used in any imaginable context, except by some Celtic thunder god. There is certainly no evidence of the word teintrighim being used as a noun like tantrum. A tantrum would usually be translated in Irish as a racht feirge or a taghd or a spadhar. There is no evidence even of tintríocht (the abstract noun meaning fieriness) being used to mean tantrum. As usual in Cassidy’s ridiculous book, it’s complete nonsense.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8783146142959595, "perplexity": 2627.3105731904634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00420.warc.gz"}
http://cms.math.ca/cmb/kw/dual%20space
On the Smirnov Class Defined by the Maximal Function

H. O. Kim has shown that contrary to the case of $H^p$-space, the Smirnov class $M$ defined by the radial maximal function is essentially smaller than the classical Smirnov class of the disk. In the paper we show that these two classes have the same corresponding locally convex structure, i.e. they have the same dual spaces and the same Fréchet envelopes. We describe a general form of a continuous linear functional on $M$ and multiplier from $M$ into $H^p$, $0 < p \leq \infty$.

Keywords: Smirnov class, maximal radial function, multipliers, dual space, Fréchet envelope
Categories: 46E10, 30A78, 30A76
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9747117757797241, "perplexity": 345.73507304652367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657121288.75/warc/CC-MAIN-20140914011201-00230-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://kerodon.net/tag/00VD
# Kerodon

Warning 3.2.1.9. Notation 3.2.1.8 has the potential to create confusion. If $(X,x)$ and $(Y,y)$ are pointed simplicial sets and $f: X \rightarrow Y$ is a morphism satisfying $f(x) = y$, then we use the notation $[f]$ to represent both the homotopy class of $f$ as a map of simplicial sets (that is, the image of $f$ in the set $\pi _0( \operatorname{Fun}(X,Y) )$), and the pointed homotopy class of $f$ as a map of pointed simplicial sets (that is, the image of $f$ in the set $[X,Y]_{\ast } = \pi _{0}( \operatorname{Fun}(X,Y) \times _{ \operatorname{Fun}( \{ x\} , Y) } \{ y\} )$). Beware that these usages are not the same: in general, it is possible for a pair of pointed morphisms $f,g: X \rightarrow Y$ to be homotopic without being pointed homotopic.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983541965484619, "perplexity": 90.27671157686541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00487.warc.gz"}
https://www.lesswrong.com/posts/jzf4Rcienrm6btRyt/priors-as-mathematical-objects
# 50 Followup to:  "Inductive Bias" What exactly is a "prior", as a mathematical object?  Suppose you're looking at an urn filled with red and white balls.  When you draw the very first ball, you haven't yet had a chance to gather much evidence, so you start out with a rather vague and fuzzy expectation of what might happen - you might say "fifty/fifty, even odds" for the chance of getting a red or white ball.  But you're ready to revise that estimate for future balls as soon as you've drawn a few samples.  So then this initial probability estimate, 0.5, is not repeat not a "prior". An introduction to Bayes's Rule for confused students might refer to the population frequency of breast cancer as the "prior probability of breast cancer", and the revised probability after a mammography as the "posterior probability". But in the scriptures of Deep Bayesianism, such as Probability Theory: The Logic of Science, one finds a quite different concept - that of prior information, which includes e.g. our beliefs about the sensitivity and specificity of mammography exams. Our belief about the population frequency of breast cancer is only one small element of our prior information. In my earlier post on inductive bias, I discussed three possible beliefs we might have about an urn of red and white balls, which will be sampled without replacement: • Case 1:  The urn contains 5 red balls and 5 white balls; • Case 2:  A random number was generated between 0 and 1, and each ball was selected to be red (or white) at this probability; • Case 3:  A monkey threw balls into the urn, each with a 50% chance of being red or white. In each case, if you ask me - before I draw any balls - to estimate my marginal probability that the fourth ball drawn will be red, I will respond "50%".  And yet, once I begin observing balls drawn from the urn, I reason from the evidence in three different ways: • Case 1:  Each red ball drawn makes it less likely that future balls will be red, because I believe there are fewer red balls left in the urn. • Case 2:  Each red ball drawn makes it more plausible that future balls will be red, because I will reason that the random number was probably higher, and that the urn is hence more likely to contain mostly red balls. • Case 3:  Observing a red or white ball has no effect on my future estimates, because each ball was independently selected to be red or white at a fixed, known probability. Suppose I write a Python program to reproduce my reasoning in each of these scenarios.  The program will take in a record of balls observed so far, and output an estimate of the probability that the next ball drawn will be red.  It turns out that the only necessary information is the count of red balls seen and white balls seen, which we will respectively call R and W.  So each program accepts inputs R and W, and outputs the probability that the next ball drawn is red: • Case 1:  return (5 - R)/(10 - R - W)    # Number of red balls remaining / total balls remaining • Case 2:  return (R + 1)/(R + W + 2)    # Laplace's Law of Succession • Case 3:  return 0.5 These programs are correct so far as they go.  But unfortunately, probability theory does not operate on Python programs.  Probability theory is an algebra of uncertainty, a calculus of credibility, and Python programs are not allowed in the formulas.  It is like trying to add 3 to a toaster oven. 
To use these programs in the probability calculus, we must figure out how to convert a Python program into a more convenient mathematical object - say, a probability distribution. Suppose I want to know the combined probability that the sequence observed will be RWWRR, according to program 2 above.  Program 2 does not have a direct faculty for returning the joint or combined probability of a sequence, but it is easy to extract anyway.  First, I ask what probability program 2 assigns to observing R, given that no balls have been observed.  Program 2 replies "1/2".  Then I ask the probability that the next ball is R, given that one red ball has been observed; program 2 replies "2/3".  The second ball is actually white, so the joint probability so far is 1/2 * 1/3 = 1/6.  Next I ask for the probability that the third ball is red, given that the previous observation is RW; this is summarized as "one red and one white ball", and the answer is 1/2.  The third ball is white, so the joint probability for RWW is 1/12.  For the fourth ball, given the previous observation RWW, the probability of redness is 2/5, and the joint probability goes to 1/30.  We can write this as p(RWWR|RWW) = 2/5, which means that if the sequence so far is RWW, the probability assigned by program 2 to the sequence continuing with R and forming RWWR equals 2/5.  And then p(RWWRR|RWWR) = 1/2, and the combined probability is 1/60. We can do this with every possible sequence of ten balls, and end up with a table of 1024 entries.  This table of 1024 entries constitutes a probability distribution over sequences of observations of length 10, and it says everything the Python program had to say (about 10 or fewer observations, anyway).  Suppose I have only this probability table, and I want to know the probability that the third ball is red, given that the first two balls drawn were white.  I need only sum over the probability of all entries beginning with WWR, and divide by the probability of all entries beginning with WW. We have thus transformed a program that computes the probability of future events given past experiences, into a probability distribution over sequences of observations. You wouldn't want to do this in real life, because the Python program is ever so much more compact than a table with 1024 entries.  The point is not that we can turn an efficient and compact computer program into a bigger and less efficient giant lookup table; the point is that we can view an inductive learner as a mathematical object, a distribution over sequences, which readily fits into standard probability calculus.  We can take a computer program that reasons from experience and think about it using probability theory. Why might this be convenient?  Say that I'm not sure which of these three scenarios best describes the urn - I think it's about equally likely that each of the three cases holds true.  How should I reason from my actual observations of the urn?  If you think about the problem from the perspective of constructing a computer program that imitates my inferences, it looks complicated - we have to juggle the relative probabilities of each hypothesis, and also the probabilities within each hypothesis.  If you think about it from the perspective of probability theory, the obvious thing to do is to add up all three distributions with weightings of 1/3 apiece, yielding a new distribution (which is in fact correct).  Then the task is just to turn this new distribution into a computer program, which turns out not to be difficult. 
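To make the arithmetic above easy to check, here is a small Python sketch (added for illustration; it is not from the original post) that computes the joint probability each rule assigns to a sequence such as RWWRR, along with the 1/3-weighted mixture of the three distributions:

```python
# Sketch: joint probability of an observation sequence under each of the
# three rules described above, plus the 1/3-weighted mixture.

def case1(R, W):           # 5 red and 5 white balls, drawn without replacement
    return (5 - R) / (10 - R - W)

def case2(R, W):           # Laplace's Law of Succession
    return (R + 1) / (R + W + 2)

def case3(R, W):           # fixed, known 50% chance for every ball
    return 0.5

def joint_prob(rule, sequence):
    """P(sequence) = product over draws of P(next ball | balls seen so far)."""
    prob = 1.0
    R = W = 0
    for ball in sequence:
        p_red = rule(R, W)
        prob *= p_red if ball == "R" else (1 - p_red)
        if ball == "R":
            R += 1
        else:
            W += 1
    return prob

seq = "RWWRR"
for name, rule in [("case 1", case1), ("case 2", case2), ("case 3", case3)]:
    print(name, joint_prob(rule, seq))
print("case 2 gives 1/60:", abs(joint_prob(case2, seq) - 1/60) < 1e-12)
mixture = sum(joint_prob(rule, seq) for rule in (case1, case2, case3)) / 3
print("1/3-weighted mixture:", mixture)
```

Running this confirms the 1/60 figure for program 2, and the last line is exactly the "add up all three distributions with weightings of 1/3 apiece" step applied to one sequence.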
So that is what a prior really is - a mathematical object that represents all of your starting information plus the way you learn from experience.

I'm confused when you say that the prior represents all your starting information plus the way you learn from experience. Isn't the way you learn from experience fixed, in this framework? Given that you are using Bayesian methods, so that the idea of a prior is well defined, then doesn't that already tell you how you will learn from experience?

Hal, with a poor prior, "Bayesian updating" can lead to learning in the wrong direction or to no learning at all. Bayesian updating guarantees a certain kind of consistency, but not correctness. (If you have five city maps that agree with each other, they might still disagree with the city.) You might think of Bayesian updating as a kind of lower level of organization - like a computer chip that runs programs, or the laws of physics that run the computer chip - underneath the activity of learning. If you start with a maxentropy prior that assigns equal probability to every sequence of observations, and carry out strict Bayesian updating, you'll still never learn anything; your marginal probabilities will never change as a result of the Bayesian updates. Conversely, if you somehow had a good prior but no Bayesian engine to update it, you would stay frozen in time and no learning would take place. To learn you need a good prior and an updating engine. Taking a picture requires a camera, light - and also time. This probably deserves its own post.

Another thing I don't fully understand is the process of "updating" a prior. I've seen different flavors of Bayesian reasoning described. In some, we start with a prior, get some information and update the probabilities. This new probability distribution now serves as our prior for interpreting the next incoming piece of information, which then causes us to further update the prior. In other interpretations, the priors never change; they are always considered the initial probability distribution. We then use those prior probabilities plus our sequence of observations since then to make new interpretations and predictions. I gather that these can be considered mathematically identical, but do you think one or the other is a more useful or helpful way to think of it? In this example, you start off with uncertainty about which process put in the balls, so we give 1/3 probability to each. But then as we observe balls coming out, we can update this prior. Once we see 6 red balls for example, we can completely eliminate Case 1 which put in 5 red and 5 white. We can think of our prior as our information about the ball-filling process plus the current state of the urn, and this can be updated after each ball is drawn.

Hal, You are being a bad boy. In his earlier discussion Eliezer made it clear that he did not approve of this terminology of "updating priors." One has posterior probability distributions. The prior is what one starts with. However, Eliezer has also been a bit confusing with his occasional use of such language as a "prior learning." I repeat, agents learn, not priors, although in his view of the post-human computerized future, maybe it will be computerized priors that do the learning. The only way one is going to get "wrong learning" at least somewhat asymptotically is if the dimensionality is high and the support is disconnected.
Eliezer is right that if one starts off with a prior that is far enough off, one might well have "wrong learning," at least for awhile. But, unless the conditions I just listed hold, eventually the learning will move in the right direction and head towards the correct answer, or probability distribution, at least that is what Bayes' Theorem asserts. OTOH, the reference to "deep Bayesianism" raises another issue, that of fundamental subjectivism. There is this deep divide among Bayesians between the ones that are ultimately classical frequentists but who argue that Bayesian methods are a superior way of getting to the true objective distribution, and the deep subjectivist Bayesians. For the latter, there are no ultimately "true" probability distributions. We are always estimating something derived out of our subjective priors as updated by more recent information, wherever those priors came from. Also, saying a prior should be the known probability distribution, say of cancer victims, assumes that this probability is somehow known. The prior is always subject to how much information the assumer of a prior has when they begin their process of estimation.

[anonymous] Eliezer may not approve of it, but almost all of the literature uses the phrase "updating a prior" to mean exactly the type of sequential learning from evidence that Eliezer discusses.

I prefer to think of it as 'updating a prior'. Bayes' theorem tells you that data is an operator on the space of probability distributions, converting prior information into posterior information. I think it's helpful to think of that process as 'updating' so that my prior actually changes to something new before the next piece of information comes my way.

Eliezer, Just to be clear . . . going back to your first paragraph, that 0.5 is a prior probability for the outcome of one draw from the urn (that is, for the random variable that equals 1 if the ball is red and 0 if the ball is white). But, as you point out, 0.5 is not a prior probability for the series of ten draws. What you're calling a "prior" would typically be called a "model" by statisticians. Bayesians traditionally divide a model into likelihood, prior, and hyperprior, but as you implicitly point out, the dividing line between these is not clear: ultimately, they're all part of the big model.

Barkley, I think you may be regarding likelihood distributions as fixed properties held in common by all agents, whereas I am regarding them as variables folded into the prior - if you have a probability distribution over sequences of observables, it implicitly includes beliefs about parameters and likelihoods. Where agents disagree about prior likelihood functions, not just prior parameter probabilities, their beliefs may trivially fail to converge. Andrew's point may be particularly relevant here - it may indeed be that statisticians call what I am talking about a "model". (Although in some cases, like the Laplace's Law of Succession inductor, I think they might call it a "model class"?) Jaynes, however, would have called it our "prior information" and he would have written "the probability of A, given that we observe B" as p(A|B,I) where I stands for all our prior beliefs including parameter distributions and likelihood distributions. While we may often want to discriminate between different models and model classes, it makes no sense to talk about discriminating between "prior informations" - your prior information is everything you start out with.
Eliezer, I am very interested in the Bayesian approach to reasoning you've outlined on this site, it's one of the more elegant ideas I've ever run into. I am a bit confused, though, about to what extent you are using math directly when assessing truth claims. If I asked you for example "what probability do you assign to the proposition 'global warming is anthropogenic' ?" (say), would you tell me a number? Or is this mostly about conceptually understanding that P(effect|~cause) needs to be taken into account? If it's a number, what's your heuristic for getting there (i.e., deciding on a prior probability & all the other probabilities)? If there's a post that goes into that much detail, I haven't seen it yet, though your explanations of Bayes theorem generally are brilliant.

My reason for writing this is not to correct Eliezer. Rather, I want to expand on his distinction between prior information and prior probability. Pages 87-89 of Probability Theory: the Logic of Science by E. T. Jaynes (2004 reprint with corrections, ISBN 0 521 59271 2) are dense with important definitions and principles. The quotes below are from there, unless otherwise indicated. Jaynes writes the fundamental law of inference as

P(H|DX) = P(H|X) P(D|HX) / P(D|X)    (4.3)

which the reader may be more used to seeing as

P(H|D) = P(H) P(D|H) / P(D)

where
H = some hypothesis to be tested
D = the data under immediate consideration
X = all other information known

X is the misleadingly-named ‘prior information’, which represents all the information available other than the specific data D that we are considering at the moment. “This includes, at the very least, all its past experiences, from the time it left the factory to the time it received its current problem.” --Jaynes p.87, referring to a hypothetical problem-solving robot. It seems to me that in practice, X ends up being a representation of a subset of all prior experience, attempting to discard only what is irrelevant to the problem. In real human practice, that representation may be wrong and may need to be corrected.

“ ... to our robot, there is no such thing as an ‘absolute’ probability; all probabilities are necessarily conditional on X at the least.” “Any probability P(A|X) which is conditional on X alone is called a prior probability. But we caution that ‘prior’ ... does not necessarily mean ‘earlier in time’ ... the distinction is purely a logical one; any information beyond the immediate data D of the current problem is by definition ‘prior information’.” “Indeed, the separation of the totality of the evidence into two components called ‘data’ and ‘prior information’ is an arbitrary choice made by us, only for our convenience in organizing a chain of inferences.” Please note his use of the word ‘evidence’.

Sampling theory, which is the basis of many treatments of probability, “ ... did not need to take any particular note of the prior information X, because all probabilities were conditional on H, and so we could suppose implicitly that the general verbal prior information defining the problem was included in H. This is the habit of notation that we have slipped into, which has obscured the unified nature of all inference.” “From the start, it has seemed clear how one determines numerical values of sampling probabilities¹ [e.g. P(D|H) ], but not what determines prior probabilities [AKA ‘priors’ e.g. P(H|X)]. In the present work we shall see that this is only an artifact of the unsymmetrical way of formulating problems, which left them ill-posed.
One could see clearly how to assign sampling probabilities because the hypothesis H was stated very specifically; had the prior information X been specified equally well, it would have been equally clear how to assign prior probabilities.” Jaynes never gives up on that X notation (though the letter may differ), he never drops it for convenience. “When we look at these problems on a sufficiently fundamental level and realize how careful one must be to specify prior information before we have a well-posed problem, it becomes clear that ... exactly the same principles are needed to assign either sampling probabilities or prior probabilities ...” That is, P(H|X) should be calculated. Keep your copy of Kendall and Stuart handy. I think priors should not be cheaply set from an opinion, whim, or wish. “ ... it would be a big mistake to think of X as standing for some hidden major premise, or some universally valid proposition about Nature.”

The prior information has impact beyond setting prior probabilities (priors). It informs the formulation of the hypotheses, of the model, and of “alternative hypotheses” that come to mind when the data seem to be showing something really strange. For example, data that seems to strongly support psychokinesis may cause a skeptic to bring up a hypothesis of fraud, whereas a career psychic researcher may not do so. (see Jaynes pp.122-125) I say, be alert for misinformation, biases, and wishful thinking in your X. Discard everything that is not evidence.

I’m pretty sure the free version Probability Theory: The Logic of Science is offline. You can preview the book here: http://books.google.com/books?id=tTN4HuUNXjgC&printsec=frontcover&dq=Probability+Theory:+The+Logic+of+Science&cd=1#v=onepage&q&f=false .

FOOTNOTES
1. There are massive compendiums of methods for sampling distributions, such as:
• Feller (An Introduction to Probability Theory and its Applications, Vol. 1, J. Wiley & Sons, New York, 3rd edn 1968, and Vol. 2, J. Wiley & Sons, New York, 2nd edn 1971)
• Kendall and Stuart (The Advanced Theory of Statistics: Volume 1, Distribution Theory, McMillan, New York 1977).
Be familiar with what is in them.
Edited 05/05/2010 to put in the actual references.

Then the task is just to turn this new distribution into a computer program, which turns out not to be difficult.

Can someone please provide a hint how?

Here's some Python code to calculate a prior distribution from a rule for assigning probability to the next observation. A "rule" is represented as a function that takes as a first argument the next observation (like "R") and as a second argument all previous observations (a string like "RRWR"). I included some example rules at the end. EDIT: oh man, what happened to my line spacing? my indents? jeez. EDIT2: here's a dropbox link: https://www.dropbox.com/s/16n01acrauf8h7g/prior_producer.py

    from functools import reduce

    def prod(sequence):
        '''Product equivalent of python's "sum"'''
        return reduce(lambda a, b: a*b, sequence)

    def sequence_prob(rule, sequence):
        '''Probability of a sequence like "RRWR" using the given rule for
        computing the probability of the next observation. To put it another
        way: computes the joint probability mass function.'''
        return prod([rule(sequence[i], sequence[:i]) for i in range(len(sequence))])

    def number2sequence(number, length):
        '''Convert a number like 5 into a sequence like WWRWR. The sequence
        corresponds to the binary digit representation of the number:
        5 --> 00101 --> WWRWR
        This is convenient for listing all sequences of a given length.'''
        binary_representation = bin(number)[2:]
        seq_end = binary_representation.replace('1', 'R').replace('0', 'W')
        if len(seq_end) > length:
            raise ValueError('no sequence of length {} with number {}'.format(length, number))
        # Now add W's to the beginning to make it the right length -
        # like adding 0's to the beginning of a binary number
        return ''.join('W' for i in range(length - len(seq_end))) + seq_end

    def prior(rule, n):
        '''Generate a joint probability distribution from the given rule over
        all sequences of length n. Doesn't feed the rule any background
        knowledge, so it's a prior distribution.'''
        sequences = [number2sequence(i, n) for i in range(2**n)]
        return [(seq, sequence_prob(rule, seq)) for seq in sequences]

And here's some examples of functions that can be used as the "rule" arguments.

    def laplaces_rule(next, past):
        R = past.count('R')
        W = past.count('W')
        if R + W != len(past):
            raise ValueError('knowledge is not just of red and white balls')
        red_prob = (R + 1)/(R + W + 2)
        if next == 'R':
            return red_prob
        elif next == 'W':
            return 1 - red_prob
        else:
            raise ValueError('can only predict whether next will be red or white')

    def antilaplaces_rule(next, past):
        return 1 - laplaces_rule(next, past)

So just to be clear. There are two things, the prior probability, which is the value P(H|I), and the background information which is 'I'. So P(H|D,I_1) is different from P(H|D,I_2) because they are updates using the same data and the same hypothesis, but with different partial background information; they are both, however, posterior probabilities. And the priors P(H|I_1) may be equal to P(H|I_2) even if I_1 and I_2 are radically different and produce updates in opposite directions given the same data. P(H|I) is still called the prior probability, but it is something very different from the background information which is essentially just I. Is this right?

Let me be more specific. Let's say my prior information is case1, then P( second ball is R| first ball is R & case1) = 4/9 If my prior information was case2, then P( second ball is R| first ball is R & case2) = 2/3 [by the rule of succession] and P( first ball is R| case1) = 50% = P( first ball is R|case2) This is why different prior information can make you learn in different directions, even if two prior informations produce the same prior probability? Please let me know if I am making any sort of mistake. Or if I got it right, either way. No really, I really want help. Please help me understand if I am confused, and settle my anxiety if I am not confused.

You got it right. The three different cases correspond to different joint distributions over sequences of outcomes. Prior information that one of the cases obtains amounts to picking one of these distributions (of course, one can also have weighted combinations of these distributions if there is uncertainty about which case obtains). It turns out that in this example, if you add together the probabilities of all the sequences that have a red ball in the second position, you will get 0.5 for each of the three distributions. So equal prior probabilities. But even though the terms sum to 0.5 in all three cases, the individual terms will not be the same. For instance, prior information of case 1 would assign a different probability to RRRRR (0.004) than prior information of case 2 (0.167).
So the prior information is a joint distribution over sequences of outcomes, while the prior probability of the hypothesis is (in this example at least) a marginal distribution calculated from this joint distribution. Since multiple joint distributions can give you the same marginal distribution for some random variable, different prior information can correspond to the same prior probability. When you restrict attention to those sequences that have a red ball in the first position, and now add together the (appropriately renormalized) joint probabilities of sequences with a red ball in the second position, you don't get the same number with all three distributions. This corresponds to the fact that the three distributions are associated with different learning rules. One can update one's beliefs about one's existing beliefs and the ways in which one learns from experience too – click. Under standard assumptions about the drawing process, you only need 10 numbers, not 1024: P(the urn initially contained ten white balls), P(the urn initially contained nine white balls and one red one), P(the urn initially contained eight white balls and two red ones), and so on through P(one white ball and nine red ones). (P(ten red balls) equals 1 minus everything else.) P(RWRWWRWRWW) is then P(4R, 6W) divided by the appropriate binomial coefficient. So then this initial probability estimate, 0.5, is not repeat not a "prior". This really confuses me. Considering the Universe in your example, which consists only of the urn with the balls, wouldn't one of the prior hypotheses(e.g. case 2) be a prior and have all the necessary information to compute the lookup table? In other words aren't the three following equivalent in the urn-with-balls universe? 1. Hypothesis 2 + bayesian updating 2. Python program 2 3. The lookup table generated from program 2 + Procedure for calculating conditional probability(e.g. if you want to know the probability that the third ball is red, given that the first two balls drawn were white.) Unless I am misunderstanding you, yes, that's precisely the point. I don't understand why you are confused, though. None of these are, after all, numbers in (0,1), which would not contain any information as to how you would go about doing your updates given more evidence. So then this initial probability estimate, 0.5, is not repeat not a "prior". 1:1 odds seems like it would be a default null prior, especially because one round of Bayes' Rule updates it immediately to whatever your first likelihood ratio is, kind of like the other mathematical identities.  If your priors represent "all the information you already know", then it seems like you (or someone) must have gotten there through a series of Bayesian inferences, but that series would have to start somewhere, right?   If (in the real universe, not the ball & urn universe) priors aren't determined by some chain of Bayesian inference, but instead by some degree of educated guesses / intuition / dead reckoning, wouldn't that make the whole process subject to a "garbage in, garbage out" fallacy(?). For a use case: A, low internal resolution rounded my posterior probability to 0 or 1, and now new evidence is not updating my estimations anymore, or B, I think some garbage crawled into my priors, but I'm not sure where.  In either case, I want to take my observations, and rebuild my chain of inferences from the ground up, to figure out where I should be.   So... where is the ground?  
If 1:1 odds is not the null prior, not the Bayesian Identity, then what is?
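One way to see the sense in which 1:1 odds behaves like an identity element (while, as the surrounding discussion keeps stressing, still saying nothing about how the likelihood ratios themselves are to be computed) is the odds form of Bayes' rule. A minimal illustrative sketch with an assumed likelihood ratio, not taken from the original thread:

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# Starting from 1:1 odds, one update lands exactly on the likelihood ratio.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior = 1.0                          # 1:1 odds
lr = 4.0                             # assumed likelihood ratio of the first observation
posterior = update_odds(prior, lr)
print(posterior)                     # 4.0 -> identical to the likelihood ratio
print(posterior / (1 + posterior))   # as a probability: 0.8
```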
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.827474057674408, "perplexity": 839.7255405714786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00413.warc.gz"}
https://stats.stackexchange.com/questions/340696/estimate-e-using-monte-carlo-methods-with-fast-convergence
# Estimate $e$ using Monte Carlo methods with fast convergence [closed]

I'd like to estimate $e$ using Monte Carlo methods. What is the Monte Carlo method that converges fastest as $n \to \infty$? Can you show some simulations using different $n$'s for this method?

## closed as too broad by Michael Chernick, mdewey, gung - Reinstate Monica♦ Apr 16 '18 at 13:29

• See stats.stackexchange.com/questions/193990/… for multiple approaches. Does it answer your question? – Tim Apr 16 '18 at 9:44
• @Tim It doesn't answer my question because some responses use $e$ itself to estimate it and others aren't Monte Carlo methods, at least, not unbiased, because the expected value is changing. I think that Aksakal's answer is good, but I'd like other approaches. – Leo Ribeiro Apr 16 '18 at 15:08

Given that there are some very efficient deterministic methods to calculate $e$ (up to a given number of decimal places), I would be surprised if there is any stochastic Monte-Carlo method that can hold a candle to these. Nevertheless, I'll try to get the ball rolling by giving one possible estimator. To be clear, I make absolutely no claim to efficiency here, but I'll give an estimator, and hopefully others will be able to offer better methods. I will assume for the purposes of this question that you are able to generate and use a sequence of $n$ uniform pseudo-random variables $U_1, \cdots , U_n \sim \text{IID U}(0,1)$ and you then need to estimate $e$ by some method using basic arithmetic operations.$^\dagger$

The present method is motivated by a simple result involving uniform random variables: $$\mathbb{E} \Bigg( \frac{\mathbb{I}(U_i \geqslant 1 / e) }{U_i} \Bigg) = \int \limits_{1/e}^1 \frac{du}{u} = 1.$$

Estimating $e$ using this result: We first order the sample values into descending order to obtain the order statistics $u_{(1)} \geqslant \cdots \geqslant u_{(n)}$ and then we define the partial sums: $$S_n(k) \equiv \frac{1}{n} \sum_{i=1}^k \frac{1}{u_{(i)}} \quad \text{for all } k = 1, .., n.$$

Now, let $m \equiv \min \{ k | S(k) \geqslant 1 \}$ and then estimate $1/e$ by interpolation of the ordered uniform variables. This gives an estimator for $e$ given by: $$\hat{e} \equiv \frac{2}{u_{(m)} + u_{(m+1)}}.$$

This method has some slight bias (owing to the linear interpolation of the cut-off point for $1/e$) but it is a consistent estimator for $e$. The method can be implemented fairly easily, but it requires sorting of values, which is more computationally intensive than deterministic calculation of $e$, so it is slow.

Implementation in R: The method can be implemented in R using runif to generate uniform values. The code is as follows:

    EST_EULER <- function(n) {
      U <- sort(runif(n), decreasing = TRUE);
      S <- cumsum(1/U)/n;
      m <- min(which(S >= 1));
      2/(U[m-1]+U[m]); }

Implementing this code gives convergence to the true value of $e$, but it is very slow compared to deterministic methods.
    set.seed(1234);
    EST_EULER(10^3);
    [1] 2.715426
    EST_EULER(10^4);
    [1] 2.678373
    EST_EULER(10^5);
    [1] 2.722868
    EST_EULER(10^6);
    [1] 2.722207
    EST_EULER(10^7);
    [1] 2.718775
    EST_EULER(10^8);
    [1] 2.718434
    > exp(1)
    [1] 2.718282

$^\dagger$ Obviously we want to avoid any method that makes use of any transformation that involves an exponential or logarithm. The only way we could employ these would be if we already have $e$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8451926112174988, "perplexity": 461.8580985511224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00473.warc.gz"}
http://digitalhaunt.net/Kentucky/calculation-of-relative-standard-error.html
# Calculation of relative standard error

The standard error (SE) is the standard deviation of the sampling distribution of a statistic,[1] most commonly of the mean. The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate. The standard error of the mean (SEM) (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of all possible sample means of a given size; the distribution of the mean in all possible samples is called the sampling distribution of the mean. The true standard deviation of the sample mean is

$$\text{SD}_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$

where $\sigma$ is the population standard deviation and $n$ is the size (number of observations) of the sample. This formula may be derived from what we know about the variance of a sum of independent random variables $X_1, X_2, \ldots, X_n$.[5] It assumes that the sample size is much smaller than the population size, so that the population can be considered effectively infinite; Sokal and Rohlf (1981)[7] give an equation of the correction factor for small samples of n < 20. In many practical applications, the true value of $\sigma$ is unknown, in which case the Student's t-distribution is used instead. As would be expected, larger sample sizes give smaller standard errors, while more variety in the data is likely to result in a higher standard deviation.

The Relative Standard Error (RSE) is the standard error expressed as a fraction of the estimate and is usually displayed as a percentage. The relative standard error shows if the standard error is large relative to the results; large relative standard errors suggest the results are not significant. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10% respectively. In fact, data organizations often set reliability standards that their data must reach before publication. It is important to consider the standard error when using LFS estimates as it affects the accuracy of the estimates.

The relative standard deviation (RSD) is a special form of the standard deviation that expresses the standard deviation as a percentage of the mean: RSD = 100 * s / |x̄|, where s is the sample standard deviation and x̄ is the sample mean. The result is expressed as a percentage, with a low number (< 2.5%) indicating a small spread of values and a high value indicating a significant spread of results. For example, for the points 1, 3, 3, and 5, where the mean is 3 and the standard deviation is 1.4, RSD = (1.4/3)*100 = 46.67%. Percentage relative standard deviation is a widely used statistical tool, but there is no automated function for it in any version of Microsoft Excel.
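A quick numeric check of the RSD example above (illustrative only; note that the quoted 46.67% comes from rounding the standard deviation to 1.4 before dividing, and the final line uses assumed values for $\sigma$ and $n$):

```python
import statistics as st

data = [1, 3, 3, 5]
mean = st.mean(data)          # 3
sd = st.pstdev(data)          # population SD = sqrt(2) ~ 1.414
rsd = 100 * sd / abs(mean)    # ~47.1%; rounding the SD to 1.4 first gives 46.67%
print(mean, round(sd, 3), round(rsd, 2))

# Standard error of the mean, SD_xbar = sigma / sqrt(n), with assumed values:
sigma, n = 10.0, 16
print(round(sigma / n ** 0.5, 3))   # 2.5
```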
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432597756385803, "perplexity": 743.9655177575215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857913.57/warc/CC-MAIN-20190122140606-20190122162606-00068.warc.gz"}
https://mathhelpboards.com/threads/unique-factorization-domain-nature-of-q_z-x-1.4808/
# Unique Factorization Domain? Nature of Q_Z[x] - 1

#### Peter ##### Well-known member MHB Site Helper Jun 22, 2012 2,918

Let [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] denote the set of polynomials with rational coefficients and integer constant terms. (a) If p is prime in [TEX] \mathbb{Z} [/TEX], prove that the constant polynomial p is irreducible in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX]. (b) If p and q are positive primes in [TEX] \mathbb{Z} [/TEX], prove that p and q are not associates in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] I am unsure of my thinking on these problems. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Regarding (a) I think the solution is as follows: We need to show that p is irreducible in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] That is if p = ab for [TEX] p, a, b \in \mathbb{Q}_\mathbb{Z}[x] [/TEX] then at least one of a or b must be a unit. But then we must have p = 1.p = p.1 since p is a prime in [TEX] \mathbb{Z} [/TEX] - BUT is it prime in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] (can someone help here???) But 1 is a unit in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] (and also in [TEX] \mathbb{Z} [/TEX]) - I have yet to properly establish this!) Thus p is irreducible in [TEX] \mathbb{Q}_\mathbb{Z}[x][/TEX] --------------------------------------------------------------------------------------------------------------------------------------------------------------------- Could someone please either confirm that my working is correct in (a) or let me know if my reasoning is incorrect or lacking in rigour. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Help with the general approach for (b) would be appreciated Peter [This has also been posted on MHF]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688779711723328, "perplexity": 1731.9863465917372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00047.warc.gz"}
http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/
# Matlab code and a Tutorial on DIRECT Optimization Yves Brise created this nice set of slides describing the DIRECT algorithm for Lipschitz functions.  Tim Kelly of Drexel University provides Matlab code here.
1. The slide shown doesn't appear right. Nelder-Mead and Hooke-Jeeves are both derivative-free.
2. Yes, neither Nelder-Mead nor Hooke-Jeeves uses the actual derivative. Nelder-Mead in effect does a gradient descent. By evaluating the function at the vertices of the simplex, it figures out approximately the direction of the gradient and uses that to determine the next evaluation. So it is quite similar to steepest descent. Hooke-Jeeves also is similar to gradient descent because it evaluates points near the best current estimate of the minimum. So despite the fact that they are derivative-free, it seems to me that they behave similarly to gradient descent.
3. Also, Conjugate Gradient is not derivative-free.
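As a rough illustration of the point made in the comments, namely that Nelder-Mead is derivative-free yet behaves like a descent method, here is a small sketch in R (my own example, not the Matlab code linked above). It minimises the Rosenbrock function with R's built-in derivative-free Nelder-Mead and, for comparison, with the gradient-based BFGS method:

```r
# Sketch only: a derivative-free method next to a gradient-based one.
rosenbrock <- function(v) {
  x <- v[1]; y <- v[2]
  (1 - x)^2 + 100 * (y - x^2)^2
}

# Derivative-free: Nelder-Mead simplex (the default method of optim()).
nm <- optim(par = c(-1.2, 1), fn = rosenbrock, method = "Nelder-Mead")

# Gradient-based: BFGS with a numerically approximated gradient.
bfgs <- optim(par = c(-1.2, 1), fn = rosenbrock, method = "BFGS")

nm$par    # both should end up near the minimiser c(1, 1)
bfgs$par
```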
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9279299974441528, "perplexity": 1354.3475110501315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00571.warc.gz"}
https://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-10th-edition/chapter-2-acute-angles-and-right-triangles-section-2-5-further-applications-of-right-triangles-2-5-exercises-page-83/36
## Trigonometry (10th Edition) Let $x$ be the distance from the vertical line to the point with an elevation of $22.667^{\circ}$. We can write an expression for the height $h$: $\frac{h}{x} = tan~22.667^{\circ}$ $h = x~tan~22.667^{\circ}$ We can use the second point to write another equation for the height $h$: $\frac{h}{x+7} = tan~10.833^{\circ}$ $h = (x+7)~tan~10.833^{\circ}$ We can equate the two expressions to find $x$: $x~tan~22.667^{\circ} = (x+7)~(tan~10.833^{\circ})$ $0.4176~x = 0.1914~x+1.3395$ $0.4176~x - 0.1914~x = 1.3395$ $x = \frac{1.3395}{0.2262}$ $x = 5.9218~km$ We can use the first equation to find $h$: $h = x~tan~22.667^{\circ}$ $h = (5.9218~km)~tan~22.667^{\circ}$ $h = 2.5~km$ The height of the top of Mount Whitney above the level road is 2.5 km
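The arithmetic above is easy to double-check numerically. A small sketch in R (not part of the original solution; note the degree-to-radian conversion):

```r
# Sketch only: numerical check of the two-angle height calculation.
deg <- pi / 180
t1 <- tan(22.667 * deg)   # ~0.4176
t2 <- tan(10.833 * deg)   # ~0.1914
x <- 7 * t2 / (t1 - t2)   # solve x*t1 = (x+7)*t2 for x
h <- x * t1               # height in km
c(x = x, h = h)           # h comes out at roughly 2.5 km
```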
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900394082069397, "perplexity": 86.80852823494895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584520525.90/warc/CC-MAIN-20190124100934-20190124122934-00580.warc.gz"}
http://mathhelpforum.com/calculus/4797-help-function-u-x-chain-rule-print.html
# Help with function u of x (chain rule) • August 7th 2006, 05:46 PM Yogi_Bear_79 Help with function u of x (chain rule) Given $f(x) =\frac{(x^2 +3x +1)^5}{(x+3)^5}$, identify a function u of x and an integer n not equal to 1 such that $f(x)=u^n$. Then compute $f'(x)$. • August 7th 2006, 06:57 PM ThePerfectHacker Quote: Originally Posted by Yogi_Bear_79 Given $f(x) =\frac{(x^2 +3x +1)^5}{(x+3)^5}$, identify a function u of x and an integer n not equal to 1 such that $f(x)=u^n$. Then compute $f'(x)$. You can rewrite this as $\left( \frac{x^2+3x+1}{x+3} \right)^5$ Therefore, $u=\frac{x^2+3x+1}{x+3}$ Thus, $\frac{du}{dx}=\frac{(x^2+3x+1)'(x+3)-(x^2+3x+1)(x+3)'}{(x+3)^2}$ Thus, $\frac{du}{dx}=\frac{(2x+3)(x+3)-(x^2+3x+1)(1)}{(x+3)^2}$ Thus, $\frac{du}{dx}=\frac{2x^2+9x+9-x^2-3x-1}{(x+3)^2}$ Thus, $\frac{du}{dx}=\frac{x^2+6x+8}{x^2+6x+9}=\frac{x^2+6x+9-1}{x^2+6x+9}=1-(x+3)^{-2}$ And, $\frac{dy}{du}=5u^4$ Thus, $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}=5\cdot \left( 1-(x+3)^{-2} \right) \cdot \frac{(x^2+3x+1)^4}{(x+3)^4}$ Thus, $\frac{dy}{dx}=\frac{5(x^2+3x+1)^4}{(x+3)^4}-\frac{5(x^2+3x+1)^4}{(x+3)^6}$
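As a sanity check on the final expression (my own verification, not part of the original thread), R's symbolic derivative can be compared with the simplified form at a test point:

```r
# Sketch only: compare the simplified derivative with R's symbolic D().
f_expr <- expression(((x^2 + 3*x + 1) / (x + 3))^5)
df <- D(f_expr, "x")          # symbolic derivative of f
x <- 2                        # arbitrary test point
symbolic   <- eval(df)
simplified <- 5 * (x^2 + 3*x + 1)^4 / (x + 3)^4 -
              5 * (x^2 + 3*x + 1)^4 / (x + 3)^6
c(symbolic = symbolic, simplified = simplified)  # should agree
```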
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335139393806458, "perplexity": 1338.3346986815468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00007-ip-10-60-113-184.ec2.internal.warc.gz"}
https://jasoncollins.blog/2020/01/22/ergodicity-economics-a-primer/
# Ergodicity economics: a primer In my previous posts on loss aversion (here, here and here), I foreshadowed a post on how "ergodicity economics" might shed some light on whether we need loss aversion to explain people's choices under uncertainty. This was to be that post, but the background material that I drafted is long enough to be a stand alone piece. I'll turn to the application of ergodicity economics to loss aversion in a future post. The below is largely drawn from presentations and papers by Ole Peters and friends, with my own evolutionary take at the end. For a deeper dive, see the lecture notes by Peters and Alexander Adamou, or a recent Perspective by Peters in Nature Physics. The choice Suppose you have $100 and are offered a gamble involving a series of coin flips. For each flip, heads will increase your wealth by 50%. Tails will decrease it by 40%. Flip 100 times. The expected payoff What will happen? For that first flip, you have a 50% chance of a $50 gain, and a 50% chance of a $40 loss. Your expected gain (each outcome weighted by its probability, 0.5*$50 + 0.5*-$40) is $5 or 5% of your wealth. The absolute size of the stake for future flips will depend on past flips, but for every flip you have the same expected gain of 5% of your wealth. Should you take the bet? I simulated 10,000 people who each started with $100 and flipped the coin 100 times each. This line in Figure 1 represents the mean wealth of the 10,000 people. It looks good, increasing roughly in accordance with the expected gain, despite some volatility, and finishing at a mean wealth of over $16,000. Figure 1: Average wealth of population Yet people regularly decline gambles of this nature. Are they making a mistake? One explanation for declining this gamble is risk aversion. A risk averse person will value the expected outcome of a gamble lower than the same sum with certainty. Risk aversion can be represented through the concept of utility, where each level of wealth gives subjective value (utility) for the gambler. If people maximise utility instead of the value of a gamble, it is possible that a person would reject the bet. For example, one common utility function to represent a risk averse individual is to take the logarithm of each level of wealth. If we apply the log utility function to the gamble above, the gambler will reject the offer of the coin flip. [The maths here is simply that the expected utility of the gamble is 0.5*ln(150) + 0.5*ln(60)=4.55, which is less than the utility of the sure $100, ln(100)=4.61.] The time average growth rate For a different perspective, below is the plot for the first 20 of these 10,000 people. Interestingly, only two people do better than break even (represented by the black line at $100). The richest has less than $1,000 at period 100. Figure 2: Path of first 20 people What is happening here? The first plot shows that the average wealth across all 10,000 people is increasing. When we look at the first 20 individuals, their wealth generally declines. Even those that make money make less than the gain in aggregate wealth would suggest. To show this more starkly, here is a plot of the first 20 people on a log scale, together with the average wealth for the full population. They are all below average in final wealth. Figure 3: Plot of first 20 people against average wealth (log scale) If we examine the full population of 10,000, we see an interesting pattern.
The mean wealth is over $16,000, but the median wealth after 100 periods is 51 cents, a loss of over 99% of the initial wealth. 54% of the population ends up with less than $1. 86% finishes with less than the initial wealth of $100. Yet 171 people end up with more than $10,000. The wealthiest person finishes with $117 million, which is over 70% of the total wealth of the population. For most people, the series of bets is a disaster. It looks good only on average, propped up by the extreme good luck and massive wealth of a few people. The expected payoff does not match the experience of most people. Four possible outcomes One way to think about what is happening is to consider the four possible outcomes over the first two periods. The first person gets two heads. They finish with $225. The second and third person get a heads and a tails (in different orders), and finish with $90. The fourth person ends up with $36. The average across the four is $110.25, reflecting the compound 5% growth. That's our positive picture. But three of the four lost money. As the number of flips increases, the proportion who lose money increases, with a rarer but more extraordinarily rich cohort propping up the average. Almost surely Over the very long-term, an individual will tend to get around half heads and half tails. As the number of flips goes to infinity, the number of heads and tails is "almost surely" equal. This means that each person will tend to get a 50% increase half the time (or 1.5 times the initial wealth), and a 40% decrease half the time (60% of the initial wealth). A bit of maths and the time average growth in wealth for an individual is (1.5*0.6)^0.5 ~ 0.95, or approximately a 5% decline in wealth each period. Every individual's wealth will tend to decay at that rate. To get an intuition for this, a long run of equal numbers of heads and tails is equivalent to flipping a head and a tail every two periods. Suppose that is exactly what you did – flipped a heads and then flipped a tail. Your wealth would increase to $150 in the first round ($100*1.5), and then decline to $90 in the second ($150*0.6). You get the same result if you change the order. Effectively, you are losing 10% (or getting only 1.5*0.6=0.9) of your money every two periods. A system where the time average converges to the ensemble average (our population mean) is known as an ergodic system. The system of gambles above is non-ergodic as the time average and the ensemble average diverge. And given we cannot individually experience the ensemble average, we should not be misled by it. The focus on ensemble averages, as is typically done in economics, can be misleading if the system is non-ergodic. The longer term How can we reconcile this expectation of loss when looking at the time average growth with the continued growth of the wealth of some people after 100 periods? It does not seem that everyone is "almost surely" on the path to ruin. But they are. If we plot the simulation for, say, 1,000 periods rather than 100, there are few winners. Here's a plot of the average wealth of the population for 1000 periods (the first 100 being as previously shown), plus a log plot of that same growth (Figures 4 and 5). Figure 4: Plot of average wealth over 1000 periods Figure 5: Plot of average wealth over 1000 periods (log plot) We can see that despite a large peak in wealth around period 400, wealth ultimately plummets.
Average wealth at period 1000 is $24, below the starting average of $100, with a median wealth of 1×10^-21 (rounding to the nearest cent, that is zero). The wealthiest person has $242 thousand dollars, with that being 98.5% of the total wealth. If we followed that wealthy person for another 1000 generations, I would expect them to be wiped out too. [I tested that – at 2000 periods the wealthiest person had $4×10^-7.] Despite the positive expected value, the wealth of the entire population is wiped out. Losing wealth on a positive value bet The first 100 periods of bets force us to hold a counterintuitive idea in our minds. While the population as an aggregate experiences outcomes reflecting the positive expected value of the bet, the typical person does not. The increase in wealth across the aggregate population is only due to the extreme wealth of a few lucky people. However, the picture over 1000 periods appears even more confusing. The positive expected value of the bet is nowhere to be seen. How could this be the case? The answer to this lies in the distribution of bets. After 100 periods, one person had 70% of the wealth. We no longer have 10,000 equally weighted independent bets as we did in the first round. Instead, the path of the wealth of the population is largely subject to the outcome of the bets by this wealthy individual. As we have already shown, the wealth path for an individual almost surely leads to a compound 5% loss of wealth. That individual's wealth is on borrowed time. The only way for someone to maintain their wealth would be to bet a smaller portion of their wealth, or to diversify their wealth across multiple bets. The Kelly criterion On the first of these options, the portion of a person's wealth they should enter as stakes for a positive expected value bet such as this is given by the Kelly criterion. The Kelly criterion gives the bet size that would maximise the geometric growth rate in wealth. The Kelly criterion formula for a simple bet is as follows: $f=\frac{bp-q}{b}=\frac{p(b+1)-1}{b}$ where
f is the fraction of the current bankroll to wager
b is the net odds received on the wager (i.e. you receive $b back on top of the $1 wagered for the bet)
p is the probability of winning
q is the probability of losing (1-p)
For the bet above, we have p=0.5 and $b=\frac{0.5}{0.4}=1.25$. As offered, we are effectively required to bet f=0.4, or 40% of our wealth, for that chance to win a 50% increase. However, if we apply the above formula given p and b, a person should bet $\frac{(0.5*(1.25+1)-1)}{1.25}=0.1$, or 10%, of their wealth each round to maximise the geometric growth rate. The Kelly criterion is effectively maximising the expected log utility of the bet through setting the size of the bet. The Kelly criterion will result in someone wanting to take a share of any bet with positive expected value. The Kelly bet "almost surely" leads to higher wealth than any other strategy in the long run. If we simulate the above scenarios, but risking only 10% of wealth each round rather than 40% (i.e. heads wealth will increase by 12.5%, tails it will decrease by 10%), what happens? The expected value of the Kelly bet is 0.5*0.125+0.5*-0.1=0.0125 or 1.25% per round. This next figure shows the ensemble average, showing a steady increase. Figure 6: Average wealth of population applying Kelly criterion (1000 periods) If we look at the individuals in this population, we can also see that their paths more closely resemble that of the population average.
Most still under-perform the mean (the system is still non-ergodic – the time average growth rate is (1.125*0.9)^0.5 = 1.006, or 0.6%), and there is large wealth disparity with the wealthiest person having 36% of the total wealth after 1000 periods (after 100, they have 0.5% of the wealth). Still most people are better off, with 70% and 95% of the population experiencing a gain after 100 and 1000 periods respectively. The median wealth is almost $50,000 after the 1000 periods. Figure 7: Plot of first 20 people applying Kelly criterion against average wealth (log scale, 1000 periods) Unfortunately, given our take it or leave it choice we opened with involving 40% of our wealth, we can't use the Kelly Criterion to optimise the bet size and should refuse the bet. Update clarifying some comments on this post: An alternative more general formula for the Kelly criterion that can be used for investment decisions is: $f=\frac{p}{a}-\frac{q}{b}$ where
f is the fraction of the current bankroll to invest
b is the value by which your investment increases (i.e. you receive $b back on top of each $1 you invested)
a is the value by which your investment decreases if you lose (the first formula above assumes a=1)
p is the probability of winning
q is the probability of losing (1-p)
Applying this formula to the original bet at the beginning of this post, a=0.4 and b=0.5, by which f=0.5/0.4-0.5/0.5=0.25 or 25%. Therefore, you should put up 25% of your wealth, of which you could potentially lose 40% or win 50%. This new formulation of the Kelly criterion gives the same recommendation as the former, but refers to different baselines. In the first case, the optimal bet is 10% of your wealth, which provides for a potential win of 12.5%. In the second case, you invest 25% of your wealth to possibly get a 50% return (12.5% of your wealth) or lose 40% of your investment (40% of 25% which is 10%). Despite the same effective recommendation, in one case you talk of f being 10%, and in the second 25%. Evolving preferences Suppose two types of agent lived in this non-ergodic world and their fitness was dependent on the outcome of the 50:50 bet for a 50% gain or 40% loss. One type always accepted the bet, the other always rejected it. Which would come to dominate the population? An intuitive reaction to the above examples might be that while the accepting type might have a short term gain, in the long run they are almost surely going to drive themselves extinct. There are a couple of scenarios where that would be the case. One is where the children of a particular type were all bound to the same coin flip as their siblings for subsequent bets. Suppose one individual had over 1 million children after 100 periods, comprising around 70% of the population (which is what they would have if we borrowed the above simulations for our evolutionary scenario, with one coin flip per generation). If all had to bet on exactly the same coin flip in period 101 and beyond, they are doomed. If, however, each child faces their own coin flip (experiencing, say, idiosyncratic risks), that crash never comes. Instead the risk of those flips is diversified and the growth of the population more closely resembles the ensemble average, even over the very long term. Below is a chart of population for a simulation of 100 generations of the accepting population, starting with a population of 10,000.
For this simulation I have assumed that at the end of each period, the accepting types will have a number of children equal to the proportional increase in their wealth. For example, if they flip heads, they will have 1.5 children. For tails, they will have 0.6 children. They then die. (The simulation works out largely the same if I make the number of children probabilistic in accord with those numbers.) Each child takes their own flip. Figure 8: Population of accepting types This has an expected population growth rate of 5%. This evolutionary scenario differs from the Kelly criterion in that the accepting types are effectively able to take many independent shares of the bet for a tiny fraction of their inclusive fitness. In a Nature Physics paper summarising some of his work, Peters writes: [I]n maximizing the expectation value – an ensemble average over all possible outcomes of the gamble – expected utility theory implicitly assumes that individuals can interact with copies of themselves, effectively in parallel universes (the other members of the ensemble). An expectation value of a non-ergodic observable physically corresponds to pooling and sharing among many entities. That may reflect what happens in a specially designed large collective, but it doesn't reflect the situation of an individual decision-maker. For a replicating entity that is able to diversify future bets across many offspring, they are able to do just this. There are a lot of wrinkles that could be thrown into this simulation. How many bets does someone have to make before they reproduce and effectively diversify their future? The more bets, the higher the chance of a poor end. There is also the question of whether bets by children would be truly independent (imagine a highly-related tribe). Risk and loss aversion in ergodicity economics In my next post on this topic I ask whether, given the above, we need risk and loss aversion to explain our choices.

## Code

Below is the R code used for generation of the simulations and figures. Load the required packages:

```r
library(ggplot2)
library(scales) #use the percent scale later
```

Create a function for the bets.

```r
bet <- function(p, pop, periods, gain, loss, ergodic=FALSE){
  #p is probability of a gain
  #pop is how many people in the simulation
  #periods is the number of coin flips simulated for each person
  #if ergodic=FALSE, gain and loss are the multipliers
  #if ergodic=TRUE, gain and loss are the dollar amounts
  params <- as.data.frame(c(p, pop, periods, gain, loss, ergodic))
  rownames(params) <- c("p", "pop", "periods", "gain", "loss", "ergodic")
  colnames(params) <- "value"
  sim <- matrix(data = NA, nrow = periods, ncol = pop)
  if(ergodic==FALSE){
    for (j in 1:pop) {
      x <- 100 #x is the number of dollars each person starts with
      for (i in 1:periods) {
        outcome <- rbinom(n=1, size=1, prob=p)
        ifelse(outcome==0, x <- x*loss, x <- x*gain)
        sim[i,j] <- x
      }
    }
  }
  if(ergodic==TRUE){
    for (j in 1:pop) {
      x <- 100 #x is the number of dollars each person starts with
      for (i in 1:periods) {
        outcome <- rbinom(n=1, size=1, prob=p)
        ifelse(outcome==0, x <- x-loss, x <- x+gain)
        sim[i,j] <- x
      }
    }
  }
  sim <- rbind(rep(100,pop), sim) #placing the $x starting sum in the first row
  sim <- cbind(seq(0,periods), sim) #number each period
  sim <- data.frame(sim)
  colnames(sim) <- c("period", paste0("p", 1:pop))
  sim <- list(params=params, sim=sim)
  sim
}
```

Simulate 10,000 people who accept a series of 1000 50:50 bets to win 50% of their wealth or lose 40%.
```r
set.seed(20191215)
nonErgodic <- bet(p=0.5, pop=10000, periods=1000, gain=1.5, loss=0.6, ergodic=FALSE)
```

Create a function for plotting the average wealth of the population over a set number of periods.

```r
averagePlot <- function(sim, periods=100){
  basePlot <- ggplot(sim$sim[c(1:(periods+1)),], aes(x=period)) + labs(y = "Average Wealth ($)")
  averagePlot <- basePlot + geom_line(aes(y = rowMeans(sim$sim[c(1:(periods+1)),2:(sim$params[2,]+1)])), color = 1, size=1)
  averagePlot
}
```

Plot the average outcome of these 10,000 people over 100 periods (Figure 1).

```r
averagePlot(nonErgodic, 100)
```

Create a function for plotting the path of individuals in the population over a set number of periods.

```r
individualPlot <- function(sim, periods, people){
  basePlot <- ggplot(sim$sim[c(1:(periods+1)),], aes(x=period)) + labs(y = "Wealth ($)")
  for (i in 1:people) {
    basePlot <- basePlot + geom_line(aes_string(y = sim$sim[c(1:(periods+1)),(i+1)]), color = 2)
    #need to use aes_string rather than aes to get all lines to print rather than just last line
  }
  basePlot
}
```

Plot of the path of the first 20 people over 100 periods (Figure 2).

```r
nonErgodicIndiv <- individualPlot(nonErgodic, 100, 10)
nonErgodicIndiv
```

Plot both the average outcome and first twenty people on the same plot using a log scale (Figure 3).

```r
logPlot <- function(sim, periods, people) {
  individualPlot(sim, periods, people) +
    geom_line(aes(y = rowMeans(sim$sim[c(1:(periods+1)),2:(sim$params[2,]+1)])), color = 1, size=1) +
    scale_y_log10()
}
nonErgodicLogPlot <- logPlot(nonErgodic, 100, 20)
nonErgodicLogPlot
```

Create a function to generate summary statistics.

```r
summaryStats <- function(sim, period=100){
  meanWealth <- mean(as.matrix(sim$sim[(period+1),2:(sim$params[2,]+1)]))
  medianWealth <- median(as.matrix(sim$sim[(period+1),2:(sim$params[2,]+1)]))
  numDollar <- sum(sim$sim[(period+1),2:(sim$params[2,]+1)]<=1) #number with less than a dollar
  numGain <- sum(sim$sim[(period+1),2:(sim$params[2,]+1)]>=100) #number who gain
  num10000 <- sum(sim$sim[(period+1),2:(sim$params[2,]+1)]>=10000) #number who finish with more than $10,000
  winner <- max(sim$sim[(period+1),2:(sim$params[2,]+1)]) #wealth of wealthiest person
  winnerShare <- winner / sum(sim$sim[(period+1),2:(sim$params[2,]+1)]) #wealth share of wealthiest person
  print(paste0("mean: $", round(meanWealth, 2)))
  print(paste0("median: $", round(medianWealth, 2)))
  print(paste0("number with less than a dollar: ", numDollar))
  print(paste0("number who gained: ", numGain))
  print(paste0("number that finish with more than $10,000: ", num10000))
  print(paste0("wealth of wealthiest person: $", round(winner)))
  print(paste0("wealth share of wealthiest person: ", percent(winnerShare)))
}
```

Generate summary statistics for the population and wealthiest person after 100 periods.

```r
summaryStats(nonErgodic, 100)
```

Plot the average wealth of the non-ergodic simulation over 1000 periods (Figure 4).

```r
averagePlot(nonErgodic, 1000)
```

Plot the average wealth of the non-ergodic simulation over 1000 periods using a log plot (Figure 5).

```r
averagePlot(nonErgodic, 1000) + scale_y_log10()
```

Calculate some summary statistics about the population and the wealthiest person after 1000 periods.

```r
summaryStats(nonErgodic, 1000)
```

Kelly criterion bets

Calculate the optimum Kelly bet size.

```r
p <- 0.5
q <- 1-p
b <- (1.5-1)/(1-0.6)
f <- (b*p-q)/b
f
```

Run a simulation using the optimum bet size.

```r
set.seed(20191215)
kelly <- bet(p=0.5, pop=10000, periods=1000, gain=1+f*b, loss=1-f, ergodic=FALSE)
```

Plot ensemble average of Kelly bets (Figure 6).
```r
averagePlotKelly <- averagePlot(kelly, 1000)
averagePlotKelly
```

Plot of the path of the first 20 people over 1000 periods (Figure 7).

```r
logPlotKelly <- logPlot(kelly, 1000, 20)
logPlotKelly
```

Generate summary stats after 1000 periods of the Kelly simulation.

```r
summaryStats(kelly, 1000)
```

Evolutionary simulation

Simulate the population of accepting types.

```r
set.seed(20191215)
evolutionBet <- function(p,pop,periods,gain,loss){
  #p is probability of a gain
  #pop is how many people in the simulation
  #periods is the number of generations simulated
  params <- as.data.frame(c(p, pop, periods, gain, loss))
  rownames(params) <- c("p", "pop", "periods", "gain", "loss")
  colnames(params) <- "value"
  sim <- matrix(data = NA, nrow = periods, ncol = 1)
  sim <- rbind(pop, sim) #placing the starting population in the first row
  for (i in 1:periods) {
    for (j in 1:round(pop)) {
      outcome <- rbinom(n=1, size=1, prob=p)
      ifelse(outcome==0, x <- loss, x <- gain)
      pop <- pop + (x-1)
    }
    pop <- round(pop)
    print(i)
    sim[i+1] <- pop #"+1" as have starting population in first row
  }
  sim <- cbind(seq(0,periods), sim) #number each period
  sim <- data.frame(sim, row.names=NULL)
  colnames(sim) <- c("period", "pop")
  sim <- list(params=params, sim=sim)
  sim
}
evolution <- evolutionBet(p=0.5, pop=10000, periods=100, gain=1.5, loss=0.6) #more than 100 periods can take a very long time, simulation slows markedly as population grows
```

Plot the population growth for the evolutionary scenario (Figure 8).

```r
basePlotEvo <- ggplot(evolution$sim[c(1:101),], aes(x=period))
expectationPlotEvo <- basePlotEvo + geom_line(aes(y=pop), color = 1, size=1) + labs(y = "Population")
expectationPlotEvo
```

## 9 thoughts on "Ergodicity economics: a primer"

1. Workhorse says: "If we simulate the above scenarios, but betting only 10% of wealth each round rather than 40%…". In your original scenario, you are betting 100% of wealth, not 40%
1. You're right given the way I've described the first bet. I should use the word "risking" rather than "betting" in the quoted sentence. Cheers.
2. Workhorse may be alluding to a more standard nomenclature. The bet fraction f, in that nomenclature, is the proportion of my wealth which I subject to the effects of the gamble. For instance, f=2 means I can lose in one round 80% of my total wealth; or win 100% of my total wealth. f=1 means I can lose in one round 40% of my wealth; or win 50% of my wealth. f=1/2 means I keep 1/2 of my wealth safe and only subject the other half to the effects of the gamble, so that I either win 25% or lose 20% of my total wealth, and so on. The optimal bet fraction, in this nomenclature, is f=1/4, meaning I either win 12.5% or lose 10% of my total current wealth. The wikipedia entry for the Kelly criterion https://en.wikipedia.org/wiki/Kelly_criterion gives the following formula for the case you're treating: f*=p/a-q/b (5th equation from the top). This corresponds to the nomenclature Workhorse is alluding to. Nothing wrong here, but I, too, got confused by the nomenclature.
1. I see what you mean. On the choice, am I subjecting 100% of my wealth to the gamble, or 40%? You could make a case for either frame. I like the simple bet version of the Kelly Criterion formula – assuming a=1 as in the first formula in the Wikipedia article – as when I'm switching back and forth thinking about multiplicative or additive scenarios, I find it easier to have f as simply the portion of my wealth that I could lose. Still, I'll probably add a couple of sentences above to note the alternative framing using the more general formula.
3.
The situation becomes easier to understand if you make it more extreme: With tails, your wealth gets multiplied by 2.01. Now it’s obvious: Given an infinite population, the aggregate wealth will go up each flip. But this is an average of 1-2^(-N) probability of being bankrupt, and 2^-N probability of being insanely wealthy. Thus, when you simulate a fixed, finite number of people, it won’t take too long before every last one of them is bankrupt. 4. Aris says: Can you clarify the two different bet sizes: is risking 10% optimal or is risking 25% optimal? Why do the two Kelly formulas give different answers to what appears to be the same question? 1. Here’s another shot at explaining it:- The 10%: Think of the bet as multiplying your stake by 1.25 (or offering 1.25:1 odds). In that case, bet 10% of your wealth. The 25%: Think of an investment whereby you have a 50% chance of losing 40% of your investment, and a 50% chance of gaining 50%. In that case, invest 25%. Each are equivalent, but which framing is used tends to vary with user. The first framing is common in betting as you typically lose your whole stake. The second works well for investments where the result typically isn’t a complete loss. Which is the best framing for the ergodicity example, I’m not sure. In the first framing, you are asked to bet 40% of your wealth to possibly win 50% (plus return of your original stake). In the second you are asked to invest your whole wealth to possibly lose 40% of it or gain 50% of it. The second framing is just the first version with the “investment” scaled up by a factor of 2.5, hence the 10%/25% difference between the two. 5. Alexander Bruk says: Thanks, great post, very interesting!
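A compact numerical restatement of the two framings discussed in these comments (a sketch using the post's own numbers, not code from the post):

```r
# Sketch only: the two equivalent Kelly framings from the comments above.
p <- 0.5; q <- 1 - p
b <- 0.5 / 0.4                   # net-odds framing: win 1.25 per unit staked
f_stake  <- (b * p - q) / b      # fraction staked: 0.10
f_invest <- p / 0.4 - q / 0.5    # investment framing (a = 0.4, b = 0.5): 0.25
# Either way, the wealth at risk per flip is the same:
c(win = f_stake * b, loss = f_stake)            # +12.5% / -10% of wealth
c(win = f_invest * 0.5, loss = f_invest * 0.4)  # +12.5% / -10% of wealth
```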
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8331077694892883, "perplexity": 2050.9102365247295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400227524.63/warc/CC-MAIN-20200925150904-20200925180904-00599.warc.gz"}
http://science.sciencemag.org/content/342/6163/1169.4.full
# Comment on "Poverty Impedes Cognitive Function" See all authors and affiliations Science  06 Dec 2013: Vol. 342, Issue 6163, pp. 1169 DOI: 10.1126/science.1246680 ## Abstract Mani et al. (Research Articles, 30 August, p. 976) presented laboratory experiments that aimed to show that poverty-related worries impede cognitive functioning. A reanalysis without dichotomization of income fails to corroborate their findings and highlights spurious interactions between income and experimental manipulation due to ceiling effects caused by short and easy tests. This suggests that effects of financial worries are not limited to the poor. Mani et al. (1) recently presented four laboratory experiments and a field study that aimed to show that poverty impedes cognitive functioning. We criticize their results on statistical and psychometric grounds. Mani et al. ran three randomized experiments in which U.S. adults were assigned to read one of two sets of financial scenarios that differed in their activation of financial concerns. Although participants' income varied from $7560 to $160,000, Mani et al. used a median split to analyze income data. This procedure has been criticized strongly for being associated with lower power, loss of information on individual differences, and its inability to pinpoint nonlinear relations (2). Of the two measures of cognitive functioning in Mani et al.'s studies, only the Raven's scores are fairly symmetrically distributed. We therefore submitted these data to linear regressions involving family income (mean-centered to facilitate interpretation) and an interaction between income and the type of scenario. Results are given in Table 1. In none of the three core experiments (1, 3, and 4) was the interaction significant when analyzed without unnecessary dichotomization of income. We also analyzed data from study 2, which aimed to show that the effect of poverty-related worries could be distinguished from a form of test anxiety and would not occur in similar, but nonfinancial, scenarios. We note that the second experiment is appreciably smaller (N = 39 people) than the other three experiments (N > 95 people) and so is associated with lower statistical power. Of importance are the regression weights; those from study 2 are not appreciably different than those in the core studies. Table 1 Linear regressions of Raven's accuracy on mean-centered income and scenario and the interaction between income and scenario. Income is mean-centered to improve interpretability and avoid multicollinearity. Conditional and unconditional bootstrapping corroborated these results. B indicates unstandardized regression weight, with standard error (SE). The second measure of cognitive functioning employed by Mani et al., cognitive control, showed non-normal distributions that render them unsuitable for linear analyses (see Fig. 1). The measure was developed specifically to assess cognitive control among children and showed clear ceiling effects, as it did in earlier work involving adults (3). Because higher-income adults outperform lower-income adults, the easiness of the control test is particularly problematic in the higher-income range; more than half of the participants in the above-median income group acquired a perfect or near-perfect score (11 or 12 correct out of 12 items). In fact, the negative skew was so extreme that satisfactory normalization of the scores using a Box-Cox transformation was impossible.
However, if the transformed, platykurtotic scores are subjected to a linear regression, the interaction is no longer significant in two of the three core experiments. Had the test been able to discriminate between higher levels of cognitive control, the difference between financial scenarios might have been established for the rich participants also. Hence, the core interaction that was meant to indicate that the poverty-related scenario only affected the poor may be an artifact of the cognitive control test's being too easy (4, 5). Latent variable modeling could be used to deal with such issues (6–8). We note that a highly relevant potential confound in the field study presented by Mani et al. is the possibility of retesting effects. The lack of any retesting effect in Mani et al.'s field study involving Indian farmers is clearly at odds with one of the more robust findings in the literature on cognitive testing (9). Retesting effects on the Raven's tests are particularly profound among test-takers with little education (10). Mani et al. go beyond the data by concluding that "The poor…are less capable not because of inherent traits, but because the very context of poverty…impedes cognitive capacity." We note that the correlation between income and IQ also appears in longitudinal studies in which IQ was measured years before incomes (11). Further research is needed to fully grasp whether poverty indeed affects cognitive performance, as proposed by Mani et al., or whether the effect found in their experiments is a test artifact. The stronger cognitive impediment experienced by the poor could merely be the result of an inappropriate statistical test and an overly easy cognitive control measure. The latter could obscure an equally "threatening" effect in the rich, simply because they were unable to obtain higher scores when not threatened. With such methodological issues remaining to be addressed, the authors' proposal of far-reaching policy changes, such as timing HIV educational campaigns to harvest cycles, seems premature.
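The reanalysis described above is essentially an interaction regression of Raven's accuracy on mean-centred income, scenario, and their product. As a rough sketch of that kind of model, with made-up data and hypothetical variable names rather than the authors' actual code:

```r
# Sketch only: an income-by-scenario interaction model on simulated data.
set.seed(1)
n <- 200
dat <- data.frame(
  income   = runif(n, 7500, 160000),
  scenario = factor(sample(c("easy", "hard"), n, replace = TRUE))
)
dat$raven <- 0.6 + 0.000001 * dat$income + rnorm(n, sd = 0.1)  # fake outcome
dat$income_c <- dat$income - mean(dat$income)       # mean-centre income
fit <- lm(raven ~ income_c * scenario, data = dat)  # income x scenario model
summary(fit)  # the income_c:scenario row tests the key interaction
```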
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8011749386787415, "perplexity": 2595.483946559163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00032.warc.gz"}
http://forum.dominionstrategy.com/index.php?topic=11041.925
# Dominion Strategy Forum • April 07, 2020, 10:01:43 am • Welcome, Guest Please login or register. Login with username, password and session length ### News: DominionStrategy Wiki Pages: 1 ... 36 37 [38] 39 40 ... 45  All ### AuthorTopic: Maths thread.  (Read 121647 times) 0 Members and 1 Guest are viewing this topic. #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. « Reply #925 on: November 05, 2017, 03:55:07 am » 0 This is probably an easy problem, but what is sum(k from 0 to n) [(k+1) * (k out of n)] I figured it can be rewritten as sum (k from 0 to n (k out of n)) + sum (k from 1 to n (k out of n)) + ... + sum (k from n to n (k out of n)) but couldn't solve that either. Logged #### navical • Golem • Offline • Posts: 190 • Respect: +254 ##### Re: Maths thread. « Reply #926 on: November 05, 2017, 04:19:32 am » 0 What do you mean by "k out of n"? Is it "n choose k" i.e. n!/(k!(n-k)!)? Logged #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. « Reply #927 on: November 05, 2017, 04:51:29 am » 0 yes. (In german we say "n over k" because that's how the operator looks, but that's understood as n/k in english, so I used "k out of n" because that's what it counts.) Logged #### faust • Margrave • Offline • Posts: 2631 • Shuffle iT Username: faust • Respect: +3682 ##### Re: Maths thread. « Reply #928 on: November 05, 2017, 08:34:30 am » 0 Well, the sum over all (n choose k) is 2^n, I feel like that should be part of the solution. Thus, you can rewrite to 2^n+2^n-(n choose 0)+2^n-(n choose 0)-(n choose 1)+... or in other words n*2^n-sum(k from 0 to n-1) (n-1-k)*(n choose k) Also (n choose k)=(n-1 choose k-1)+(n-1 choose k) Thus you can rewrite the above sum and should be able to progress in some inductive manner I believe. Logged Since the number of points is within a constant factor of the number of city quarters, in the long run we can get (4 - ε) ↑↑ n points in n turns for any ε > 0. #### navical • Golem • Offline • Posts: 190 • Respect: +254 ##### Re: Maths thread. « Reply #929 on: November 05, 2017, 10:26:15 am » +1 If you call the original sum S_n then using (n choose k) = (n-1 choose k-1) + (n-1 choose k) you can get a recurrence relation S_n = 2S_{n-1} + 2^{n-1}. This has closed form S_n = (n+2)*2^{n-1} which you can prove by induction. Logged #### heron • Saboteur • Offline • Posts: 1033 • Shuffle iT Username: heron • Respect: +1157 ##### Re: Maths thread. « Reply #930 on: November 05, 2017, 11:35:12 am » +2 The given sum is the number of ways to color a group of n points with red, green, and blue such that at most one point is colored red (think of each term as selecting k points to color green, and then choosing either one of those k points or no point to be red, and coloring the rest blue). The number of ways to color points in that fashion is 2^(n-1) * n + 2^n = 2^(n-1) * (n + 2) as navical computed. The first term is the number of ways if there is a red point, and the second term is the number of ways if there is no red point. Logged #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. « Reply #931 on: November 05, 2017, 01:05:54 pm » +1 Well, the sum over all (n choose k) is 2^n, I feel like that should be part of the solution. Thus, you can rewrite to 2^n+2^n-(n choose 0)+2^n-(n choose 0)-(n choose 1)+... 
or in other words n*2^n-sum(k from 0 to n-1) (n-1-k)*(n choose k) I did try that, but I didn't get any further than the above, because I didn't see how to handle the negative sums any better than the original sum. The given sum is the number of ways to color a group of n points with red, green, and blue such that at most one point is colored red (think of each term as selecting k points to color green, and then choosing either one of those k points or no point to be red, and coloring the rest blue). The number of ways to color points in that fashion is 2^(n-1) * n + 2^n = 2^(n-1) * (n + 2) as navical computed. The first term is the number of ways if there is a red point, and the second term is the number of ways if there is no red point. That is a really smart and intuitive explanation. Thanks. It took me a few minutes to figure out how you get from that to the formula but I think I got it. We do 2 cases separately: there is a red point, then first choose that (n) then have 2 choices for any other point (* 2^(n-1)) or there is no red point, then just have 2 choices for each point (+ 2^n). Those cases are disjoint and exhaustive. Logged #### sudgy • Cartographer • Offline • Posts: 3405 • Shuffle iT Username: sudgy • It's pronounced "SOO-jee" • Respect: +2669 ##### Re: Maths thread. « Reply #932 on: November 05, 2017, 03:15:08 pm » 0 Here's another one: Say you have a set S of sets, where each subset has size N.  What is the maximum size of S for a given N such that for all pairs of sets in S, they share exactly one element?  My brother and I have figured out that the upper bound is N * (N - 1) + 1, and have verified that that is the case for N = 1, 2, and 3, but haven't found a way to show that that is the answer. EDIT: Read below, this isn't quite perfect. « Last Edit: November 05, 2017, 06:32:58 pm by sudgy » Logged If you're wondering what my avatar is, watch this. Check out my logic puzzle blog! Quote from: sudgy on June 31, 2011, 11:47:46 pm #### Watno • Margrave • Offline • Posts: 2740 • Shuffle iT Username: Watno • Respect: +2970 ##### Re: Maths thread. « Reply #933 on: November 05, 2017, 04:35:12 pm » +3 I think you missed some restriction in the question, because you can get an arbitrary size for S when N> 1  (by having each s \in S consist of a common element x_0 and further elements x^s_1, ..., x^s_N, where all the x^M_i are different). « Last Edit: November 05, 2017, 04:36:30 pm by Watno » Logged #### sudgy • Cartographer • Offline • Posts: 3405 • Shuffle iT Username: sudgy • It's pronounced "SOO-jee" • Respect: +2669 ##### Re: Maths thread. « Reply #934 on: November 05, 2017, 06:12:39 pm » 0 I think you missed some restriction in the question, because you can get an arbitrary size for S when N> 1  (by having each s \in S consist of a common element x_0 and further elements x^s_1, ..., x^s_N, where all the x^M_i are different). Oh crap, you're right.  Each element needs to be in the same number of sets. EDIT: Now that I think of it, this might not work either.  I feel like saying that each element needs to be in more than one set could work too, but I'm not sure if that stops the infinite answer. « Last Edit: November 05, 2017, 06:32:41 pm by sudgy » Logged If you're wondering what my avatar is, watch this. Check out my logic puzzle blog! Quote from: sudgy on June 31, 2011, 11:47:46 pm #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. 
« Reply #935 on: November 11, 2017, 08:57:13 am » 0 which theorem (I'm sure it exists but I can't find it) states that for a general triangle split like this it holds that c^2 = p*(p+q)? Logged #### Watno • Margrave • Offline • Posts: 2740 • Shuffle iT Username: Watno • Respect: +2970 ##### Re: Maths thread. « Reply #936 on: November 11, 2017, 09:10:28 am » 0 I think that statement is false (consider the case where q=0). In the case of a right triangle, it's an easy consequence of the pythagorean theorem and https://en.wikipedia.org/wiki/Geometric_mean_theorem (Höhensatz in German) « Last Edit: November 11, 2017, 09:15:56 am by Watno » Logged #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. « Reply #937 on: November 11, 2017, 09:33:05 am » 0 aw. I needed it to prove that the triangle is a right triangle and I remembered it from somewhere. well in that case that does not work. thanks. Logged #### heron • Saboteur • Offline • Posts: 1033 • Shuffle iT Username: heron • Respect: +1157 ##### Re: Maths thread. « Reply #938 on: November 12, 2017, 10:38:48 am » 0 Logged #### silverspawn • Governor • Offline • Posts: 4318 • Shuffle iT Username: sty.silver • Respect: +1978 ##### Re: Maths thread. « Reply #939 on: October 15, 2018, 06:06:34 pm » +2 Here's something I've been thinking about again recently So, a while ago I tried to prove that the derivative of sin is cos using the limit calculation, that's lim h->0 [sin(x + h) - sin(x)] / h. After finishing it, I realized that I used the rule of L'hopital, so I used the fact that sin' = cos in order to prove that sin' = cos. And ofc logically speaking, sin' = cos => sin' = cos is a tautology, so it doesn't prove anything. But then I realized that it's actually still pretty strong evidence – certainly it would be strong rational evidence if you didn't know what the derivative of sin was – because if you postulate an incorrect derivative, the same calculation will most likely get you a contradiction. For example, if you postulate that sin(x)' cos(x)' = x, then what you prove is that sin'(x) = cos'(x) = x => sin(x) = 0, which is a contradiction, so it does give you a valid proof that sin(x)' or cos(x)' does not equal x. This made me think that perhaps the correct result is the only result that would not yield a contradiction in this way. If that were true and you could somehow prove it to be true, then the tautological proof of sin' = cos would actually become a legit proof. It turns out that's not true, though, because I found two counterexamples: sin' = cos' = 0 and sin' = sin, cos' = cos both return tautologies rather than contradictions. this keeps being true if you also plug them into the limit calculation for cos(x). But I still suspect that the class of contradictions is quite small. maybe that's wrong. It also made me think about whether this is a formally stateable question. You may not be able to ask "is there another function f such that if sin(x)' = f(x), you get a stable result doing the limit calculation with l'hopital", because if sin(x)' = f(x) for f(x) ≠ cos(x), you would have a contradiction and then everything follows, so it probably would be possible to do the calculation and conclude that sin(x)' = x => sin(x)' = x. This is the general problem of reasoning about logical uncertainty. But at the same time, it is a question that's pretty easy to understand informally. 
Maybe you could formulate it if you restricted the operations that are allowed, but that sounds weird. Logged #### ConMan • Saboteur • Offline • Posts: 1368 • Respect: +1647 ##### Re: Maths thread. « Reply #940 on: October 15, 2018, 06:27:41 pm » 0 Not quite the same thing, but in one applied maths course I took the lecturer solved a problem, but at one step he pointed out that there are a bunch of conditions you're meant to check before you do it, and said "We can leave that to the pure mathematicians, we'll just get an answer and check that it works". And he did - once he reached an answer, he proved it was a solution to the original problem, and never bothered to confirm that the process he used to reach it was valid. Logged #### infangthief • Moneylender • Offline • Posts: 151 • Shuffle iT Username: infangthief • Respect: +314 ##### Re: Maths thread. « Reply #941 on: October 16, 2018, 04:20:41 am » +1 I think what you're saying here is addressed by the concept of logical completeness. I'm a bit rusty on this, but here's what I recall: A set of axioms may be complete or incomplete. Now consider throwing in a new axiom (say sin' = cos). If the original set of axioms is complete, then the new axiom will either cause a contradiction, or the new axiom will turn out to be derivable from the other axioms. If the original set of axioms is incomplete, then it would be possible for the new axiom to be unprovable and yet not cause a contradiction. So what you're effectively asking is: are my axioms for limit calculation, trig functions, derivatives etc complete? If they are complete, then any result will either be correct or can be used to generate a contradiction. Logged Three different people now have quotes from me as their sigs. I guess I’m quite quotable! #### faust • Margrave • Offline • Posts: 2631 • Shuffle iT Username: faust • Respect: +3682 ##### Re: Maths thread. « Reply #942 on: October 16, 2018, 04:45:19 am » +1 Then of course, for the question of completeness, we need to consider Gödel's incompleteness theorem, which states that any consistent system with arithmetic cannot be complete. The axioms of limit calculation probably need to include arithmetic rules, so your system will be incomplete; thus, it is guaranteed that there are other axioms you could add (though of course it is unclear that any of them will be of the form sin' = f). « Last Edit: October 16, 2018, 09:50:39 am by faust » Logged Since the number of points is within a constant factor of the number of city quarters, in the long run we can get (4 - ε) ↑↑ n points in n turns for any ε > 0. #### pacovf • Cartographer • Offline • Posts: 3431 • Multiediting poster • Respect: +3773 ##### Re: Maths thread. « Reply #943 on: October 16, 2018, 09:43:06 am » +1 I mean, if we are getting to that point, the question is, how do you define “sin(x)”? Logged pacovf has a neopets account.  It has 999 hours logged.  All his neopets are named "Jessica".  I guess that must be his ex. #### fisherman • Steward • Offline • Posts: 27 • Shuffle iT Username: fisherman • Respect: +33 ##### Re: Maths thread. « Reply #944 on: October 16, 2018, 09:56:03 am » 0 This made me think that perhaps the correct result is the only result that would not yield a contradiction in this way. If that were true and you could somehow prove it to be true, then the tautological proof of sin' = cos would actually become a legit proof. This definitely cannot work without an independent argument showing that sin is differentiable. 
For comparison, you can often argue that if a given sequence approaches a limit, then the limit has a certain value. In some cases, this will in fact be the correct limit, while in others the limit will not actually exist. Logged #### theorel • Spy • Offline • Posts: 86 • Shuffle iT Username: theorel • Respect: +56 ##### Re: Maths thread. « Reply #945 on: October 16, 2018, 10:28:36 am » +3 Here's something I've been thinking about again recently So, a while ago I tried to prove that the derivative of sin is cos using the limit calculation, that's lim h->0 [sin(x + h) - sin(x)] / h. After finishing it, I realized that I used the rule of L'hopital, so I used the fact that sin' = cos in order to prove that sin' = cos. And ofc logically speaking, sin' = cos => sin' = cos is a tautology, so it doesn't prove anything. But then I realized that it's actually still pretty strong evidence – certainly it would be strong rational evidence if you didn't know what the derivative of sin was – because if you postulate an incorrect derivative, the same calculation will most likely get you a contradiction. For example, if you postulate that sin(x)' cos(x)' = x, then what you prove is that sin'(x) = cos'(x) = x => sin(x) = 0, which is a contradiction, so it does give you a valid proof that sin(x)' or cos(x)' does not equal x. The other replies are good for the general question, but this specific question seems to have some interesting points in it. First: note that applying l'Hopital to the general derivative rule produces a tautology: lim h->0 f(x+h)-f(x)/h=lim h->0 f'(x+h)/1=f'(x). If you restrict yourself to using the derivative of sin, then you'll get the tautology.  The reason your problem gets wonky is because you're involving the derivative of cos as well, because of the other steps involved in finding the actual solution. Trying to solve the limit, I assume you used the addition rule for sin, and got to: sin'(x) = lim h->0 (sin(x)*(cos(h)-1)/h)+ (cos(x)*(sin(h)/h)) Now what's interesting here is that the parts of this equation involving the limit are in fact derivatives at 0.  i.e. lim h->0 (cos(h)-1)/h=cos'(0) and lim h->0 sin(h)/h=sin'(0). So, applying l'Hopital here is no longer a tautology, it's in fact not changing the equation at all.  So, we have an equation that tells us a relationship between sin'(x) and sin'(0) and cos'(0).  To simplify the form of the relationship let's suppose sin'(x)=f(x) and cos'(x)=g(x) we get: f(x)=g(0)*sin(x)+f(0)*cos(x). Now, if we evaluate this at 0 we see that f(0)=g(0)*0+f(0)*1=f(0).  So we can choose any values for f(0) and g(0) and it's valid.  So we get the full set of all solutions: f(x)=C*sin(x)+D*cos(x). Note that g(x) does not depend at all on f(x), so it can be whatever you want as long as you choose it first, rather than trying to arbitrarily choose both at the same time. So, in the end it's funny because using l'Hopital on the derivative rule produces a tautology, but using it on this modified form does not produce a tautology, but actually just expresses what's already true.  The "tautology" element comes from the f(0) evaluation above, but the produced equation is actually a pretty tight constraint on the form of sin'(x). « Last Edit: October 16, 2018, 10:32:47 am by theorel » Logged #### gamesou • Apprentice • Offline • Posts: 290 • Respect: +337 ##### Re: Maths thread. 
#### gamesou
##### Re: Maths thread. « Reply #946 on: November 13, 2018, 07:15:02 am »
A quick math riddle involving fruits

#### silverspawn
##### Re: Maths thread. « Reply #947 on: November 13, 2018, 09:37:15 am »
Disclaimer: the riddle is very hard and the smallest correct numbers have 20+ digits. Far fewer than 5% of all people could solve this.

#### Iridium
##### Re: Maths thread. « Reply #948 on: November 13, 2018, 04:27:04 pm »
How would you even start solving it? I haven't graduated high school yet, but I know at least up through Calc II, so I figured I would be qualified to post in this thread. But still, I have no idea how to even start.

#### ThetaSigma12
##### Re: Maths thread. « Reply #949 on: November 13, 2018, 04:28:36 pm »
Quote:
Disclaimer: the riddle is very hard and the smallest correct numbers have 20+ digits. Far fewer than 5% of all people could solve this.

I took a crack at it. I got (relatively) pretty close with 7, 1, and 1, to 3 and 3/4. Made the numbers bigger and with 23, 3, and 3 I got roughly 4.064, so I assumed you would just keep on using a similar method until it worked. Didn't seem worth taking it to oblivion, and I'm glad I didn't.
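The riddle itself does not appear in the text above (it was presumably posted as an image). Assuming it is the well-known "fruit equation" puzzle — find positive whole numbers a, b, c with a/(b+c) + b/(c+a) + c/(a+b) = 4, a reading consistent with the 7, 1, 1 → 3¾ and 23, 3, 3 → ≈4.064 values quoted in the last reply — a brute-force search is the natural first attempt, and it also shows why the disclaimer about huge numbers is plausible: nothing turns up among small triples.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Hypothetical reading of the riddle: find positive integers a, b, c with
#   a/(b+c) + b/(c+a) + c/(a+b) = 4   (the expression is symmetric in a, b, c)
def total(a, b, c):
    return Fraction(a, b + c) + Fraction(b, c + a) + Fraction(c, a + b)

print(float(total(7, 1, 1)))    # 3.75   -- the "3 and 3/4" mentioned above
print(float(total(23, 3, 3)))   # ~4.064 -- the other value mentioned above

hits = [t for t in combinations_with_replacement(range(1, 100), 3)
        if total(*t) == 4]
print(hits)                     # []     -- no solutions with entries below 100
```

If the riddle really is this equation, the standard attack is to convert it into a question about rational points on an elliptic curve; brute force has essentially no chance of reaching the smallest solution.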
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368724584579468, "perplexity": 2098.082782864614}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371799447.70/warc/CC-MAIN-20200407121105-20200407151605-00192.warc.gz"}
https://www.physicsforums.com/threads/period-frequency-wavelength-and-velocity-of-a-light-wave.726043/
# Period, frequency, wavelength, and velocity of a light wave

1. Dec 1, 2013

### Violagirl

1. The problem statement, all variables and given/known data

A light wave has a frequency of 6 x 10^14 Hz. A) What is its period? B) What is its wavelength in a vacuum? C) When the light wave enters water, its velocity decreases to 0.75 times its velocity in vacuum. What happens to the frequency and wavelength?

2. Relevant equations

c = λf

v = c/n

T = 1/f

3. The attempt at a solution

I understood how to do parts A and B.

For A, since we know the frequency, we can take the equation for the period, T, to find the answer: T = 1/f = 1/(6 x 10^14 Hz) = 1.67 x 10^-15 sec.

For B, we know that c = 3.0 x 10^8 m/s in a vacuum and we're given the frequency, so the wavelength is found by taking the equation: λ = c/f = (3.0 x 10^8 m/s)/(6 x 10^14 Hz) = 5 x 10^-7 m.

For part C, however, I got confused. I believe we can use the equation v = c/n. We're told that the velocity decreases to 0.75 times its value in vacuum, so I think that v would then be v = 0.75c. So from here, we have to relate it to the equation c = λf. Would we use the equation 0.75c = λf to find the wavelength, since we already know what f is? Can we assume that f does not change but that the wavelength would? To get the wavelength then, we'd take 0.75c/f = λ?

2. Dec 1, 2013

### sandy.bridge

Yes, the frequency of the wave will remain constant.

3. Dec 1, 2013

### Violagirl

So for finding the wavelength then, how exactly do you use the information that v is reduced to 0.75 times its original value to find the new wavelength in water?

4. Dec 1, 2013

### sandy.bridge

The new velocity will be 3/4 the original velocity c. You know what c is, so compute the new velocity. Furthermore, you know what the frequency is. That's two out of three variables. You can solve for lambda.

5. Dec 1, 2013

### Violagirl

Oooh! Ok got it. I wanted to verify my understanding when we were told that the velocity was 0.75 times its original velocity in water. Got an answer of 3.75 x 10^-7 m, which makes sense as an answer. Thanks!

6. Dec 1, 2013

### sandy.bridge

Perfect. Since the velocity decreased by a factor of 3/4, the wavelength does also. This can be verified by multiplying the vacuum wavelength you obtained previously by 3/4.
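A quick numerical check of the thread's answers (a plain Python sketch, not from the thread itself; the only inputs are the frequency and the vacuum speed of light used above):

```python
f = 6e14        # Hz, given frequency
c = 3.0e8       # m/s, speed of light in vacuum (value used in the thread)

T = 1 / f                  # period:             ~1.67e-15 s
lam_vac = c / f            # vacuum wavelength:   5e-7 m
v_water = 0.75 * c         # speed in water, as stated in part C
lam_water = v_water / f    # frequency unchanged, so wavelength scales with speed: 3.75e-7 m

print(T, lam_vac, lam_water)
```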
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661391973495483, "perplexity": 1085.059373248835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823114.39/warc/CC-MAIN-20171018195607-20171018215607-00498.warc.gz"}
http://en.wikipedia.org/wiki/Equivariant
Equivariant map

For equivariance in estimation theory, see Invariant estimator.

In mathematics, an equivariant map is a function between two sets that commutes with the action of a group. Specifically, let G be a group and let X and Y be two associated G-sets. A function f : X → Y is said to be equivariant if

f(g·x) = g·f(x) for all g ∈ G and all x in X.

Note that if one or both of the actions are right actions, the equivariance condition must be suitably modified:

f(x·g) = f(x)·g ; (right-right)
f(x·g) = g⁻¹·f(x) ; (right-left)
f(g·x) = f(x)·g⁻¹ ; (left-right)

Equivariant maps are homomorphisms in the category of G-sets (for a fixed G). Hence they are also known as G-maps or G-homomorphisms. Isomorphisms of G-sets are simply bijective equivariant maps.

The equivariance condition can also be understood as a commutative square: acting by $g$ and then applying $f$ gives the same result as applying $f$ and then acting by $g$. Here $g\cdot$ denotes the map that takes an element $z$ and returns $g\cdot z$.

Intertwiners

A completely analogous definition holds for the case of linear representations of G. Specifically, if X and Y are the representation spaces of two linear representations of G, then a linear map f : X → Y is called an intertwiner of the representations if it commutes with the action of G. Thus an intertwiner is an equivariant map in the special case of two linear representations/actions.

Alternatively, an intertwiner for representations of G over a field K is the same thing as a module homomorphism of K[G]-modules, where K[G] is the group ring of G.

Under some conditions, if X and Y are both irreducible representations, then an intertwiner (other than the zero map) only exists if the two representations are equivalent (that is, are isomorphic as modules). That intertwiner is then unique up to a multiplicative factor (a non-zero scalar from K). These properties hold when the image of K[G] is a simple algebra, with centre K (by what is called Schur's Lemma: see simple module). As a consequence, in important cases the construction of an intertwiner is enough to show the representations are effectively the same.

Categorical description

Equivariant maps can be generalized to arbitrary categories in a straightforward manner. Every group G can be viewed as a category with a single object (morphisms in this category are just the elements of G). Given an arbitrary category C, a representation of G in the category C is a functor from G to C. Such a functor selects an object of C and a subgroup of automorphisms of that object. For example, a G-set is equivalent to a functor from G to the category of sets, Set, and a linear representation is equivalent to a functor to the category of vector spaces over a field, Vect_K.

Given two representations, ρ and σ, of G in C, an equivariant map between those representations is simply a natural transformation from ρ to σ. Using natural transformations as morphisms, one can form the category of all representations of G in C. This is just the functor category C^G.

For another example, take C = Top, the category of topological spaces. A representation of G in Top is a topological space on which G acts continuously. An equivariant map is then a continuous map f : X → Y between representations which commutes with the action of G.
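As a concrete illustration of the defining condition f(g·x) = g·f(x) (this example is not from the article; it uses the additive group Z/12 acting on itself by translation), here is a minimal Python check. A further translation is a G-map, while the doubling map — a perfectly good group homomorphism — is not:

```python
# G = Z/12 acting on X = Y = Z/12 by translation: g.x = (g + x) mod 12
n = 12
act = lambda g, x: (g + x) % n

def is_equivariant(f):
    """Check f(g.x) == g.f(x) for every g in G and x in X."""
    return all(f(act(g, x)) == act(g, f(x)) for g in range(n) for x in range(n))

shift5 = lambda x: (x + 5) % n    # translate by 5
double = lambda x: (2 * x) % n    # multiply by 2

print(is_equivariant(shift5))   # True:  (g + x) + 5 == g + (x + 5)
print(is_equivariant(double))   # False: 2*(g + x) != g + 2*x in general
```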
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.977307915687561, "perplexity": 294.33018889207176}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936459513.8/warc/CC-MAIN-20150226074059-00238-ip-10-28-5-156.ec2.internal.warc.gz"}
https://bt.gateoverflow.in/148/gate2013-18
If $u=\log (e^x+e^y),$ then $\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=$ 1. $e^x+e^y$ 2. $e^x-e^y$ 3. $\frac{1}{e^x+e^y}$ 4. $1$
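A short worked solution (added here for reference; it is not part of the original post): by the chain rule,
$$\frac{\partial u}{\partial x}=\frac{e^x}{e^x+e^y}, \qquad \frac{\partial u}{\partial y}=\frac{e^y}{e^x+e^y},$$
so
$$\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=\frac{e^x+e^y}{e^x+e^y}=1,$$
which is option 4.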
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9337137341499329, "perplexity": 50.089366133246834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00346.warc.gz"}
https://www.physicsforums.com/threads/poissons-ratio-and-ultrasonic-velocity-for-isotropic-material.146591/
# Homework Help: Poisson's Ratio and Ultrasonic Velocity for Isotropic Material

1. Dec 4, 2006

### dsanyal

For an isotropic material, the relation between the longitudinal ultrasonic velocity (VL), the transverse (shear) ultrasonic velocity (VT) and the Poisson's ratio (nu) is given by

(VT/VL)^2 = (1-2*nu)/(2*(1-nu))

From the above relation, one gets that VL=0 when nu=1, which is not physically acceptable, as nu varies between -1 and 0.5 for an isotropic material. On the contrary, when nu=0, VT/VL=sqrt(0.5). However, when nu=0, what happens to the longitudinal ultrasonic velocity? Does VL become zero when nu becomes zero?

2. Dec 4, 2006

### AlephZero

"Does VL become zero when nu becomes zero?"

No. Why did you think it might be zero?

3. Dec 5, 2006

### dsanyal

Poisson's ratio (nu) zero means that there will be no lateral deformation of the material. In that case, if VL exists, the material will also have a longitudinal elastic modulus. Under extension, the volume of the material will tend to increase without any lateral contraction. This can only happen if the density changes simultaneously. Is that feasible?

4. Dec 5, 2006

### AlephZero

The transverse wave is a shear wave. The volume of material doesn't change in shear, for any value of nu. The shear modulus G = E/(2(1+nu)) is not zero when nu is zero.

Velocity of VL = sqrt(E/rho), velocity of VT = sqrt(G/rho).

There are materials where (in some specific coordinate system) E is non-zero and G is zero, but they are not isotropic. A piece of woven cloth is one example: G = 0 in a coordinate system lined up with the fibres of the cloth.

If you think of these waves as stress waves, not "displacement" waves, the names make more sense. For the VL wave, there is a lateral displacement (expansion and contraction) when nu is non-zero, but there is no lateral stress. For the VT wave there is transverse shear stress, but no longitudinal stress.

5. Dec 5, 2006

### dsanyal

Thanks. I am still confused. For an isotropic material, when nu=0, if E and G are both non-zero, then under extension the material volume should increase, as it cannot have lateral contraction. This can only happen if the density decreases. Is this physically tenable? If yes, then the decreasing density should imply an increase in VL, otherwise elastic instability will occur.

6. Dec 5, 2006

### AlephZero

The conventional way to define density for solids undergoing small strains is relative to the unstrained condition of the material - i.e. the density is (by definition) always constant, even for problems involving temperature change and thermal expansion. The relevance of this definition is that when you set up the equations of motion you are considering the motion of a fixed piece of material (e.g. a small box of size dx.dy.dz) and the mass of that material is constant. This method is called the Lagrangian formulation of the equations.

You can indeed formulate the equations of motion by considering a fixed region in space instead of a fixed piece of material. Then, material with varying density moves in and out of that region during the motion. This is called the Eulerian formulation and it's often used in fluid mechanics, but rarely in solid mechanics. The results of both formulations are identical because they both use the same Newtonian mechanics, the same constitutive equations for the material, etc. There is no advantage in using an Eulerian formulation for solid mechanics unless you have a problem involving large strains and/or displacements (e.g. plastic deformation of ductile materials).
To summarize, it's possible to formulate the equations of motion correctly with your "variable density" definition, but you would then have a different form of wave equation, and its physical solution would still be the same as for the Lagrangian equations.

Finally, there are things called ALE formulations (Arbitrary Lagrangian-Eulerian) which combine both approaches - try a literature search for the keywords ALE, DYNA3D (a computer program), and John Hallquist (its author) for more on those.

Hope this helps.
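A small numeric look at the velocity ratio implied by the relation quoted in the first post (a plain Python sketch, not from the thread; it uses only that relation and G = E/(2(1+nu)) mentioned in reply #4):

```python
import numpy as np

# (VT/VL)^2 = (1 - 2*nu) / (2*(1 - nu)), from the first post
ratio = lambda nu: np.sqrt((1 - 2 * nu) / (2 * (1 - nu)))

for nu in (-0.5, 0.0, 0.25, 0.49):
    print(f"nu = {nu:5.2f}   VT/VL = {ratio(nu):.4f}")

# nu = 0 gives VT/VL = sqrt(0.5) ~ 0.707: both wave speeds stay finite and
# non-zero, consistent with G = E/(2*(1+nu)) not vanishing at nu = 0.
# As nu -> 0.5 the ratio -> 0 (VL grows much larger than VT); nothing in the
# formula forces VL to zero at nu = 0.
```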
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706228971481323, "perplexity": 1739.1210643420122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00463.warc.gz"}
https://nbhmcsirgate.theindianmathematician.com/2020/04/csir-june-2011-part-c-question-74_14.html
### CSIR JUNE 2011 PART C QUESTION 74 SOLUTION (Let $T: \Bbb R^n \to \Bbb R^n$ be a linear transformation such that $T^2 = \lambda T$)

Let $T: \Bbb R^n \to \Bbb R^n$ be a linear transformation such that $T^2 = \lambda T$ for some $\lambda \in \Bbb R$. Then

1) $||T(x)|| = |\lambda| ||x||$ for all $x \in \Bbb R^n$,

2) If $||T(x)|| = ||x||$ for some non-zero vector $x \in \Bbb R^n$ then $\lambda = \pm 1$,

3) $T = \lambda I$ where $I$ is the $n \times n$ identity matrix,

4) If $||T(x)|| > ||x||$ for some non-zero vector $x \in \Bbb R^n$ then $T$ is necessarily singular.

Solution: Consider the $2 \times 2$ nilpotent matrix $$A = \begin{bmatrix}0&1\\0&0\end{bmatrix}.$$ We have $A^2 = 0$, and hence this matrix satisfies the given condition $A^2 = \lambda A$ with $\lambda = 0$. Since $A$ is upper triangular, its eigenvalues are its diagonal entries. Hence the eigenvalues of $A$ are $0,0$.

Option 1: (False) We use the above matrix $A$ as a counterexample. Let $x = \begin{bmatrix}0\\1\end{bmatrix}$; then $Ax = y$ where $y = \begin{bmatrix}1\\0\end{bmatrix}$. Now, $$||Ax|| = ||y|| = \sqrt{1^2 + 0^2} = 1.$$ But $|\lambda| ||x|| = 0$. This shows that, for our matrix $A$, $$||Ax|| \ne |\lambda| ||x||.$$

Option 3: (False) We again use the above matrix $A$. We have seen that this matrix satisfies $A^2 = \lambda A$ with $\lambda = 0$. We observe that $A$ is a non-zero matrix, whereas $\lambda I = 0I$ is the zero matrix. Hence $A \ne \lambda I$ in this case.

Option 4: (False) Consider the matrix $$A = \begin{bmatrix}2&0\\ 0&2\end{bmatrix},$$ which satisfies $A^2 = 2A$, that is, $A^2 = \lambda A$ with $\lambda = 2$. Let $x = \begin{bmatrix}0 \\ 1\end{bmatrix}$; then $$||Ax|| = \left\|\begin{bmatrix}0\\ 2\end{bmatrix}\right\| = \sqrt{0^2+2^2} = 2 > 1 = ||x||,$$ but $A$ is clearly invertible (non-singular).

Option 2: (False) Consider the matrix $$A = \begin{bmatrix}\sqrt 2 & 0 \\ 0 & 0\end{bmatrix};$$ then $A^2 = \sqrt 2 A$. Hence this matrix satisfies $A^2 = \lambda A$ with $\lambda = \sqrt 2$. Let $x = \begin{bmatrix}1\\ 1\end{bmatrix}$; then $$||x|| = \sqrt{1^2+1^2} = \sqrt 2 = \left\|\begin{bmatrix}\sqrt 2 \\ 0\end{bmatrix}\right\| = ||Ax||.$$ But $\lambda = \sqrt 2 \ne \pm 1$.

All the options are false. The question was flawed, and full marks were awarded to everybody.
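For what it's worth, the counterexamples above are easy to check numerically (a quick NumPy sketch, not part of the original post):

```python
import numpy as np

A1 = np.array([[0., 1.],
               [0., 0.]])                 # nilpotent: A1 @ A1 = 0 = 0 * A1, so lambda = 0
x = np.array([0., 1.])
print(np.linalg.norm(A1 @ x))             # 1.0, while |lambda| * ||x|| = 0  -> option 1 fails

A2 = np.array([[np.sqrt(2), 0.],
               [0., 0.]])                 # A2 @ A2 = sqrt(2) * A2, so lambda = sqrt(2)
print(np.allclose(A2 @ A2, np.sqrt(2) * A2))       # True
y = np.array([1., 1.])
print(np.linalg.norm(A2 @ y), np.linalg.norm(y))   # both sqrt(2), yet lambda != +-1 -> option 2 fails

A3 = 2 * np.eye(2)                        # A3 @ A3 = 2 * A3, and A3 is invertible
print(np.linalg.norm(A3 @ x) > np.linalg.norm(x))  # True, yet A3 is non-singular -> option 4 fails
```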
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877274632453918, "perplexity": 306.6295893885523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894203.73/warc/CC-MAIN-20201027140911-20201027170911-00445.warc.gz"}
https://www.physicsforums.com/threads/two-particles-spin-hamiltonian.920638/
# Two particles' spin Hamiltonian

1. Jul 20, 2017

### cacofolius

1. The problem statement, all variables and given/known data

Hi, I'm trying to familiarize myself with the bra-ket notation and quantum mechanics. I have to find the Hamiltonian's eigenvalues and eigenstates.

$H=(S_{1z}+S_{2z})+S_{1x}S_{2x}$

2. Relevant equations

$S_{z} \vert+\rangle =\hbar/2\vert+\rangle$

$S_{z}\vert-\rangle =-\hbar/2\vert-\rangle$

$S_{x} \vert+\rangle =\hbar/2\vert-\rangle$

$S_{x} \vert-\rangle =\hbar/2\vert+\rangle$

The basis states are $\vert++\rangle,\vert+-\rangle, \vert-+\rangle, \vert--\rangle$

3. The attempt at a solution

What I did was apply the Hamiltonian to each basis ket:

$H\vert++\rangle =(S_{1z}+S_{2z})\vert++\rangle + S_{1x}S_{2x}\vert++\rangle = \hbar/2\vert++\rangle + \hbar/2\vert++\rangle + \hbar/2\vert-+\rangle . \hbar/2\vert+-\rangle = \hbar/2\vert++\rangle$

$H\vert+-\rangle = 0$

$H\vert-+\rangle = 0$

$H\vert--\rangle = -\hbar/2\vert--\rangle$

My questions:

1) Is it right to consider $\vert-+\rangle . \vert+-\rangle = 0$ (since they're orthogonal states)? Because they're both ket vectors (unlike the more familiar $\langle a\vert b\rangle$).

2) In that case, is the basis also the Hamiltonian's, with eigenvalues $\hbar/2, -\hbar/2, 0$ (degenerate)?

2. Jul 20, 2017

### gimak

Post this in the advanced physics homework section.

3. Jul 24, 2017

### blue_leaf77

No, that's not right. Moreover, $S_{1x}S_{2x}|++\rangle \neq \hbar/2\vert-+\rangle . \hbar/2\vert+-\rangle$. It's like you are producing four electrons out of two electrons. The operator of the first particle only acts on the first entry of the ket, and that of the second particle acts on the second entry.
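A compact way to get the spectrum, shown here as an illustrative NumPy sketch (not from the thread; it sets ħ = 1 and orders the basis as |++⟩, |+−⟩, |−+⟩, |−−⟩): build each single-particle operator with a Kronecker product so that it only touches its own slot, which is exactly the point made in reply #3.

```python
import numpy as np

# Spin-1/2 operators with hbar = 1, in the basis {|+>, |->}
Sz = 0.5 * np.array([[1., 0.], [0., -1.]])
Sx = 0.5 * np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

# Operator of particle 1 acts on the first slot, particle 2 on the second slot
S1z, S2z = np.kron(Sz, I2), np.kron(I2, Sz)
S1x, S2x = np.kron(Sx, I2), np.kron(I2, Sx)

H = (S1z + S2z) + S1x @ S2x
print(np.round(np.linalg.eigvalsh(H), 4))
# [-1.0308 -0.25    0.25    1.0308]  i.e. +-1/4 and +-sqrt(17)/4 (with hbar = 1)
```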
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576717019081116, "perplexity": 1360.768457368946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550986.47/warc/CC-MAIN-20171214202730-20171214222730-00093.warc.gz"}
http://math.stackexchange.com/questions/32239/square-root-of-a-matrix
# Square Root of a Matrix From a problem set I'm working on: (Edit 04/11 - I fudged a sign in my matrix...) Let $A(t) \in M_3(\mathbb{R})$ be defined: $$A(t) = \left( \begin{array}{crc} 1 & 2 & 0 \\ 0 & -1 & 0 \\ t-1 & -2 & t \end{array} \right).$$ For which $t$ does there exist a $B \in M_3(\mathbb{R})$ such that $B^2 = A$? In a previous part of the problem, I showed that $A(t)$ could be diagonalized into a real diagonal matrix for all $t \in \mathbb{R}$, with eigenvalues $1,-1,t$. A few things I've thought of: • The matrix is not positive-semidefinite, so the general form of the square root does not work. (Is positive-definiteness a necessary condition for the existence of a square root?) • Since $A = B^2$, then $\det(B^2) = (\det B)^2 = \det A$. So $\det A \geq 0$ for there to be a real-valued square root, forcing $t \leq 0$ to be necessary. • My professor suggested that, since $B^2$ fits the characteristic polynomial of $A$, $\mu_A(x) = (x-1)(x+1)(x-a)$, then the minimal polynomial of $B$ must divide $\mu_A(x^2) = (x^2-1)(x^2+1)(x^2-a) = (x-1)(x+1)(x^2+1)(x^2-a)$. Examining the possible minimal polynomials, one can find the rational canonical form, square it, and check whether the eigenvalues match. This probably could get me the right answer, but I am fairly sure that there is an alternative to a "proof by exhaustion". - You should include into the formulation of the problem that $t\in\mathbb R$ and B has real values. (You did not write this explicitly, but from what you tried to get the solutions, I am guessing you want to work in reals.) –  Martin Sleziak Apr 11 '11 at 6:18 About your first bullet point: You probably meant positive-semidefiniteness? That's not a necessary condition for the existence of a square root -- a counterexample is $$\left(\begin{array}{cc}1&0\\2c&1\end{array}\right)$$ with $|c|>1$, with square root $$\left(\begin{array}{cc}1&0\\c&1\end{array}\right)\;.$$ –  joriki Apr 11 '11 at 7:23 About your second bullet point: You probably meant $t\le0$? –  joriki Apr 11 '11 at 7:27 $B$ cannot be real, as its eigenvalues must be $\pm i$, $\pm 1$, $\pm\sqrt{t}$ (one choice for each $\pm\mathit{something}$). If $B$ were real then its eigenvalues would have to be real or in complex conjugate pairs - your only chance would be for $t=-1$ (I didn't check, so I don't know whether then a real $B$ exists). –  user8268 Apr 11 '11 at 8:11 @Martin and @joriki: Thank you, I corrected the problem statement. –  Michael Chen Apr 11 '11 at 14:56 Assume that there exists a real number $t$ and a real matrix $B$ such that $A(t)=B^2$. Note that $-1$ is an eigenvalue of $A(t)$, hence $A(t)+I=(B-\mathrm{i}I)(B+\mathrm{i}I)$ is singular. This implies that $B-\mathrm{i}I$ or $B+\mathrm{i}I$ is singular. Since $B$ is real valued, this means that both $B-\mathrm{i}I$ and $B+\mathrm{i}I$ are singular. Likewise, $1$ is an eigenvalue of $A(t)$, hence $A(t)-I=(B-I)(B+I)$ is singular. This implies that $B-I$ or $B+I$ is singular. Hence the eigenvalues of $B$ are $\{\mathrm{i},-\mathrm{i},1\}$ or $\{\mathrm{i},-\mathrm{i},-1\}$. In both cases, $B$ has three distinct eigenvalues hence $B$ is diagonalizable on $\mathbb{C}$. This implies that the eigenvalues of $A(t)$ are $-1$ (twice) and $1$ (once) and that $A(t)$ is diagonalizable as well. Hence $t=-1$. We now look at the matrix $A(-1)$. One can check that $A(-1)$ is diagonalizable hence $A(-1)$ is similar to a diagonal matrix with diagonal $(1,-1,-1)$. 
Both $I_1$ (the $1\times1$ matrix with coefficient $1$) and $-I_2$ (the $2\times2$ diagonal matrix with diagonal coefficients $-1$) have square roots: take $I_1$ for $I_1$ and the rotation matrix $\begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}$ for $-I_2$. Hence $A(-1)$ is a square. Finally $A(t)$ is a square if and only if $t=-1$.

- I believe I follow. I was a little confused because I showed in a previous part that $A(t)$ is diagonalizable in $M_3(\mathbb{R})$ for all real $t$, and then I realized I missed a sign in my original problem! So I'm trying to continue where you left off. –  Michael Chen Apr 12 '11 at 2:31 If $A(-1)$ is diagonalizable, then I can construct a matrix $B$ with eigenvalues $1,i,-i$ such that $B^2 = D$, where $D = P^{-1}AP$ is the diagonalization of $A$. If I use change of basis, then I can find $PBP^{-1}$ and see if that is real. Is that the way to go about doing it? –  Michael Chen Apr 12 '11 at 2:36 No that is not. The matrix $M=A(-1)$ is defective because the algebraic multiplicity of the eigenvalue $-1$ is $2$ while its geometric multiplicity is $1$. See en.wikipedia.org/wiki/Generalized_eigenspace –  Did Apr 12 '11 at 5:22 In the corrected matrix (where the 3,2 entry has value -2, not 2 -- my fault), I believe $A(-1)$ does have a full basis of eigenvectors: $(0,0,1)$ and $(1,-1,0)$ associated with $\lambda = -1$, and $(1,0,-1)$ associated with $\lambda = 1$. –  Michael Chen Apr 12 '11 at 5:56 See edit. –  Did Apr 12 '11 at 6:08

A common exercise is to first diagonalize a matrix with positive eigenvalues, and then find a "square root" for the matrix as you have been asked to do. This is trivial since if $A=PDP^{-1}$ and $B=P\sqrt{D}P^{-1}$ then $A=B^2$, where $\sqrt{D}$, of course, denotes the diagonal matrix $D$ after taking the square root of its entries (which is why we need positive eigenvalues if you are working with real matrices). Thus your question follows from what you have already determined about diagonalizing the matrix. A not so trivial result is to prove that a matrix with positive eigenvalues has a square root regardless of whether or not it can be diagonalized.

- If I understand you correctly, then the triangular Pascal matrix is an example for this. You can easily define a square root, but if you try to diagonalize you get zero eigenvectors. –  Gottfried Helms Apr 11 '11 at 8:08 So if I understand you correctly, there is no square root precisely because the eigenvalues are not nonnegative? –  Michael Chen Apr 11 '11 at 14:53 This answer is irrelevant: the eigenvalues are not positive. –  Plop Apr 11 '11 at 15:49 Any invertible matrix has a square root over the complex numbers. A real matrix having a negative eigenvalue with odd multiplicity (or more generally an odd number of Jordan blocks of some size) has no real square root. Some non-invertible matrices have no square root. –  Robert Israel Apr 12 '11 at 2:46
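The construction in the accepted answer is easy to carry out explicitly. A small NumPy sketch (not from the original thread; it uses the eigenvectors listed in the comments, with $(1,0,-1)$ as the eigenvector for the eigenvalue $1$): assemble the $1\times1$ block $1$ and the 90° rotation block — whose square is $-I_2$ — in the eigenbasis of $A(-1)$, then conjugate back.

```python
import numpy as np

A = np.array([[ 1.,  2.,  0.],
              [ 0., -1.,  0.],
              [-2., -2., -1.]])        # A(-1)

# Columns: eigenvector for eigenvalue 1, then a basis of the eigenvalue -1 eigenspace
P = np.array([[ 1.,  1., 0.],
              [ 0., -1., 0.],
              [-1.,  0., 1.]])

S = np.zeros((3, 3))
S[0, 0] = 1.                            # square root of the 1x1 block [1]
S[1:, 1:] = [[0., 1.], [-1., 0.]]       # rotation block; its square is -I_2

B = P @ S @ np.linalg.inv(P)            # a real square root of A(-1)
print(np.allclose(B @ B, A))            # True
```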
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9639191031455994, "perplexity": 175.759046677007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986451.45/warc/CC-MAIN-20150728002306-00295-ip-10-236-191-2.ec2.internal.warc.gz"}