http://mathhelpforum.com/calculus/217206-derivative-arcsec-function.html
# Math Help - The derivative of the arcsec function

1. ## The derivative of the arcsec function

I have got a funny result for a problem: given the function $f(x) = 5\,\mathrm{arcsec}(6x)$, by differentiating I got $f'(x) = \frac{5}{6x\sqrt{(6x)^2 - 1}} \cdot 6 = \frac{30}{6x\sqrt{36x^2 - 1}}$, and the answer is considered wrong..., but calculating f(5) from what I obtained gave me a correct answer. Is that a coincidence, or am I missing something? Thanks

2. ## Re: The derivative of the arcsec function

That "6x" is wrong. You are using the chain rule, but you need to multiply by the derivative of 6x, which is 6, not the 6x itself.

3. ## Re: The derivative of the arcsec function

Do you mean under the square root too, or not? The right answer would be like this: $f'(x) = \frac{5}{x^2\sqrt{36 - 1/x^2}}$. Is it a kind of factorization, and why do I have to take the $x^2$ out from under the square root? Thanks

4. ## Re: The derivative of the arcsec function

Originally Posted by dokrbb
I have got a funny result for a problem: given the function $f(x) = 5\,\mathrm{arcsec}(6x)$, by differentiating I got $f'(x) = \frac{5}{6x\sqrt{(6x)^2 - 1}} \cdot 6 = \frac{30}{6x\sqrt{36x^2 - 1}}$, and the answer is considered wrong..., but calculating f(5) from what I obtained gave me a correct answer. Is that a coincidence, or am I missing something? Thanks

\displaystyle \begin{align*} y &= 5\,\textrm{arcsec}\,{(6x)} \\ \frac{y}{5} &= \textrm{arcsec}\,{(6x)} \\ \sec{\left( \frac{y}{5} \right) } &= 6x \\ \left[ \cos{\left( \frac{y}{5} \right) } \right] ^{-1} &= 6x \\ \frac{d}{dx} \left\{ \left[ \cos{ \left( \frac{y}{5} \right) } \right] ^{-1} \right\} &= \frac{d}{dx} \left( 6x \right) \\ \frac{1}{5} \left[ -\sin{ \left( \frac{y}{5} \right) } \right] \left\{ -\left[ \cos{ \left( \frac{y}{5} \right) } \right] ^{-2} \right\} \frac{dy}{dx} &= 6 \\ \frac{1}{5} \tan{ \left( \frac{y}{5} \right) } \sec{ \left( \frac{y}{5} \right) } \, \frac{dy}{dx} &= 6 \\ \frac{1}{5} \, \sqrt{ \sec ^2{ \left( \frac{y}{5} \right) } -1 } \, \sec{ \left( \frac{y}{5} \right) } \,\frac{dy}{dx} &= 6 \\ \frac{1}{5} \, \sqrt{ \left( 6x \right) ^2 - 1 } \left( 6x \right) \frac{dy}{dx} &= 6 \\ \frac{dy}{dx} &= \frac{30}{6x\,\sqrt{ 36x^2 - 1 }} \\ \frac{dy}{dx} &= \frac{5}{x\,\sqrt{ 36x^2 - 1 }} \end{align*}

I completely agree with the answer that you gave.

5. ## Re: The derivative of the arcsec function

Originally Posted by HallsofIvy
That "6x" is wrong. You are using the chain rule, but you need to multiply by the derivative of 6x, which is 6, not the 6x itself.

The OP DID multiply by the 6 you are referring to; it's right after the first fraction...

6. ## Re: The derivative of the arcsec function

Originally Posted by Prove It
$\frac{dy}{dx} = \frac{30}{6x\,\sqrt{ 36x^2 - 1 }} = \frac{5}{x\,\sqrt{ 36x^2 - 1 }}$
I completely agree with the answer that you gave.

Thanks a lot for this detailed solution. However, the program we enter our answers into "asked" me imperatively to go a step further and factor $x^2$ out from under the radical; it considered the answer incorrect until I entered the following: $\frac{dy}{dx} = \frac{5}{x^2\,\sqrt{ 36 - 1/x^2 }}$. IMO it's kind of useless... maybe there is a reason (I'm serious, actually - it's not sarcasm), from a deep mathematical point of view, that I, as a novice, am not aware of... what do you think?

7. ## Re: The derivative of the arcsec function

Originally Posted by dokrbb
Thanks a lot for this detailed solution. However, the program we enter our answers into "asked" me imperatively to go a step further and factor $x^2$ out from under the radical; it considered the answer incorrect until I entered the following: $\frac{dy}{dx} = \frac{5}{x^2\,\sqrt{ 36 - 1/x^2 }}$. IMO it's kind of useless... maybe there is a reason (I'm serious, actually - it's not sarcasm), from a deep mathematical point of view, that I, as a novice, am not aware of... what do you think?

I think the version your program wants is uglier, bringing in more fractions for no apparent reason... It's also wrong, considering that $\displaystyle \sqrt{x^2} = |x|$, NOT $x$.
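Not part of the thread: a quick symbolic cross-check of the forms discussed above. The tooling choice (SymPy) and the restriction to x > 0 are mine, not the posters'; the restriction matters because, as the last reply points out, the general formula involves |x|.

```python
# A sketch verifying that the thread's answer and the homework system's
# preferred form agree for x > 0 (SymPy assumed as the tool; not used in the thread).
import sympy as sp

x = sp.symbols('x', positive=True)      # restrict to x > 0 so that |x| = x
f = 5 * sp.asec(6 * x)
fprime = sp.diff(f, x)

form_thread  = 5 / (x * sp.sqrt(36 * x**2 - 1))       # simplified answer from post 4
form_program = 5 / (x**2 * sp.sqrt(36 - 1 / x**2))    # form demanded by the homework program

print(sp.simplify(fprime))              # SymPy's own expression for the derivative
print(fprime.equals(form_thread))       # True
print(fprime.equals(form_program))      # True: the two forms coincide when x > 0
```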
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000035762786865, "perplexity": 1664.192066037881}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776428349.3/warc/CC-MAIN-20140707234028-00000-ip-10-180-212-248.ec2.internal.warc.gz"}
http://www.zora.uzh.ch/id/eprint/125107/
# Local polynomial regression: optimal kernels and asymptotic minimax efficiency

Fan, Jianqing; Gasser, Theo; Gijbels, Irène; Brockmann, Michael; Engel, Joachim (1997). Local polynomial regression: optimal kernels and asymptotic minimax efficiency. Annals of the Institute of Statistical Mathematics, 49(1):79-99.

## Abstract

We consider local polynomial fitting for estimating a regression function and its derivatives nonparametrically. This method possesses many nice features, among which automatic adaptation to the boundary and adaptation to various designs. A first contribution of this paper is the derivation of an optimal kernel for local polynomial regression, revealing that there is a universal optimal weighting scheme. Fan (1993, Ann. Statist., 21, 196-216) showed that the univariate local linear regression estimator is the best linear smoother, meaning that it attains the asymptotic linear minimax risk. Moreover, this smoother has high minimax risk. We show that this property also holds for the multivariate local linear regression estimator. In the univariate case we investigate minimax efficiency of local polynomial regression estimators, and find that the asymptotic minimax efficiency for commonly-used orders of fit is 100% among the class of all linear smoothers. Further, we quantify the loss in efficiency when going beyond this class.

## Statistics

### Citations

35 citations in Web of Science®
40 citations in Scopus®
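Not part of the record above: a minimal sketch of the univariate local linear smoother the abstract discusses, written against synthetic data with an Epanechnikov kernel and a hand-picked bandwidth (those choices are mine for illustration, not taken from the paper).

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of the regression function at x0.

    Fits a kernel-weighted least-squares line centred at x0 and returns its
    intercept, i.e. the fitted value at x0.
    """
    u = (x - x0) / h
    w = np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u**2), 0.0)   # Epanechnikov weights
    X = np.column_stack([np.ones_like(x), x - x0])            # local design matrix
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta[0]

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, size=200)

grid = np.linspace(0.0, 1.0, 5)
fit = [local_linear(x0, x, y, h=0.1) for x0 in grid]
print(np.round(fit, 2))                        # estimates, including at the boundary points
print(np.round(np.sin(2 * np.pi * grid), 2))   # true regression function on the same grid
```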
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751174807548523, "perplexity": 1126.2730009285103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589902.8/warc/CC-MAIN-20180717203423-20180717223423-00475.warc.gz"}
https://mathematics.huji.ac.il/event/small-gaps-between-primes
# Small gaps between primes

## Lecturer: James Maynard (Université de Montréal)

It is believed that there should be infinitely many pairs of primes which differ by 2; this is the famous twin prime conjecture. More generally, it is believed that for every positive integer m there should be infinitely many sets of m primes, with each set contained in an interval of size roughly m log m. We will introduce a refinement of the GPY sieve method for studying these problems. This refinement will allow us to show (amongst other things) that liminf_n (p_{n+m} - p_n) < ∞ for any integer m, and so there are infinitely many bounded-length intervals containing m primes.

## Date: Thu, 01/05/2014 - 14:30
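Not part of the seminar announcement, and unrelated to the sieve machinery it describes: a toy script that simply counts consecutive primes below a modest bound differing by 2, the pattern the twin prime conjecture asserts occurs infinitely often.

```python
# Toy illustration only: count pairs of consecutive primes below 10^6 that differ by 2.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(1_000_000)
twins = sum(1 for a, b in zip(ps, ps[1:]) if b - a == 2)
print(f"primes below 10^6: {len(ps)}")
print(f"pairs of consecutive primes differing by 2: {twins}")
```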
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9065046310424805, "perplexity": 393.73324111381936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00292.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-essence-of-universe-energy-or-matter.77412/
# What is the essence of universe: energy or matter?

1. May 30, 2005 ### wangasu

What is the essence of the universe: energy or matter? Who can tell me the relationship of matter (mass) and energy? It seems that they are not independent. As I think about this question, others come up, such as whether matter can be separated endlessly, and what the elementary particles are. So far, we know that the quark is the smallest particle. But can we assert that the quark will be able to withstand stronger high-energy bombardment, which is inaccessible today? I have a feeling that eventually an elementary particle might be a smallest energy unit. According to this, it seems that matter is energy in nature. Is this true? Thanks.

Last edited: May 30, 2005

2. May 30, 2005 Staff Emeritus

In the rest frame of any particle, its energy is given by $$E = mc^2$$, where E is its energy, m is its mass, and c is the speed of light. If you're not familiar with the notation, the two things on the right of the equals sign are multiplied together, mass times the square of the speed of light. For any other unaccelerated observer moving with velocity v relative to the particle (or seeing the particle move with velocity v), its energy is $$E_v = \gamma mc^2$$, where $$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$. The factor $$\gamma$$ allows both for the fact that the particle has a momentum in the observer's frame of reference, and also for the relativistic change to the energy because of the speed difference.

Last edited: May 30, 2005

3. May 30, 2005 ### wangasu

Thank you, selfAdjoint. Einstein's mass-energy equation just tells us how the REST energy is related to mass. We also know that there are other forms of energy, like phonons and electromagnetic waves, which are mass independent. Although these kinds of energy can be absorbed by matter so as to change the state of matter, they are not substantiated after all. So, can I say that matter is the carrier of SOME of the energy, or, to some extent, that matter is a partial manifestation of the total energy in the universe?

Last edited: May 31, 2005

4. May 31, 2005 ### hexhunter

The quark is far larger than the electron, I think; I have only known such particles to be described through their mass, not their 'size'.

5. May 31, 2005 ### nightcleaner

Hi Wangasu

Einstein's famous formula tells us that mass and energy are the same thing, related by the speed of light, which is a constant. The gamma term in the second equation selfAdjoint gave you tells us how the observed mass of a body changes with velocity, as measured by the observer.

It may be a mistake to think of mass as the carrier of energy. It is not known what mass really is, but I for one am almost certain it does not consist of tiny little balls of hard heavy stuff. Many students of this question have found it best to study the math and not get too involved in thinking about what it "really" is. How does it behave? That is something we can measure and talk about meaningfully.

Hi, Hexhunter. Your idea that quarks are bigger than electrons may have no meaning at such tiny scales. The Heisenberg uncertainty principle makes such comparisons useless.

Thanks for being here, Richard

6. May 31, 2005
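Not part of the thread: a small numerical companion to the formulas quoted by selfAdjoint, evaluating the rest energy and the total energy E = γmc² for an electron at a few speeds (the choice of particle and speeds is mine, for illustration).

```python
# Rest energy and total energy E = gamma * m * c^2 for an electron at a few speeds.
import math

c = 299_792_458.0          # speed of light, m/s
m_e = 9.109_383_7e-31      # electron mass, kg
J_PER_MEV = 1.602_176_634e-13

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(f"rest energy: {m_e * c**2 / J_PER_MEV:.4f} MeV")   # about 0.511 MeV
for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * c
    E = gamma(v) * m_e * c**2
    print(f"v = {frac:.2f} c  ->  gamma = {gamma(v):6.3f},  E = {E / J_PER_MEV:.4f} MeV")
```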
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9663909673690796, "perplexity": 438.64445686333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215393.63/warc/CC-MAIN-20180819204348-20180819224348-00669.warc.gz"}
https://www.santafe.edu/research/results/papers/7312-synthetic-circuits-reveal-how-mechanisms-of-g
#### Schaerli, Y; Jimenez, A; Duarte, JM; Mihajlovic, L; Renggli, J; Isalan, M; Sharpe, J; Wagner, A

Phenotypic variation is the raw material of adaptive Darwinian evolution. The phenotypic variation found in organismal development is biased towards certain phenotypes, but the molecular mechanisms behind such biases are still poorly understood. Gene regulatory networks have been proposed as one cause of constrained phenotypic variation. However, most pertinent evidence is theoretical rather than experimental. Here, we study evolutionary biases in two synthetic gene regulatory circuits expressed in Escherichia coli that produce a gene expression stripe, a pivotal pattern in embryonic development. The two parental circuits produce the same phenotype, but create it through different regulatory mechanisms. We show that mutations cause distinct novel phenotypes in the two networks and use a combination of experimental measurements, mathematical modelling and DNA sequencing to understand why mutations bring forth only some but not other novel gene expression phenotypes. Our results reveal that the regulatory mechanisms of networks restrict the possible phenotypic variation upon mutation. Consequently, seemingly equivalent networks can indeed be distinct in how they constrain the outcome of further evolution.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431954026222229, "perplexity": 4929.3782681188195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00719.warc.gz"}
https://geometriadominicana.blogspot.com/2022/06/
## Sunday, June 19, 2022

### $\sum_{cyc}\tan\frac\alpha2\tan\frac\beta2\geq4$ for a cyclic quadrilateral

Let $ABCD$ be a cyclic quadrilateral with sides $a$, $b$, $c$ and $d$. Denote by $s$ the semiperimeter and let $\angle{DAB}=\alpha$, $\angle{ABC}=\beta$, $\angle{BCD}=\gamma$ and $\angle{CDA}=\delta$. Then the following inequality holds

$$\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}+\tan{\frac{\beta}{2}}\tan{\frac{\gamma}{2}}+\tan{\frac{\gamma}{2}}\tan{\frac{\delta}{2}}+\tan{\frac{\delta}{2}}\tan{\frac{\alpha}{2}}\geq4.\tag{1}$$

Proof. Substituting from the half-angle formula for the tangent we have that

$$\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}=\sqrt{\frac{(s-a)(s-d)}{(s-b)(s-c)}}\cdot{\sqrt{\frac{(s-a)(s-b)}{(s-c)(s-d)}}}=\frac{s-a}{s-c}.$$

Similarly,

$$\tan{\frac{\beta}{2}}\tan{\frac{\gamma}{2}}=\frac{s-b}{s-d},\qquad\tan{\frac{\gamma}{2}}\tan{\frac{\delta}{2}}=\frac{s-c}{s-a},\qquad\tan{\frac{\delta}{2}}\tan{\frac{\alpha}{2}}=\frac{s-d}{s-b}.$$

Thus, the left-hand side of $(1)$ can be rewritten as follows

$$\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}+\tan{\frac{\beta}{2}}\tan{\frac{\gamma}{2}}+\tan{\frac{\gamma}{2}}\tan{\frac{\delta}{2}}+\tan{\frac{\delta}{2}}\tan{\frac{\alpha}{2}}=\frac{s-a}{s-c}+\frac{s-b}{s-d}+\frac{s-c}{s-a}+\frac{s-d}{s-b}.$$

But

$$\frac{s-a}{s-c}+\frac{s-b}{s-d}+\frac{s-c}{s-a}+\frac{s-d}{s-b}=\frac{a-c}{s-a}+\frac{b-d}{s-b}+\frac{c-a}{s-c}+\frac{d-b}{s-d}+4.\tag{2}$$

Since $\frac{a-c}{s-a}+\frac{c-a}{s-c}=\frac{4(a-c)^2}{(-a+b+c+d)(a+b-c+d)}$, and similarly for $\frac{b-d}{s-b}+\frac{d-b}{s-d}$, the sum $\frac{a-c}{s-a}+\frac{b-d}{s-b}+\frac{c-a}{s-c}+\frac{d-b}{s-d}$ is nonnegative. Hence,

$$\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}+\tan{\frac{\beta}{2}}\tan{\frac{\gamma}{2}}+\tan{\frac{\gamma}{2}}\tan{\frac{\delta}{2}}+\tan{\frac{\delta}{2}}\tan{\frac{\alpha}{2}}\geq4.\qquad\square$$

Notice equality holds when $ABCD$ is a rectangle. From $(2)$ it also follows that

$$\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}+\tan{\frac{\beta}{2}}\tan{\frac{\gamma}{2}}+\tan{\frac{\gamma}{2}}\tan{\frac{\delta}{2}}+\tan{\frac{\delta}{2}}\tan{\frac{\alpha}{2}}>\frac{a-c}{s-a}-\frac{a-c}{s-c}+\frac{b-d}{s-b}-\frac{b-d}{s-d}.$$

A huge list of inequalities can be seen at Cut-the-knot.org.
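Not part of the post: a quick random sanity check of the reduced form of the inequality, (s-a)/(s-c) + (s-b)/(s-d) + (s-c)/(s-a) + (s-d)/(s-b) >= 4, which the tangent products were shown to equal. Any four positive lengths with each one shorter than the sum of the other three are realizable as a cyclic quadrilateral, so sampling such quadruples suffices here.

```python
import random

random.seed(1)
worst = float("inf")
for _ in range(100_000):
    a, b, c, d = (random.uniform(0.1, 10.0) for _ in range(4))
    if max(a, b, c, d) >= (a + b + c + d) - max(a, b, c, d):
        continue  # not realizable as a quadrilateral
    s = (a + b + c + d) / 2
    val = (s - a) / (s - c) + (s - b) / (s - d) + (s - c) / (s - a) + (s - d) / (s - b)
    worst = min(worst, val)

print(f"smallest value found over random quadruples: {worst:.6f}")  # stays >= 4
```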
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9954704642295837, "perplexity": 455.26471560827935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034877.9/warc/CC-MAIN-20220625065404-20220625095404-00356.warc.gz"}
https://darkmatterdarkenergy.com/tag/xmm-newton/
# Tag Archives: XMM-Newton

## Does Dark Energy Vary with Time?

Einstein introduced the concept of dark energy 100 years ago. The Concordance Lambda-Cold Dark Matter cosmology appears to fit observations of the cosmic microwave background and other cosmological observations, including surveys of large-scale galaxy grouping, exceedingly well. In this model, Lambda is shorthand for the dark energy in the universe. It was introduced as the Greek letter Λ into the equations of general relativity, by Albert Einstein, as an unvarying cosmological constant. Measurements of Λ indicate that dark energy accounts for about 70% of the total energy content of the universe. The remainder is found in dark matter and ordinary matter, and about 5/6 of that is in the form of dark matter. Alternative models of gravity, with extra gravity in very low acceleration environments, may replace apparent dark matter with this extra gravity, perhaps due to interaction between dark energy and ordinary matter.

The key point about dark energy is that while it has a positive energy, it rather strangely has a negative pressure. In the tensor equations of general relativity the pressure terms act as a negative gravity, driving an accelerated expansion of the universe. In fact our universe is headed toward a state of doubling in scale in each dimension every 11 or 12 billion years. In the next trillion years we are looking at 80 or 90 such repeated doublings. That assumes that dark energy is constant per volume over time, with a value equivalent to two proton-antiproton pair annihilations per cubic meter (4 GeV / m³). But is it?

The Dark Energy Survey results seem to say so. This experiment looked at 26 million galaxies for their clustering patterns, and also gravitational lensing (Einstein taught us that mass bends light paths). They determined the parameter w for dark energy and found it to be consistent with -1.0, as expected for the cosmological constant model of unvarying dark energy. See this blog for details: https://darkmatterdarkenergy.com/2017/08/10/dark-energy-survey-first-results-canonical-cosmology-supported/

The pressure-energy density relation is $P = w \cdot \rho \cdot c^2$. The parameter w elucidates the relation between the energy density given by ρ and the pressure P. This is called the equation of state. Matter and radiation have w >= 0. In order to have dark energy with a negative pressure dominating, w should be < -1/3. And w = -1 gives us the cosmological constant form.

(Image credit: www.scholarpedia.org)

Cosmologists seek to determine w, and whether it varies over time scales of billions of years. The Concordance model is not very well tested at high redshifts with z > 1 (corresponding to epochs of the universe less than half the current age) other than with the cosmic microwave background data. Recently two Italian researchers, Risaliti and Lusso, have examined datasets of high-redshift quasars to investigate whether the Concordance model fits. Typically supernovae are employed for the redshift-distance relation, and cosmological models are tested against the observed relationship, known as the Hubble diagram. The authors use X-ray and ultraviolet fluxes of quasars to extend the diagram to high redshifts (greater distances, earlier epochs), and calibrate observed quasar luminosities with the supernovae data sets. Their analysis drew from a sample of 1600 quasars with redshifts up to 5, including a new sample of 30 high-redshift z ~ 3 quasars observed with the European XMM-Newton satellite.
They claim a deviation of 4 standard deviations for z > 2, a reasonably high significance. Models with a varying w include quintessence models, with time-varying scalar fields. If w decreases below -1, it is known as phantom energy. Their results are suggestive of a value of w < -1, corresponding to a dark or phantom energy increasing with time.

For convenience cosmologists introduce a second parameter for possible evolution in w, writing it as w = w0 + wa*(1 - a), where a, the scale factor, equals 1/(1+z), and a = 1 at the present day. The best-fit results for their analysis are w0 = -1.4 and wa ~ 1, but these results have large errors, as shown in Figure 4 of their paper. Their results are within the red (2 standard deviation, or σ) and orange (3σ) contours. The outer 3σ contours almost touch the cosmological constant point that has w0 = -1 and wa = 0.

These are intriguing results that require further investigation. They are antithetical to quintessence models, and apparently in tension with a simple cosmological constant. The researchers plan further analysis in future work by including Baryon Acoustic Oscillation (large-scale galaxy clustering) measurements at z > 2.

References

https://darkmatterdarkenergy.com/2017/08/10/dark-energy-survey-first-results-canonical-cosmology-supported/ - Results from the Dark Energy Survey of galaxies

Risaliti, G. and Lusso, E. (2018). Cosmological constraints from the Hubble diagram of quasars at high redshifts. https://arxiv.org/abs/1811.02590

## Dark Matter: Made of Sterile Neutrinos?

Composite image of the Bullet Group showing galaxies, hot gas (shown in pink) and dark matter (indicated in blue). Credit: ESA / XMM-Newton / F. Gastaldello (INAF/IASF, Milano, Italy) / CFHTLS

What's more elusive than a neutrino? Why, a sterile neutrino, of course. In the Standard Model of particle physics there are 3 types of "regular" neutrinos. The ghost-like neutrinos are electrically neutral particles with 1/2 integer spins and very small masses. Neutrinos are produced in weak interactions, for example when a neutron decays to a proton and an electron. The 3 types are paired with the electron and its heavier cousins, and are known as electron neutrinos, muon neutrinos, and tau neutrinos (νe, νμ, ντ).

A postulated extension to the Standard Model would allow a new type of neutrino, known as a sterile neutrino. "Sterile" refers to the fact that this hypothetical particle would not feel the standard weak interaction, but would couple to regular neutrino oscillations (neutrinos oscillate among the 3 types, and until this was realized there was consternation around the low number of solar neutrinos detected). Sterile neutrinos are more ghostly than regular neutrinos! The sterile neutrino would be a neutral particle, like other neutrino types, and would be a fermion, with spin 1/2. The number of types, and the respective masses, of sterile neutrinos (assuming they exist) is unknown. Since they are electrically neutral and do not feel the standard weak interaction they are very difficult to detect. But the fact that they are very hard to detect is just what makes them candidates for dark matter, since they still interact gravitationally due to their mass.

What about regular neutrinos as the source of dark matter? The problem is that their masses are too low, less than 1/3 of an eV (electron-Volt) total for the three types.
They are thus "too hot" (speeds and velocity dispersions too high, being relativistic) to explain the observed properties of galaxy formation and clumping into groups and clusters. The dark matter should be "cold" or non-relativistic, or at least no more than "warm", to correctly reproduce the pattern of galaxy groups, filaments, and clusters we observe in our Universe.

Constraints can be placed on the minimum mass for a sterile neutrino to be a good dark matter candidate. Observations of the cosmic microwave background and of hydrogen Lyman-alpha emission in quasar spectra have been used to set a lower bound of 2 keV for the sterile neutrino's mass, if it is the predominant component of dark matter. A sterile neutrino with this mass or larger is expected to have a decay channel into a photon with half of the rest-mass energy and a regular (active) neutrino with half the energy.

A recent suggestion is that an X-ray emission feature seen at 3.56 keV (kilo-electron Volts) from galaxy clusters is a result of the decay of sterile neutrinos into photons with that energy plus active (regular) neutrinos with similar energy. This X-ray emission line has been seen in a data set from the XMM-Newton satellite that stacks results from 73 clusters of galaxies together. The line was detected in 2 different instruments with around 4 or 5 standard deviations significance, so the existence of the line itself is on a rather strong footing. However, it is necessary to prove that the line is not from an atomic transition of argon or some other element. The researchers argue that an argon line should be much, much weaker than the feature that is detected. In addition, a second team of researchers, also using XMM-Newton data, have claimed detection of lines at the same 3.56 keV energy in the Perseus cluster of galaxies as well as our neighbor, the Andromeda galaxy. There are no expected atomic transition lines at this energy, so the dark matter decay possibility has been suggested by both teams. An argon line around 3.62 keV is a possible influence on the signal, but is expected to be very much weaker.

Confirmation of these XMM-Newton results is required from other experiments in order to gain more confidence in the reality of the 3.56 keV feature, regardless of its cause, and to eliminate with certainty the possibility of an atomic transition origin. Analysis of stacked galaxy cluster data is currently underway for two other X-ray satellite missions, Chandra and Suzaku. In addition, the astrophysics community eagerly awaits the upcoming Astro-H mission, a Japanese X-ray astronomy satellite planned for launch in 2015. It should be able to not only confirm the 3.56 keV X-ray line (if indeed real), but also detect it within our own Milky Way galaxy.

Thus the hypothesis is for dark matter composed primarily of sterile neutrinos of a little over 7.1 keV in mass (in E = mc^2 terms), and that the sterile neutrino has a decay channel to an X-ray photon and a regular neutrino. Each decay product would have an energy of about 3.56 keV. Such a 7 keV sterile neutrino is plausible with respect to the known density of dark matter and various cosmological and particle physics constraints. If the dark matter is primarily due to this sterile neutrino, then it falls into the "warm" dark matter domain, intermediate between "cold" dark matter due to very heavy particles and "hot" dark matter due to very light particles.
The abundance of dwarf satellite galaxies found in the Milky Way's neighborhood is lower than predicted from cold dark matter models. Warm dark matter could solve this problem. As Dr. Abazajian puts it in his recent paper "Resonantly Produced 7 keV Sterile Neutrino Dark Matter Models and the Properties of Milky Way Satellites", the parameters necessary in these models to produce the full dark matter density fulfill previously determined requirements to successfully match the Milky Way galaxy's total satellite abundance, the satellites' radial distribution, and their mass density profile.
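Not from either post above: a tiny numerical look at the w(a) = w0 + wa*(1 - a) parameterization used in the dark-energy discussion, comparing a cosmological constant with the quoted quasar best fit (w0 of about -1.4, wa of about 1, with large uncertainties).

```python
# Evaluate the equation-of-state parameterization w(a) = w0 + wa * (1 - a),
# with a = 1 / (1 + z), for a cosmological constant and for the quoted best fit.
def w_of_z(z, w0, wa):
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

for z in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"z = {z:3.1f}:  Lambda w = {w_of_z(z, -1.0, 0.0):+.2f}   "
          f"best-fit w = {w_of_z(z, -1.4, 1.0):+.2f}")
```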
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8912540078163147, "perplexity": 956.302333278188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00379.warc.gz"}
https://www.physicsforums.com/threads/integrate-this.105361/
Integrate This

1. Dec 29, 2005 samalkhaiat

Happy new year to you all. This is a very nice place, so girls and boys, let me give you something to play with: Integrate (r^.a)(r^.b)(r^.c)(r^.d) d(cos8) from cos8=1 to cos8=-1. a, b, c and d are constant vectors, r^ is the unit vector (iX+jY+kZ)/|r|, 8 is the spherical angle between r^ and the Z-axis. The dot represents the scalar product. When you have done that, do this one: (a.r^)/(1+K.r^) d(cos8), the limits of integration same as before; a and K are constant vectors. cheers sam

2. Dec 29, 2005 abdo375

plzzzzzzzzzzzzzzzzzzzzz take some time and learn latex

3. Dec 29, 2005 Tom Mattson, Staff Emeritus

:rofl: :rofl: I have to agree, I can't really tell what's supposed to be going on there with the carets. And integrating with respect to the number 8? Is that really what you meant?

4. Dec 30, 2005 Lisa!

I've not noticed we could use $$8$$ instead of $$\theta$$!

Last edited: Dec 30, 2005

5. Dec 30, 2005 Tide

$$\frac {2}{5} a b c d$$

I was tempted to say 0 because 8 is usually a constant so that d cos(8) = 0. ;)

6. Dec 30, 2005 samalkhaiat

1) I don't have that "sometime". 2) It is too complicated for my little brain. sam

7. Dec 30, 2005 fargoth

(quoting samalkhaiat) Just click on the equations and see how they wrote it... I'm sure you'd figure it out...

8. Dec 30, 2005 Pengwuino

Find time :) And if you can integrate, I don't think it's too complicated for you... especially with a program such as TexAide.

9. Dec 30, 2005 samalkhaiat

The carets are used to distinguish between a vector and its "unit vector" :rofl::rofl: No, the 8, as I said, is the spherical angle (do you remember theta?), it is just a symbol :tongue2: sam

10. Dec 30, 2005 samalkhaiat

Yes darling, you could, if you think of 8 as a symbol. :zzz: sam

11. Dec 30, 2005 samalkhaiat

Yes, now this answer would be right if 8 were a number, but it is not. So even your temptation is wrong. :tongue2: :rofl: good luck sam

12. Dec 30, 2005 Tide

samal, Prove it! :)

13. Dec 30, 2005 samalkhaiat

14. Dec 30, 2005 fargoth

The final answer has mixed parts of a, b, c and d in it, not only $$a_xb_x$$ etc., so it can't be abcd.

15. Dec 30, 2005 fargoth

16. Dec 30, 2005 samalkhaiat

Last edited: Dec 30, 2005

17. Dec 30, 2005 samalkhaiat

18. Dec 30, 2005 samalkhaiat

19. Dec 31, 2005 Lisa!

I meant I hadn't noticed the similarity, dear!

20. Dec 31, 2005
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9497960209846497, "perplexity": 3833.197316459737}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161501.96/warc/CC-MAIN-20180925103454-20180925123854-00083.warc.gz"}
http://physics.stackexchange.com/questions/48030/what-is-the-value-of-a-quantum-field/48033
# What is the value of a quantum field?

As far as I'm aware (please correct me if I'm wrong) quantum fields are simply operators, constructed from a linear combination of creation and annihilation operators, which are defined at every point in space. These then act on the vacuum or existing states. We have one of these fields for every type of particle we know. E.g. we have one electron field for all electrons. So what does it mean to say that a quantum field is real or complex valued? What do we have that takes a real or complex value? Is it the operator itself or the eigenvalue given back after it acts on a state? Similarly, when we have fermion fields that are Grassmann valued, what is it that we get that takes the form of a Grassmann number?

The original reason I considered this is that I read boson fields take real or complex values whilst fermionic fields take Grassmann variables as their values. But I was confused by what these values actually tell us.

When physicists say that a quantum field $\phi(x)$ is real-valued, they are usually referring to Feynman's path integral formulation of quantum field theory, which is equivalent to Schwinger's operator formulation. The values of a field $\phi(x)$ in the path integral formulation are numbers. E.g.:

• If the numbers are real, we say that the field $\phi(x)$ is real-valued. (Such a field $\phi(x)$ typically corresponds to a Hermitian field operator $\hat{\phi}(x)$ in the operator formalism.)
• If the numbers are complex, we say that the field $\phi(x)$ is complex-valued.
• If the numbers are Grassmann-odd, we say that the field $\phi(x)$ is Grassmann-odd. (The numbers in this case are so-called supernumbers. See also this Phys.SE post.)

- Thank you very much :). – Siraj R Khan Dec 31 '12 at 21:28
- Incidentally, was my initial interpretation of a quantum field (as a linear combination of creation/annihilation operators at every point) correct as well, or are there subtleties that I'm missing? – Siraj R Khan Jan 1 '13 at 3:01
- You are essentially correct, though there are difficulties due to interactions in perturbation theory. Typically the fields in the interacting case have a nonzero amplitude to produce multi-particle states from the vacuum. You could think of it like this: the bare electron field operator creates a bare electron, which is a superposition of a real "dressed" electron and a bunch of other states. Renormalisation rescales the field so that the single-particle amplitude is simple, and you do extra things (LSZ reduction) to remove the multi-particle contributions from physical amplitudes. – Michael Brown Jan 1 '13 at 3:13
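Not part of the question or answer: a small numerical illustration of objects that anticommute and square to zero, the defining behaviour of the Grassmann-odd quantities mentioned above. Here they are modelled concretely as matrices built from fermionic creation operators (a Jordan-Wigner-style construction of my own choosing); genuine Grassmann supernumbers are more abstract, but the algebraic relations checked here are the same.

```python
import numpy as np

adag = np.array([[0.0, 0.0], [1.0, 0.0]])   # single-mode creation operator
Z    = np.diag([1.0, -1.0])                  # parity factor
I2   = np.eye(2)

theta1 = np.kron(adag, I2)    # first "Grassmann-like" generator
theta2 = np.kron(Z, adag)     # second generator, with the parity string attached

anticomm = theta1 @ theta2 + theta2 @ theta1
print(np.allclose(anticomm, 0))          # True: theta1*theta2 = -theta2*theta1
print(np.allclose(theta1 @ theta1, 0))   # True: theta1 squares to zero
print(np.allclose(theta2 @ theta2, 0))   # True: theta2 squares to zero
```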
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245274662971497, "perplexity": 332.0752902887987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095632.21/warc/CC-MAIN-20150627031815-00256-ip-10-179-60-89.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2201221/exercise-on-linking-number-on-milnors-topology-from-the-differentiable-viewpoin
# Exercise on linking number on Milnor's Topology from the differentiable viewpoint

It's problem 13 in the book. For disjoint compact manifolds $M, N$ without boundary in $R^{k+1}$, where $m+n=k$ (with $m$ and $n$ the dimensions of $M$ and $N$), define their linking number $l(M,N)$ as the degree of the mapping $\lambda (x,y)=\frac{x-y}{||x-y||}$. If $M$ is the boundary of an oriented manifold $\Sigma$ disjoint from $N$, prove that $l(M,N)=0$.

I thought it might be divided into two cases.

1. When $\Sigma$ is unbounded, it seems possible to find an $\widetilde{M}$ homotopic to $M$ which is far away from $N$, thus $\lambda \simeq 0$.
2. When $\Sigma$ is bounded, $M$ seems to be contractible to a single point.

I want to make this intuition rigorous if possible. Any help is appreciated.
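Not part of the question: a numerical illustration of the linking number in the lowest-dimensional case m = n = 1, k = 2 (two disjoint closed curves in R^3), where the degree of λ is computed by the Gauss linking integral. The specific circles are my own choice: the first pair is linked once, while pulling the second circle far away gives linking number zero.

```python
import numpy as np

def gauss_linking(c1, c2, n=400):
    """Approximate the Gauss linking integral of two parametrized closed curves."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r1, r2 = c1(t), c2(t)                     # sampled points, shape (n, 3) each
    dr1 = np.roll(r1, -1, axis=0) - r1        # discrete tangent segments
    dr2 = np.roll(r2, -1, axis=0) - r2
    total = 0.0
    for i in range(n):
        d = r1[i] - r2                        # separation vectors, shape (n, 3)
        cross = np.cross(dr1[i], dr2)         # dr1 x dr2 for every segment pair
        total += np.sum(np.einsum('ij,ij->i', d, cross) / np.linalg.norm(d, axis=1) ** 3)
    return total / (4.0 * np.pi)

circle_a = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
circle_b = lambda t: np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
far_b    = lambda t: np.stack([5.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)

print(round(gauss_linking(circle_a, circle_b), 3))   # close to +/-1 (linked pair)
print(round(gauss_linking(circle_a, far_b), 3))      # close to 0   (unlinked pair)
```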
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719161987304688, "perplexity": 143.77007937709502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997335.70/warc/CC-MAIN-20190615202724-20190615224724-00400.warc.gz"}
https://www.physicsforums.com/threads/the-ballistic-pendulum.169602/
# The ballistic pendulum

1. May 9, 2007 ### leykis101

I did a lab, the ballistic pendulum, and need help with one of the questions in the lab. Here it is: The initial kinetic energy of the projectile is (mv^2/2). The final potential energy of the system is (M+m)gh. However, these two values are not equal in this experiment. Explain what happened to the missing energy and why it doesn't really affect the results. m = the mass of the ball, M = the mass of the pendulum. Any help? Thanks

2. May 9, 2007 ### Staff: Mentor

What do you think? Hint: What kind of collision does the projectile undergo?

3. May 9, 2007 ### leykis101

If I had to take an educated guess, I would have to say that energy is lost to heat when the ball is embedded into the pendulum. Because of this, the energy at the top is less than the energy at the bottom. I'm not sure if this is correct though.

4. May 9, 2007 ### Staff: Mentor

Completely correct! The ball and pendulum undergo an inelastic collision, which transforms some of the ball's original KE into thermal energy. OK, so why doesn't this "loss" of energy affect your results?

5. May 9, 2007 ### Hootenanny, Staff Emeritus

Indeed, that is correct. So that would make this collision an # collision. You could also perhaps mention air resistance, although it is most probably negligible. So why wouldn't this loss of energy affect your experiment?

Edit: The Doc strikes again...

6. May 9, 2007 ### DAKONG

The energy conservation formula refers to the ideal case. Why do you test many times? Because there are many things in reality, such as force against the wind or rust on the metal. Yes :!!) maybe sound and heat.

Last edited: May 9, 2007
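Not part of the thread: a short numerical sketch with made-up values for the ball mass, pendulum mass, and rise height, showing how the muzzle speed is recovered (momentum conservation through the collision, energy conservation only after it) and what fraction of the ball's kinetic energy the inelastic collision converts to heat and deformation.

```python
import math

g = 9.81          # m/s^2
m = 0.010         # kg, ball (hypothetical value)
M = 0.200         # kg, pendulum bob (hypothetical value)
h = 0.050         # m, measured rise of the pendulum (hypothetical value)

V = math.sqrt(2 * g * h)          # speed of (M + m) just after the collision
v = (M + m) / m * V               # ball speed just before impact, from momentum conservation

KE_before = 0.5 * m * v**2
PE_after  = (M + m) * g * h
lost_fraction = 1 - PE_after / KE_before   # equals M / (M + m) exactly

print(f"ball speed v                    = {v:.2f} m/s")
print(f"KE just before impact           = {KE_before:.3f} J")
print(f"PE at the top of the swing      = {PE_after:.4f} J")
print(f"fraction of KE lost in collision = {lost_fraction:.3f}  (= M/(M+m) = {M/(M+m):.3f})")
```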
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9085308313369751, "perplexity": 1541.0857738381603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721558.87/warc/CC-MAIN-20161020183841-00079-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.physics.brocku.ca/Courses/1P21_Sternin/RotationalMotion/dynamics-rot.php
Centripetal force

$F_c = ma_c = {{mV^2}\over r}=m\omega^2 r, \qquad \vec{F_c}\> \|\> \vec{a_c}\> \bot\> \vec{V}$

$F_c$ can be provided by:

model airplane: $F_c=T$, tension in the wire
satellite in orbit: $F_c=G{{mM_E}\over{R^2}}$, Earth's gravity
car turning: $F_c=f_s$, static friction (between the tire and the road)
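A short numerical companion to the formulas above; all numbers are made up for illustration (a car rounding a flat curve and a satellite in a roughly 400 km orbit).

```python
import math

# Car on a flat curve: friction must supply F_c = m v^2 / r
m_car, v_car, r_curve = 1200.0, 25.0, 80.0       # kg, m/s, m (hypothetical values)
F_c = m_car * v_car**2 / r_curve
mu_needed = F_c / (m_car * 9.81)                 # since f_s <= mu_s * m * g on a flat road
print(f"car: F_c = {F_c:.0f} N, requires mu_s >= {mu_needed:.2f}")

# Satellite: gravity supplies F_c, so G m M_E / R^2 = m v^2 / R  ->  v = sqrt(G M_E / R)
G, M_E, R_E = 6.674e-11, 5.972e24, 6.371e6
R = R_E + 400e3                                  # ~400 km altitude (hypothetical orbit)
v_orbit = math.sqrt(G * M_E / R)
print(f"satellite: v = {v_orbit/1000:.2f} km/s, period = {2*math.pi*R/v_orbit/60:.1f} min")
```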
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909039318561554, "perplexity": 1082.7356172020957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00612.warc.gz"}
https://www.physicsforums.com/threads/funloving-sliding-pig.192277/
Funloving sliding pig

1. Oct 18, 2007 KMjuniormint5

A slide-loving pig slides down a certain 19° slide in twice the time it would take to slide down a frictionless 19° slide. What is the coefficient of kinetic friction between the pig and the slide?

Here is how I approached the problem: since it is on a slide, I changed the x-axis to run along the slide with positive heading down, and the y-axis to face in the n (normal force) direction on the pig. I split up the weight force into Wy = -(W)cos(19) and Wx = (W)sin(19), and I also know that Fk = n*Mk (Mk = coefficient of kinetic friction, "mu k"). Since there is no acceleration in the y-direction it is safe to say Wy = n, so in the x-direction Fnetx = ma (acceleration in x-direction); break that down to Wx - Fk = m*a (acceleration in x direction). The part I am confused about is: what does the time have to do with this problem? And since I am not given the mass of the pig I cannot go any further... any tips to keep me going?

2. Oct 18, 2007 chocokat

I did this problem a couple of weeks ago, and I got an actual numerical answer for the coefficient of friction in the end, but I'll warn you, it's a lot of work, and you have to be careful as you go. Here's a clue. To start, you need to figure out the acceleration in the 'x' direction by setting up free body diagrams and getting the forces all down. And instead of using W, you should use mg. Start with the non-friction slide, that'll give you a leg up on the friction slide. If you post stuff, I'll help you as you go. It does seem like there's not a lot of information, but there really is, so good luck.

3. Oct 18, 2007 TMM

I think you can say that the final velocity will be twice as much for the frictionless ramp because acceleration is constant. From there it is an energy problem that is independent of mass or height of ramp.

4. Oct 18, 2007 KMjuniormint5

For the frictionless slide there is no friction... so the only force acting in the x direction would be Wx, which is Wsin(19)... break that down to m*g*sin(19) = m*a. Could you then divide both sides by m (which would cancel out), leaving you with g*sin(19) = a... so then a = 9.8*sin(19)... a = 3.19 m/s^2... Since he went down the ramp in twice the time of the frictionless ramp, does that mean divide a by 2?

5. Oct 18, 2007 chocokat

When I did this, it was a force problem (we had not done energy stuff yet). So if you continue this way, I don't know that you can use the "double the time" yet. I agree with your acceleration of 3.19. Next I did the friction slide. It starts out basically the same as the non-friction slide, but you have to add the fact that there is friction, so now there are two forces on the friction ramp, and you can solve for acceleration again. It'll have the $$\mu_k$$ in that equation. For the last step, since the ramp lengths are the same, I used the equation: $$x_2 = x_1 v_1t + \frac{1}{2}at^2$$ Since $$v_1 = 0$$ and $$x_1 = 0$$ those pieces go out, and for the second equation make sure you use $$2t^2$$, then set the two equations equal to each other and you can solve for $$\mu_k$$.

6. Oct 18, 2007 KMjuniormint5

You forgot a '+' sign in between the x1 and v1t, but regardless... OK, so yes, I agree with you that the velocity $$v_1$$ is 0 and the starting position is 0, but what I do not understand is: what are you trying to find, x2 or t? Because let's say you set them equal to each other using the 3.19 as acceleration... so it looks something like so. Frictionless: x = 1.595t^2. Friction: x = 3.19t^2. And how is that going to help you solve for $$\mu_k$$, if the formula to find $$\mu_k$$ is Fk = n $$\mu_k$$? And I understand that there is no acceleration in the y direction... so n = Wy and Wy = -mgcos(19), so Fk / (-mgcos19) = $$\mu_k$$.

7. Oct 18, 2007 chocokat

Did you do the forces for the friction slide, then solve for the 'x' acceleration on that slide? Instead of it just being a = 3.19, it'll be a = 3.19 + (or minus) the friction force, $$\mu$$ or $$\mu$$mg(cos19). You can do the calculations. After that, you need to solve for $$\mu$$. I used the equation below. As you start plugging stuff in, more erroneous stuff (like 't') will cancel, and you can solve for $$\mu$$.

8. Oct 18, 2007 KMjuniormint5

Alright, I will try to comprehend everything and see if I can come up with an answer, and I will let you know if I got an answer, chocokat, by tomorrow (20 hours from now). Thanks for all the help... again, I'll let you know if I came up with an answer 20 hours from now.

9. Oct 18, 2007 chocokat

O.k., good luck kiddo. As I mentioned earlier, just be careful with your calculations and + or - signs, because you don't want to get messed up by a little mistake, they're so annoying.

10. Oct 19, 2007 KMjuniormint5

OK, sorry chocokat, but I'm still lost... here is what I have so far (this is only the acceleration for x, because in the y direction it is 0)... The acceleration for the frictionless slide is a = 3.19, so we plug that into the equation $$x_2 = x_1 + v_1t + \frac{1}{2}at^2$$ and I came up with x = 1/2(3.19)t^2, and I solved that for t, so t = square root of (x/1.595). Then for the friction slide I got a = 3.19 - $$\mu_k$$mgcos19; it is minus because friction acts in the direction opposite to the motion. So if I plug it into the equation $$x_2 = x_1 + v_1t + \frac{1}{2}at^2$$ I get x = 3.19 - $$\mu_k$$mgcos19 (t^2), and I solved that for t to get t = square root of (x/(3.19 - $$\mu_k$$mgcos19)). Now I set those two equations equal to each other... but I do not know where to go from there...

Last edited: Oct 19, 2007

11. Oct 19, 2007 chocokat

O.k., this is what you do. For the frictionless slide you have $$a = 3.19 \ \mathrm{m/s^2}$$. For the slide with friction you have $$a = 3.19 \ \mathrm{m/s^2} - \mu_k \cdot 9.267 \ \mathrm{m/s^2}$$.

1. Now, put those into two equations. On the equation for the slide with friction, remember to put $$2t^2$$. Don't do any further calculations at this point, just leave stuff as is, it will make it easier in a couple of steps. For example, in $$\frac{1}{2} 3.19t^2$$ leave the $$\frac{1}{2}$$ alone, don't multiply it by the 3.19.
2. Set the equations equal to each other (since the distance 'x' is the same for both slides, the first equation equals the second equation). If you don't understand this, please let me know and I'll try to explain it further. So now, there are no x's.
3. If you haven't multiplied anything through, such as the $$\frac{1}{2}$$, you should notice that you can easily multiply stuff out. If you get stuck here, let me know. After this, you should be able to solve for $$\mu_k$$.

12. Oct 19, 2007 KMjuniormint5

OK, so now I take (1/2)3.19t^2 = (1/2)(3.19 - $$\mu_k$$(9.267))2t^2... You can cancel out the 1/2 and t^2, and that leaves you with 3.19 = 2(3.19 - $$\mu_k$$(9.267)), so shouldn't $$\mu_k$$ = 0.17212?

13. Oct 19, 2007 KMjuniormint5

And I understand about the x's.

14. Oct 19, 2007 chocokat

You're so close. For the right side, the 't' is $$(2t)^2$$, so the '2' becomes '4': $$(2t)^2 = 4t^2$$. Then when you finish the calculations: 3.19 = 4(3.19 - $$\mu_k$$9.267), so the final answer is $$\mu_k = 0.258$$.

This problem was a lot of work, but one of the key things to remember is that even though you weren't given a lot of information, you were able to calculate a lot, and if you run into similar problems, sometimes it's easier not to 'solve' for variables for a while, as they end up cancelling out, making the solution easier to calculate. And be careful as you go. I make so many dumb mistakes missing '-' signs, or with simple math errors; for a big problem just do it step by step to avoid those types of problems, you'll end up saving time and frustration in the long run. Good work!

15. Oct 19, 2007 KMjuniormint5

Man choco... forgot about that squared part... amazing work, thanks for helping me out... God bless
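Not from the thread: the same result in a few lines. Covering equal slide lengths with t_friction = 2·t_frictionless forces a_friction = a_frictionless/4, i.e. g·sin(θ) - μk·g·cos(θ) = g·sin(θ)/4, so μk = (3/4)·tan(θ), independent of g, the pig's mass, and the slide length.

```python
import math

theta = math.radians(19)

a_free = 9.8 * math.sin(theta)        # frictionless acceleration, ~3.19 m/s^2
mu_k = 0.75 * math.tan(theta)         # closed form from a_friction = a_free / 4
a_fric = 9.8 * (math.sin(theta) - mu_k * math.cos(theta))

print(f"a (frictionless)  = {a_free:.2f} m/s^2")
print(f"mu_k              = {mu_k:.3f}")                      # ~0.258, matching the thread
print(f"a (with friction) = {a_fric:.3f} m/s^2 = a_free/4 = {a_free/4:.3f}")
```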
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225623607635498, "perplexity": 614.9158620483528}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00441-ip-10-31-129-80.ec2.internal.warc.gz"}
https://quant.stackexchange.com/questions/25983/stochastic-discount-factor-transformation
# stochastic discount factor transformation I have $$\frac{dM_t}{M_t}=-\frac{\mu}{\sigma} dW_t + \gamma_t dB_t, \tag{1}$$ where $B_t$ and $W_t$ are two independent Brownian Motions, which was further presented as $$M_t=\exp \left( -\frac{\mu}{\sigma}W_t - \frac{1}{2} \frac{\mu^2}{\sigma^2}t + \int_0^t \gamma_s dB_s -\frac{1}{2} \int_0^t \gamma_s^2 ds \right) \tag{2}$$ can anybody explain how $(1)$ was transitioned into $(2)$? • Made some stylistic changes and typo corrections. – Gordon May 13 '16 at 17:35 Apply Ito's lemma to $\ln M_t$, we obtain that \begin{align*} d\ln M_t &= \frac{1}{M_t} dM_t -\frac{1}{2} \frac{1}{M_t^2} d\langle M, M\rangle_t\\ &=-\frac{\mu}{\sigma} dW_t + \gamma_t dB_t -\frac{1}{2} \frac{1}{M_t^2}\left(\frac{\mu^2}{\sigma^2} + \gamma_t^2\right)M_t^2dt\\ &=-\frac{\mu}{\sigma} dW_t + \gamma_t dB_t -\frac{1}{2} \left(\frac{\mu^2}{\sigma^2} + \gamma_t^2\right) dt.\tag{Eq. 1} \end{align*} Here, since $dM_t=M_t\big[-\frac{\mu}{\sigma} dW_t + \gamma_t dB_t\big]$, \begin{align*} d\langle M, M\rangle_t = \left(\frac{\mu^2}{\sigma^2} + \gamma_t^2\right)M_t^2dt. \end{align*} From $(\textrm{Eq.} 1)$, \begin{align*} \ln M_t - \ln M_0 &= \int_0^t\left[-\frac{\mu}{\sigma} dW_s + \gamma_s dB_s -\frac{1}{2} \left(\frac{\mu^2}{\sigma^2} + \gamma_s^2\right) ds\right]\\ &=-\frac{\mu}{\sigma}W_t - \frac{1}{2} \frac{\mu^2}{\sigma^2}t + \int_0^t \gamma_s dB_s - \frac{1}{2} \int_0^t \gamma_s^2 ds. \end{align*} That is, \begin{align*} M_t = M_0 \exp\left(-\frac{\mu}{\sigma}W_t - \frac{1}{2} \frac{\mu^2}{\sigma^2}t + \int_0^t \gamma_s dB_s - \frac{1}{2} \int_0^t \gamma_s^2 ds\right). \end{align*} • where the $M_t^2$ in the second line comes from? – Michal May 13 '16 at 19:03 • I can see that integrating both sides and solving for $M_t$ \begin{align*} d\ln M_t &=-\frac{\mu}{\sigma} dW_t + \gamma_t dB_t -\frac{1}{2} \left(\frac{\mu^2}{\sigma^2} + \gamma_t^2\right) dt. \end{align*} gives the equation (1), but I don't quite follow the condition for $M_0=1$ – Michal May 13 '16 at 19:33 • @Michal: $M_t$ can not be zero based on the given dynamics. I do not know what is the safe rate. For any quantity, the initial value is assumed to be known. You do not have to assume that $M_0=1$, but, then you need to have a multiplier for your equation $(2)$. – Gordon May 13 '16 at 20:08 • @Michal: Do you mean that $M_t$ is a constant? Note that, you may be provide with the initial $M_0$, or you can assume that $M_0=1$. I do not know why you want to say that $M_t=1$. If that is the case, you do not have to model it using an SDE as it is a constant. My answer above is based on the SDE you provided. You can force it to be a constant by assuming that $\mu=0$ and $\gamma_t=0$, for all $t$. – Gordon May 13 '16 at 20:28
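Not part of the question or answers: a quick Monte Carlo sanity check of the passage from (1) to (2), taking μ/σ and γ_t to be constants (my simplifying assumption) so the closed form is easy to evaluate. An Euler-Maruyama path of the SDE is compared against formula (2) built from the same Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, gamma = 0.4, 0.3          # constant mu/sigma and gamma_t (assumed values)
T, n = 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
dB = rng.normal(0.0, np.sqrt(dt), n)

# Euler-Maruyama for dM_t = M_t (-theta dW_t + gamma dB_t), with M_0 = 1
M = 1.0
for i in range(n):
    M *= 1.0 - theta * dW[i] + gamma * dB[i]

W_T, B_T = dW.sum(), dB.sum()
M_exact = np.exp(-theta * W_T - 0.5 * theta**2 * T + gamma * B_T - 0.5 * gamma**2 * T)

print(f"Euler-Maruyama M_T: {M:.6f}")
print(f"closed-form    M_T: {M_exact:.6f}")   # agree up to the time-discretization error
```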
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9960815906524658, "perplexity": 1011.5924205195108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148850.96/warc/CC-MAIN-20200229083813-20200229113813-00499.warc.gz"}
https://socratic.org/questions/580a03f17c01490f6fe6c148
# Question 6c148

Jan 11, 2017

The locus of the midpoints runs between the two points on the ellipse where the slope of the tangent line is equal to 1.

#### Explanation:

Therefore, we need to find the two points on the ellipse where the slope of the tangent line is equal to 1. Implicitly differentiate the given equation:

$\frac{d \left({x}^{2}\right)}{\mathrm{dx}} + \frac{d \left(4 {y}^{2}\right)}{\mathrm{dx}} = \frac{d \left(1\right)}{\mathrm{dx}}$

$2 x + 8 y \frac{\mathrm{dy}}{\mathrm{dx}} = 0$

$8 y \frac{\mathrm{dy}}{\mathrm{dx}} = - 2 x$

$\frac{\mathrm{dy}}{\mathrm{dx}} = - \frac{x}{4 y}$

Set $\frac{\mathrm{dy}}{\mathrm{dx}} = 1$:

$- \frac{x}{4 y} = 1$

$y = - \frac{x}{4}$

Substitute $- \frac{x}{4}$ for $y$ into the equation for the ellipse:

${x}^{2} + \frac{4 {x}^{2}}{16} = 1$

$\frac{5}{4} {x}^{2} = 1$

${x}^{2} = \frac{4}{5}$

$x = \pm \frac{2\sqrt{5}}{5}$

Substitute $\frac{4}{5}$ for ${x}^{2}$ into the equation for the ellipse:

$\frac{4}{5} + 4 {y}^{2} = 1$

$4 {y}^{2} = \frac{1}{5}$

${y}^{2} = \frac{1}{20}$

$y = \pm \frac{\sqrt{5}}{10}$

The locus of the midpoints is the line segment between the points

$\left(- \frac{2\sqrt{5}}{5} , \frac{\sqrt{5}}{10}\right) \text{ and } \left(\frac{2\sqrt{5}}{5} , - \frac{\sqrt{5}}{10}\right),$

with slope

$m = \frac{- \frac{\sqrt{5}}{10} - \frac{\sqrt{5}}{10}}{\frac{2\sqrt{5}}{5} - \left(- \frac{2\sqrt{5}}{5}\right)} = - \frac{1}{4}.$

The equation of the locus is the line segment

$y = - \frac{1}{4}\left(x - \frac{2\sqrt{5}}{5}\right) - \frac{\sqrt{5}}{10}, \quad - \frac{2\sqrt{5}}{5} < x < \frac{2\sqrt{5}}{5}.$
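The computation can be double-checked with a computer algebra system. A short sketch (my own check, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y')
ellipse = x**2 + 4*y**2 - 1

# Implicit slope dy/dx = -Fx/Fy for F(x, y) = 0
slope = -sp.diff(ellipse, x) / sp.diff(ellipse, y)      # simplifies to -x/(4y)

# Points on the ellipse where the tangent slope equals 1
pts = sp.solve([ellipse, sp.Eq(slope, 1)], [x, y])
print(pts)   # (-2*sqrt(5)/5, sqrt(5)/10) and (2*sqrt(5)/5, -sqrt(5)/10), order may differ

# Slope of the segment joining them (the locus of the midpoints)
(x1, y1), (x2, y2) = pts
print(sp.simplify((y2 - y1) / (x2 - x1)))               # -1/4
```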
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075518608093262, "perplexity": 315.14164922129845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00336.warc.gz"}
https://matematiku.wordpress.com/category/geometry/
# ln(phi) in Hyperbolic Functions

Another form of $\ln (\phi)$ appears among the hyperbolic functions, namely as the inverse hyperbolic cosecant of 2, or the inverse hyperbolic sine of 1/2:

$\displaystyle \boxed {\ln(\phi)=\mathrm{csch}^{-1}(2)= \mathrm{sinh}^{-1}\left ( \frac{1}{2} \right )= 0.481211825....}$

The key formula behind the appearance of $\ln(\phi)$ in the hyperbolic functions comes from the following problem:

$\displaystyle \boxed {e^k-e^{-k}=1}$ or $\displaystyle \boxed {e^{2k}-e^k-1=0}$, whose solution is $\boxed {k=\ln(\phi)}$ or $\boxed {e^{k}=\phi}$,

where $e$ is Euler's constant and $\phi$ is the golden ratio. Just as interesting, it can also be shown that $\ln (\phi)$ can be expressed by the following series expansion:

$\displaystyle \boxed {\ln(\phi)=\sum_{n=0}^{\infty} \frac{(-1)^n(2n)!}{2^{4n+1}(2 n+1)(n!)^2}= 0.481211825....}$

# Boy's Surfaces

Boy's surface is a nonorientable surface found by Werner Boy in 1901. Boy's surface is an immersion of the real projective plane in 3-dimensional space without infinities and singularities, but it meets itself in a triple point (it self-intersects). The images below are generated using 3D-XplorMath, with the “optimal” Bryant-Kusner parametrization when $a= 0.5$, $-1.45 < u < 0$, and $0 < v < 2\pi$ :

Other viewpoints :

Projective Plane

There are many ways to make a model of the Boy's surface using the projective plane. One of them is to take a disc and join together opposite points on its edge with self-intersection (note: in fact, this cannot be done in three dimensions without self-intersections), so the disc must pass through itself somewhere. The Boy's surface can be obtained by sewing a corresponding band (Möbius band) round the edge of a disc.

The $\mathbb{R}^{3}$ Parametrization of Boy's surface

Rob Kusner and Robert Bryant discovered the beautiful parametrization of the Boy's surface on a given complex number $z$, where $\left | z \right |\leq 1$, giving the Cartesian coordinates $\left ( X,Y,Z \right )$ of a point on the surface. In 1986 Apéry gave the analytic equations for the general method of nonorientable surfaces. Following this standard form, the $\mathbb{R}^{3}$ parametrization of the Boy's surface can also be written as a smooth deformation given by the equations :

$x\left ( u,v \right )= \frac {\sqrt{2}\cos\left ( 2u \right ) \cos^{2}\left ( v \right )+\cos \left ( u \right )\sin\left ( 2v \right ) }{D},$

$y\left ( u,v \right )= \frac {\sqrt{2}\sin\left ( 2u \right ) \cos^{2}\left ( v \right )-\sin \left ( u \right )\sin\left ( 2v \right ) }{D},$

$z\left ( u,v \right )= \frac {3 \cos^{2}\left ( v \right )}{D},$

where $D= 2-a\sqrt{2}\sin\left ( 3u \right )\sin\left ( 2v \right )$ and $a$ varies from 0 to 1.

Here are some links related to Boy's surface :

# Parametric Breather Pseudospherical Surface

Parametric breather surfaces are known to be in one-to-one correspondence with the solutions of a certain non-linear wave equation, the so-called sine-Gordon equation. It turns out that solutions to this equation correspond to unique pseudospherical surfaces, namely solitons. The breather surface corresponds to a time-periodic 2-soliton solution.
The parametric breather surface has the following parametric equations :

$x = -u+\frac{2\left(1-a^2\right)\cosh(au)\sinh(au)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$

$y = \frac{2\sqrt{1-a^2}\cosh(au)\left(-\sqrt{1-a^2}\cos(v)\cos\left(\sqrt{1-a^2}v\right)-\sin(v)\sin\left(\sqrt{1-a^2}v\right)\right)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$

$z = \frac{2\sqrt{1-a^2}\cosh(au)\left(-\sqrt{1-a^2}\sin(v)\cos\left(\sqrt{1-a^2}v\right)+\cos(v)\sin\left(\sqrt{1-a^2}v\right)\right)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$

where $0 < a < 1$, $u$ controls how far the tip goes, and $v$ controls the girth. When $a=0.4$, $-13 < u < 13$, and $-38 < v < 38$ :

With orthographic projection :

Surfaces in $\mathbb{R}^{3}$ having constant Gaussian curvature $K= -1$ are usually called pseudospherical surfaces. If $X : M \to \mathbb{R}^{3}$ is a surface with Gaussian curvature $K= -1$, then it is known that there exists a local asymptotic coordinate system $(x,t)$ on $M$ such that the first and second fundamental forms are $dx^{2}+dt^{2}+2 \cos{q}~dx~dt$ and $2 \sin{q}~dx~dt$, where $q$ is the angle between asymptotic lines (the x-curves and t-curves). The Gauss-Codazzi equations for $M$ in these coordinates become a single equation, the sine-Gordon equation (SGE) : $q_{xt}= \sin{q}$. The SGE is one of the model soliton equations.

• Chuu-Lian Terng. 2004. Lecture notes on curves and surfaces in $\mathbb{R}^{3}$, available here.
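The parametrization above drops straight into NumPy. A small sketch (my own; the sampling ranges echo the $a = 0.4$ patch mentioned above):

```python
import numpy as np

def breather(u, v, a=0.4):
    """Evaluate the breather-surface parametrization quoted above (assumes 0 < a < 1)."""
    w = np.sqrt(1 - a**2)
    denom = a * ((1 - a**2) * np.cosh(a*u)**2 + a**2 * np.sin(w*v)**2)
    x = -u + 2*(1 - a**2) * np.cosh(a*u) * np.sinh(a*u) / denom
    y = 2*w*np.cosh(a*u) * (-w*np.cos(v)*np.cos(w*v) - np.sin(v)*np.sin(w*v)) / denom
    z = 2*w*np.cosh(a*u) * (-w*np.sin(v)*np.cos(w*v) + np.cos(v)*np.sin(w*v)) / denom
    return x, y, z

# Sample the patch used for the pictures: a = 0.4, -13 < u < 13, -38 < v < 38
u, v = np.meshgrid(np.linspace(-13, 13, 200), np.linspace(-38, 38, 400))
x, y, z = breather(u, v)
print(x.shape, float(z.max()))   # quick sanity check; x, y, z can be fed to matplotlib's plot_surface
```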
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 45, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008306860923767, "perplexity": 3204.9731815540326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424846.81/warc/CC-MAIN-20170724102308-20170724122308-00005.warc.gz"}
https://www.physicsforums.com/threads/check-physics-answers.10894/
# Homework Help: Check physics answers

1. Dec 13, 2003

### fish

problem #1
Two point charges of +3.0uC and -7.0uC are placed at x=0 and x=.20m, respectively. What is the magnitude of the electric field at the point midway between them (x=.10m)?
E3 = (kq)/r^2 = [(9x10^9 )(3x10^-6 C)]/.10m^2 = 2.7x10^6 N/C
E7 = (kq)/r^2 = [(9x10^9 )(7x10^-6 C)]/.10m^2 = 6.3x10^6 N/C
E=E3+E7 = 9x10^6 N/C
solutions manual has the answer as 9x10^5 N/C

problem #2
If the distance between two closely spaced parallel plates is halved, the electric field between them: becomes 4 times as large.
E=4/A E=4/d^2 if d=2, E=1 if d=1, E=4 so becomes 4 times as large
solutions manual has the answer as remains the same

problem #3
A 12-V battery is connected to a parallel-plate capacitor with a plate area of 0.40 m^2 and a plate separation of 2.0 mm. How much energy is stored in the capacitor?
A=0.40 m^2 d=2.0 mm = .002 m V= 12V find Uc
C=(EoA)/d = (8.85x10^-12 x .40 m^2)/.002 m C= 1.77x10^-9 F
Uc=1/2cv^2 = 1/2(1.77x10^-9 F)(12v)^2 = 1.27x10^-7 J
solutions manual has the answer as 2.5x10^-7 J

2. Dec 14, 2003

### Staff: Mentor

Your answers to #1 and #3 look good to me. I'm not sure what you are doing for #2, but in any case: as long as the charge remains the same, the field within the (closely spaced) plates is independent of separation.

3. Dec 14, 2003

### fish

thanks Doc, in #2 I was getting confused about the (A) area = d^2 vs the separation distance. Q remains the same, and E doesn't depend on the separation distance between two closely spaced parallel plates.
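For what it's worth, the arithmetic in #1 and #3 is easy to re-run. A small script (my own check; constants as used in the thread, SI units):

```python
# Problem 1: fields from +3.0 uC and -7.0 uC at the midpoint, 0.10 m from each charge.
k = 9e9                          # Coulomb constant, N m^2 / C^2 (rounded, as in the thread)
E3 = k * 3e-6 / 0.10**2
E7 = k * 7e-6 / 0.10**2
print(E3, E7, E3 + E7)           # 2.7e6, 6.3e6, 9.0e6 N/C -> the two fields point the same way, so they add

# Problem 3: energy stored in a parallel-plate capacitor at 12 V.
eps0, A, d, V = 8.85e-12, 0.40, 2.0e-3, 12.0
C = eps0 * A / d
U = 0.5 * C * V**2
print(C, U)                      # ~1.77e-9 F and ~1.27e-7 J, matching the poster's values
```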
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888130784034729, "perplexity": 4176.917168713894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823712.21/warc/CC-MAIN-20181212022517-20181212044017-00111.warc.gz"}
https://asmedigitalcollection.asme.org/PVP/proceedings-abstract/PVP2009/43697/243/337353
Fabrication of heavy CrMoV reactors ran into weld metal reheat cracking issues in late 2007 and early 2008. This paper addresses this particular problem and explains the methodology used to solve it. The usual techniques, such as chemical factors and the required NDE, showed their limits, underlining the fact that this problem was unpredictable. Special mechanical testing as well as very accurate chemical analyses allowed the authors to find the root cause. Very small amounts of particular impurities were responsible for the cracking, and a criterion is then proposed to ensure that the problem will not come up again.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8207263350486755, "perplexity": 1429.9152203481724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00557.warc.gz"}
https://www.kopykitab.com/blog/mumbai-university-previous-year-question-papers-electromagnetic-engineering-may-2010/
# Mumbai University Previous year question papers

## Electromagnetic Engineering

N.S. (1) Question No. 1 is compulsory.
(2) Attempt any four questions from the remaining questions.
(3) Assume any suitable data wherever necessary.
(4) Use of the Smith chart is allowed.

1. (a) Explain parallel polarisation.
(b) What is the intrinsic impedance of free space?
(c) Derive the transmission line equation.
(d) Explain the concept of displacement current.

2. (a) Derive Maxwell's equations in integral form. 10
(b) What is a uniform plane wave? Starting from Maxwell's equations, derive the wave equation for free space.

3. (a) The electric field intensity associated with a plane wave travelling in a perfect dielectric medium is given by
Ex(z, t) = 10 cos(2π × 10^7 t – 0.1πx) V/m
(i) What is the velocity of propagation?
(ii) Write down an expression for the magnetic field intensity associated with the wave if μ = μ0.
(b) The net electric field at the boundary of two regions is L 0° V/m. Region 2 is free space and the properties of region 1 are εr = 9 and μr = 1. Using the phase method, calculate Ex1 and Hy1 at d = 1 cm and d = 2 cm, if the wave frequency is 1250 MHz.

4. (a) Show the circuit representation of a parallel plane transmission line and derive its characteristic impedance.
(b) Show that a co-axial line having an outer conductor of radius 'b' has minimum attenuation when the radius 'a' of the inner conductor satisfies the ratio b/a = 3.6.

5. (a) Explain potential functions for sinusoidal radiation oscillations. 10
(b) Match a load impedance of ZL = 100 + j80 Ω to a 50 Ω line using a single series open-circuited stub. Use the Smith chart.

6. (a) Explain the various types of electromagnetic interference.
(b) State the boundary conditions in scalar and vector form.

7. (a) For an electromagnetic wave, prove that E · H = 0 and that E × H has the direction of propagation of the wave.
(b) The characteristic impedance of the line is 50 Ω and VSWR = 2 when the line is loaded. When the load is shorted, the minima shift is 0.15λ towards the load. Determine the load impedance using the Smith chart.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817444384098053, "perplexity": 1902.6378665307002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00015.warc.gz"}
https://www.physicsforums.com/threads/heisenberg-imcertainty-principle-get-it.78949/
# Heisenberg imcertainty principle (get it!) 1. Jun 13, 2005 ### mrfeathers I dont understand why it is so hard to find the exact position and velocity of orbiting electron. And also, why would we want to know it, if it is always moving? im not trying to disprove it or anything, so dont make fun of me, i am an uneducated peon 2. Jun 13, 2005 ### buddingscientist whats your idea of finding both the exact position and velocity of the electron? how do you find the location of your keyboard? shine light on it (lamp or diffuse sunlight through the window) what happens if you try do the same for something tiny like an electron ? 3. Jun 13, 2005 ### mrfeathers you should do that then, just prove the heisenberg uncertainty therom but just looking at the orbiting electrons with your own eyes 4. Jun 13, 2005 ### buddingscientist when you look at your keyboard, the photons of light that 'interact' with it (and ten hit our eyes and give us what we percieve as colour) are much much smaller than the keyboard. but if we are trying to see an electron using a photon.. the wavelength ("size") of an photon is alot larger than that of an electron, what implications does this have? 5. Jun 14, 2005 ### Dr.Brain For complete understanding of Uncertainity Principle , check out : http://www.doxlab.co.nr/ TOPIC#2 on the above site is Heisenberg's.. Last edited by a moderator: Jun 14, 2005 6. Jun 14, 2005 ### ZapperZ Staff Emeritus I would not recommend the site listed above. It perpetuates the misconception of the HUP that I have written about[1], that it is the uncertainty in a single measurement. I have seen this mistake repeated several times within the past week on here. The HUP is NOT about the uncertainty in INSTRUMENTATION or measurement. One can easily verify this by looking at HOW we measure certain quantities. It is silly for Heisenberg to know about technological advances in the future and how much more accurate we can measure things. This is NOT what the HUP is describing. The HUP is NOT describing how well we know about the quantities in a single measurement. I can make as precise of a measurement of the position and momentum of an electron as arbitrary as I want simultaneously, limited to the technology I have on hand. I can make improvements in my accuracy of one without affecting the accuracy of measurement of the other. What the HUP is telling you is the difference between a classical system and a quantum system. In a classical system, if you have a set of identical initial condition, and you measure ONE observable, and then you measure another observable, you will continue to get the SAME value of that 2nd observable everytime you measure the same value of that 1st observable. The more accurate you measure the 1st observable, the more accuract you can predict the value of the 2nd observable the next time you want to do such a measurement. The only limitation to how accurate you can determine these observable is the limitation to your measuring instruments. But these limitations do NOT scale like the HUP. You don't make one worse as the other one becomes better, because these are technical issues and are not related to one another. On the other hand, in a quantum system, under the IDENTICAL initial conditions, even if you measure a series of identical values for the 1st observable, the 2nd observable may NOT yield the identical result each time. In fact, as you narrow down the uncertainty of the 1st observable, the 2nd observable may start showing wildly different values as you do this REPEATEDLY. 
Therefore, unlike the classical system, your ability to know and predict what is going to be the outcome of the 2nd observable goes progressively WORSE as you improve your knowledge about the 1st observable! Again, it has NOTHING to do with the uncertainty in a SINGLE measurement! It doesn't mean that if you measure with utmost accuracy the position of an electron, that that electron momentum is "spread out" all over the place. This is wrong! I can STILL make an accurate determination of that electron's momentum - only my instrument will limit my accuracy of determining that. However, my ability to know what its momentum is going to be the NEXT time I measure it under the idential situation is what is dictated by the HUP! Zz [1] [11-15-2004 09:26 AM] - Misconception of the Heisenberg Uncertainty Principle Last edited: Jun 14, 2005 7. Jun 14, 2005 ### matness time is also an observable so why do we always take Δt= 0 while Δx*Δp >= hbar/2 ? 8. Jun 14, 2005 ### ZapperZ Staff Emeritus Who is this "we"? You will note that in typical school problems, you often work with a solution to the time-INDEPENDENT solution. It doesn't mean that the uncertainty in the time period is zero. It means that for that case, it is irrelevant since the description does not contain any time dynamics. Zz. 9. Jun 14, 2005 ### Nomy-the wanderer I've a simple answer, hope u won't consider it naive... Most of the working theories right now weren't based on solid proofs but because we needed them, and some observations needed explanations...Theorists make theories, without confitmations, but we assume they r correct untill they r proven wrong, technology gives us the chance to make sure that we r on the right track, we can't yet find the electron and the uncertainty principle is what really works right now... See u when they find out something strong that would give us the opportunity to observe electrons.. 10. Jun 14, 2005 ### ZapperZ Staff Emeritus What are "solid proofs"? Would the statment that says "IT WORKS" be considered as "solid proof"? How about if I point to you your modern electronics? Would that be considered as "solid proof"? Most people forget that the HUP is a CONSEQUENCE, not the origin, of quantum mechanics! To find a problem in the HUP is to find a problem in QM. And unless people also forgot about the centenial year of QM in 1999, let me remind you that there was an almost universal acclaimed by physicists that QM is THE most successful theory SO FAR in the history of human civilization! This means that if you think QM does not have "solid proofs", then other parts of physics suffer from even a worse level of lacking of solid proofs! Please keep in mind that NO part of physics is considered to be accepted and valid until there are sufficient experimental/empirical agreement! In fact, many theories and ideas originally came out of unexpected experimental observation in the first place! There is no lacking of "solid proofs" for QM, and the HUP. Why this is even brought up here, I have no idea. Zz. 11. Jun 14, 2005 ### matness "i" was thinking these as you said , until i see the word 'simultaneous' for measurements in the defn of HUP. if we say nearly simultaneous then i think there will be no problem(i hope so...) Also there are different explanations for HUP, and maybe this the problem about understanding it. 
at first i was thinking i get it , but it didnt take a long time for me to confuse (because i am a beginner only) Zz 's article is very helpful but if anyone can send a sketch of proof for HUP it will be more clear thanks n 12. Jun 14, 2005 ### ZapperZ Staff Emeritus I don't understand. You want a "sketch of proof" for the HUP? What is this? I think every student has either done, or seen the diffraction from a single slit. To me, this is a VERY clear example of the HUP! It just happens that we typically use wave description of light to account for such effects. But with the photon picture, the identical diffraction pattern can be directly obtained and the spreading is a direct consequence of the HUP! More? The deBoer effect that is very pronounced in noble gasses is a direct consequence of the HUP. This leads to a correction to the internal energy (and thus, the specific heat capacity) of the gasses at very low temperatures. Only via taking into account such corrections can one obtain the experimental values! But I think people pay waaaaay too much attention at disecting the consequences and forgetting the principles that CAUSE such consequences. The fact that this came out of "First Quantization" principle of QM that is based on [A,B] operations of two non-commuting observables is less understood by many who do not understand the formalism of QM. This is a crucial part of elementary QM with which a whole slew of consequences are built upon! Zz. 13. Jun 14, 2005 ### matness what i wonder is the mathematical part : [A,B]= C --> ΔA * ΔB >= |<C>|/2 is it just a thm about standart deviation? Last edited: Jun 14, 2005 14. Jun 15, 2005 ### seratend In QM, we may define the position of a particle. We may also define the momentum of a particle. However momentum is not the velocity of the particle. Many people in this forum always mix the momentum with the velocity (due to their equality for average values: i.e. Erhenfest Theorem) and tend to make incorrect deductions. QM tells one thing: we cannot associate a classical path to a particle, hence we cannot define a couple (position, velocity) to the particle. The HUP property applied to the (position, momentum) observables just highligh this fact: we cannot find a particle where both position and momentum have "defined" values equal to their mean values (i.e. through the Erhenfest Theorem, they have a classical path if is the case). Seratend. 15. Jun 15, 2005 ### seratend Yes. Seratend. 16. Jul 2, 2005 ### jackle What sort of technology do we have to achieve this? 17. Jul 2, 2005 ### quetzalcoatl9 my (rather naive) understanding of it is that in a classical system: $$xp - px = 0$$ but accord to HUP: $$xp - px \neq 0$$ so that measuring the position and then measuring the momentum, is not the same as measuring the momentum and then measuring the position. infact, they will always differ by $$\frac{ih}{2\pi}$$ position and momentum are not seen as real values, but as non-commutative operators. you can derive the schrodinger wave equation in a straightforward manner from this. (this is one interpretation). 18. Jul 3, 2005 ### Kruger No, that's not correct. The HUP doesn't says something about only one simultaneoulsy measurment. It says something about a serie of simultaneously measurements (always the same conditions). You see? If you make 1000 simultaneoulsy measurments (position and momentum) and you measure the position at each experiment exactly then you will get at each measurement of momentum a completely different value. 19. 
Jul 4, 2005 ### Igor_S Umm, how can momentum be equal to velocity ? Ehrenfest theorem is about equality of average QM momentum, which is defined as $$-i \hbar \vec \nabla$$ and classical momentum $$m \vec v$$. Or generally it shows that average values of QM operators are equal to corresponding quantities in classical mechanics (I guess that's what you had in mind). --- quetzalcoatl9, HUP actually does say $$\left< (\Delta p_x)^2 \right> \left< (\Delta x)^2 \right> \neq 0$$. What you wrote, are commutation relations for operators, not a HUP (however, it's used in derivation of HUP). And you cannot get deltas by taking ONE particle. Let's say you have an instrument which determines position and momenutum up to 10 decimal places [in some units]. And let's say measurement of momentum of electron always gives one value, eg. 1.0000000001 [in some units]. Then you measure it's position 1st time and get let's say 0.0000000044 [some units]. A set of repeated measurement of position (with momentum set to 1.0000000001) will get you just random results, like 50.3243243212, 0, 13.1313131313, -400000.0000000001, etc. (but each of these measurements will have high precision, and that's the one that depends on the instrument itself!). However, calulating the average of the square of deltas of random numbers like that (you see, I HAVE to have more than one measurement to do average!), gives us a big number. This number appears in HUP. When I let particle momentums have some distribution of let's say between 1.0 and 1.1 , the numbers I get for position will not be so random - they will have a peak value around some number. If I don't control momentum at all, but let particles pass a very small hole (that way I'm controling position) and measure momentum afterwards, I will get random results. Hope this helps! 20. Jul 4, 2005 ### seratend Ok, let's explain the "due to their equality for average values" in my previous post. <P>=m.d<X>/dt= m<V> if no em field (case H=p^2/2m+V(q)). => implictly assuming m=1 units, we have <P>=<V> QED. (I thought it was clear enough, but your post showed I was wrong with this assumption, now I hope it is clearer). However, for a given relation V=dX/dt on operators (e.g. V=P/m), we have the eigenvalue relation v=dx/dt iff [V,X]=0 (if the operators are sufficently "gentle"). If V and X have not the same eigenbasis (in other words they do not commute: [V,X]=/=0) => the relation v=dx/dt is no more valid for the eigenvalues => we cannot associate a classical path to a particle. HUP just reflects this fundamental property of operators. Seratend.
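The operator inequality quoted earlier in the thread ($[A,B]=C$ implies $\Delta A\,\Delta B \geq |\langle C\rangle|/2$) can be checked numerically. A minimal sketch (my own illustration, not from the thread): truncated harmonic-oscillator ladder operators in natural units ($\hbar = m = \omega = 1$), where the ground state saturates $\Delta x\,\Delta p = 1/2$.

```python
import numpy as np

N = 30                                          # truncation of the oscillator basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                      # position operator (hbar = m = omega = 1)
p = 1j * (a.T - a) / np.sqrt(2)                 # momentum operator

psi = np.zeros(N)
psi[0] = 1.0                                    # ground state |0>

def expect(op):
    return np.real_if_close(psi.conj() @ op @ psi)

dx = np.sqrt(expect(x @ x) - expect(x) ** 2)
dp = np.sqrt(expect(p @ p) - expect(p) ** 2)
comm = x @ p - p @ x                            # equals i*I except in the truncated corner
bound = 0.5 * abs(expect(comm))

print(dx * dp, bound)                           # 0.5 and 0.5: the ground state saturates the bound
```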
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8987690806388855, "perplexity": 886.6994638110905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608726.4/warc/CC-MAIN-20170527001952-20170527021952-00110.warc.gz"}
http://math.andrej.com/2007/
Mathematics and Computation

A blog about mathematics for computers. Posts in the year 2007.

Seemingly impossible functional programs

Andrej has invited me to write about certain surprising functional programs. The first program, due to Ulrich Berger (1990), performs exhaustive search over the “Cantor space” of infinite sequences of binary digits. I have included references at the end. A weak form of exhaustive search amounts to checking whether or not a total predicate holds for all elements of the Cantor space. Thus, this amounts to universal quantification over the Cantor space. Can this possibly be done algorithmically, in finite time?

The Role of the Interval Domain in Modern Exact Real Arithmetic

With Iztok Kavkler. Abstract: The interval domain was proposed by Dana Scott as a domain-theoretic model for real numbers. It is a successful theoretical idea which also inspired a number of computational models for real numbers. However, current state-of-the-art implementations of real numbers, e.g., Mueller’s iRRAM and Lambov’s RealLib, do not seem to be based on the interval domain. In fact, their authors have observed that domain-theoretic concepts such as monotonicity of functions hinder efficiency of computation. I will review the data structures and algorithms that are used in modern implementations of exact real arithmetic. They provide important insights, but some questions remain about what theoretical models support them, and how we can show them to be correct. It turns out that the correctness is not always clear, and that the good old interval domain still has a few tricks to offer.

Synthetic Computability (MFPS XXIII Tutorial)

A tutorial presented at the Mathematical Foundations of Programming Semantics XXIII Tutorial Day.

Metric Spaces in Synthetic Topology

With Davorin Lešnik. Abstract: We investigate the relationship between constructive theory of metric spaces and synthetic topology. Connections between these are established by requiring a relationship to exist between the intrinsic and the metric topology of a space. We propose a non-classical axiom which has several desirable consequences, e.g., that all maps between separable metric spaces are continuous in the sense of metrics, and that, up to topological equivalence, a set can be equipped with at most one metric which makes it complete and separable.

Presented at: 3rd Workshop on Formal Topology

Implementing real numbers with RZ

With Iztok Kavkler. Abstract: RZ is a tool which translates axiomatizations of mathematical structures to program specifications using the realizability interpretation of logic. This helps programmers correctly implement data structures for computable mathematics. RZ does not prescribe a particular method of implementation, but allows programmers to write efficient code by hand, or to extract trusted code from formal proofs, if they so desire. We used this methodology to axiomatize real numbers and implemented the specification computed by RZ. The axiomatization is the standard domain-theoretic construction of reals as the maximal elements of the interval domain, while the implementation closely follows current state-of-the-art implementations of exact real arithmetic. Our results shows not only that the theory and practice of computable mathematics can coexist, but also that they work together harmoniously.

Presented at Computability and Complexity in Analysis 2007.
On a proof of Cantor’s theorem

The famous theorem by Cantor states that the cardinality of a powerset $P(A)$ is larger than the cardinality of $A$. There are several equivalent formulations, and the one I want to consider is

Theorem (Cantor): There is no onto map $A \to P(A)$.

In this post I would like to analyze the usual proof of Cantor’s theorem and present an insightful reformulation of it which has applications outside set theory.

RZ: a tool for bringing constructive and computable mathematics closer to programming practice

With Chris Stone. Abstract: Realizability theory is not only a fundamental tool in logic and computability, but also has direct application to the design and implementation of programs: it can produce interfaces for the data structure corresponding to a mathematical theory. Our tool, called RZ, serves as a bridge between the worlds of constructive mathematics and programming. By using the realizability interpretation of constructive mathematics, RZ translates specifications in constructive logic into annotated interface code in Objective Caml. The system supports a rich input language allowing descriptions of complex mathematical structures. RZ does not extract code from proofs, but allows any implementation method, from handwritten code to code extracted from proofs by other tools.

Presented at Computability in Europe 2007.
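Going back to the Cantor's theorem post above: the diagonal construction at the heart of the usual proof can be brute-force checked on a small finite set. A toy illustration of my own, not from the blog:

```python
from itertools import combinations, product

A = [0, 1, 2]
power_set = [frozenset(s) for r in range(len(A) + 1) for s in combinations(A, r)]

# Every function f : A -> P(A) is a choice of one subset per element of A.
for choice in product(power_set, repeat=len(A)):
    f = dict(zip(A, choice))
    D = frozenset(a for a in A if a not in f[a])   # the diagonal set
    assert D not in f.values()                     # D never lies in the image of f

print("checked", len(power_set) ** len(A), "functions: the diagonal set always escapes the image")
```

The assert can never fire: if $D = f(a_0)$ for some $a_0$, then $a_0 \in D$ if and only if $a_0 \notin f(a_0) = D$, a contradiction, which is exactly the usual proof.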
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462489247322083, "perplexity": 869.0268806153122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00435.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/1998_AHSME_Problems/Problem_28
# 1998 AHSME Problems/Problem 28 ## Problem In triangle , angle is a right angle and . Point is located on so that angle is twice angle . If , then , where and are relatively prime positive integers. Find . ## Solution 1 Let , so and . Then, it is given that and Now, through the use of trigonometric identities, . Solving yields that . Using the tangent addition identity, we find that , and and . (This also may have been done on a calculator by finding directly) ## Solution 2 By the application of ratio lemma for , we get , where we let . We already know hence the rest is easy ## Solution 3 Let and . By the Pythagorean Theorem, . Let point be on segment such that bisects . Thus, angles , , and are congruent. Applying the angle bisector theorem on , we get that and . Pythagorean Theorem gives . Let . By the Pythagorean Theorem, . Applying the angle bisector theorem again on triangle , we have The right side simplifies to. Cross multiplying, squaring, and simplifying, we get a quadratic: Solving this quadratic and taking the positive root gives Finally, taking the desired ratio and canceling the roots gives . The answer is .
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974967241287231, "perplexity": 723.6625084688535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00491.warc.gz"}
http://math.stackexchange.com/questions/61087/how-would-one-go-about-proving-that-the-rationals-are-not-the-countable-intersec
# How would one go about proving that the rationals are not the countable intersection of open sets?

I'm trying to prove that the rationals are not the countable intersection of open sets, but I still can't understand why $$\bigcap_{n \in \mathbf{N}} \left\{\left(q - \frac 1n, q + \frac 1n\right) : q \in \mathbf{Q}\right\}$$ isn't a counter-example. Any ideas? Thanks!

- Do you mean to write $\displaystyle\bigcap_{n \in \mathbf{N}} \displaystyle\bigcup_{q\in \mathbb{Q}} (q - 1/n, q + 1/n)$ ? –  jspecter Sep 1 '11 at 1:31
- If so then the intersection is equal to $\mathbb{R}.$ –  jspecter Sep 1 '11 at 1:34
- If not you are taking the intersection of a bunch of pairwise disjoint sets of open subsets of $\mathbb{R}.$ –  jspecter Sep 1 '11 at 1:35

If by $\{ (q-1/n, q+1/n): q\in \mathbf{Q}\}$ you mean $$\bigcup_{q\in\mathbf{Q}} (q-1/n, q+1/n)$$ you will find that for each fixed $n$, that set is equal to $\mathbf{R}$, independently of $n$. So the intersection you wrote down is equal to $\mathbf{R}$.

The usual proof that $\mathbf{Q}$ is not a countable intersection of open sets uses the Baire Category Theorem. The irrational numbers can be written as a countable intersection of open, dense subsets: $$\mathbf{R}\setminus \mathbf{Q} = \bigcap_{q\in \mathbf{Q}} \mathbf{R}\setminus\{q\}$$ Since $\mathbf{Q}$ is dense, any open set containing it would necessarily be an open dense subset of $\mathbf{R}$. So if $\mathbf{Q}$ could be written as a countable intersection $\cap_{n\in \mathbf{N}}A_n$ with each $A_n$ open, $\mathbf{Q}$ would be a countable intersection of open, dense subsets of $\mathbf{R}$. But then we would have $$\emptyset = (\mathbf{R}\setminus\mathbf{Q}) \cap \mathbf{Q}$$ is a countable intersection of open dense subsets, which contradicts the Baire Category Theorem. -

- What about $\bigcap_{n=1}^{\infty} \bigcup_{k=1}^{\infty} \left( q_k - \frac1{2^{k+n}},q_k + \frac1{2^{k+n}} \right)$? For each fixed $n$, the union is not $\mathbb{R}$ since the length of each union is $\frac1{2^{n-1}}$. –  Adhvaitha Sep 1 '11 at 1:57
- Very concise. Thanks! –  Isaac Solomon Sep 1 '11 at 1:58
- @Adhvaitha: see the construction given by Robert Israel. –  Willie Wong Sep 1 '11 at 13:29
- This generalizes: if $(X,d)$ is a complete metric space with no isolated points and if $D$ is a dense countable subset, then $D$ is not a $G_\delta$. –  Pedro Tamaroff May 31 '14 at 23:23

The right way to do this is via the Baire Category Theorem, but it may be helpful to also give a more-or-less explicit construction. Let $U_n, \ n=1,2,\ldots$ be a sequence of open sets, each containing the rationals $\mathbb Q$. I will find a member of $\bigcap_{n=1}^\infty U_n$ that is not in $\mathbb Q$. Let $r_n, \ n=1,2,\ldots$ be an enumeration of the rationals. I will construct two sequences $a_n$ and $b_n, \ n= 1,2,\ldots$ such that $a_n < a_{n+1} < b_{n+1} < b_n$, $(a_n, b_n) \subset U_n$, and $r_n \notin (a_n, b_n)$. Namely, given $a_n$ and $b_n$, we can find a rational $c \ne r_{n+1}$ in $(a_n, b_n)$ and take $a_{n+1} = c - \delta$ and $b_{n+1} = c + \delta$ where $\delta>0$ is small enough that $(c-\delta, c+\delta) \subset U_{n+1}$, $c - \delta > a_n$, $c + \delta < b_n$ and $\delta < |r_{n+1} - c|$. Now the sequence $a_n$ is bounded above and increasing, so it has a limit $L$. By construction, $L \in (a_n, b_n) \subset U_n$ for each $n$, i.e. $L \in \bigcap_{n=1}^\infty U_n$. Moreover, $L \ne r_n$ for all $n$, so $L \notin \mathbb Q$, as required. - Well done!
I think for beginners, an explicit construction (assuming it's not too complicated and ugly) is much more informative than the big machinery of modern analysis. –  Mathemagician1234 Sep 1 '11 at 4:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.973176121711731, "perplexity": 138.21767296593154}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064865.49/warc/CC-MAIN-20150827025424-00122-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/12750-rational-expressions.html
# Math Help - Rational Expressions

1. ## Rational Expressions

What would be the common denominator for these three denominators: 2x, 4, 2x+3? Thanks so much... would it just be to multiply all three by each other?

2. Originally Posted by Natalie
What would be the common denominator for these three denominators 2x, 4, 2x+3? Thanks so much... would it just be to multiply all three by each other?

Multiplying them by each other would be one way to do it, but it's a slight overkill. 4x(2x + 3) would do it, as opposed to the 8x(2x + 3) you would get if you multiply all three. It's no big deal though; if you use 8x(2x + 3), the extra 2 would eventually cancel out, and maybe more.

3. Oooh great, thanks so much, that helps. So are there any faster ways of simplifying this?

$\frac{4x}{(x+5)(x-1)} - \frac{5}{4}$

Thanks again.

4. Originally Posted by Natalie
So are there any faster ways of simplifying this? $\frac{4x}{(x+5)(x-1)} - \frac{5}{4}$

$\frac{4x}{(x+5)(x-1)} - \frac{5}{4} = \frac{16x - 5(x+5)(x-1)}{4(x+5)(x-1)} = \frac{16x - 5(x^2+4x-5)}{4(x+5)(x-1)} = \frac{16x - (5x^2+20x-25)}{4(x+5)(x-1)} = \frac{-5x^2 - 4x + 25}{4(x+5)(x-1)}$
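A computer algebra check of the combined fraction (my own verification, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
expr = 4*x / ((x + 5)*(x - 1)) - sp.Rational(5, 4)

print(sp.cancel(expr))               # (-5*x**2 - 4*x + 25)/(4*x**2 + 16*x - 20)

target = (-5*x**2 - 4*x + 25) / (4*(x + 5)*(x - 1))
print(sp.simplify(expr - target))    # 0, confirming the combined form worked out above
```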
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8900083303451538, "perplexity": 3023.964119321574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00065-ip-10-33-131-23.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/69110/intuition-behind-complex-line-integration
# Intuition behind complex line integration The intuition behind the line integral of a function $f$ along a path $\gamma(t)$ on $\mathbb{R}^2$ is clear: it's the area of the surface given by $(t, \gamma(t), f(\gamma(t))$. I was wondering if there's a good way to think about complex line integration as well, and why one might expect such nice results like the Cauchy integral theorem. - 1. No, the line integral is not equal to the area of that surface. 2. The Cauchy integral theorem comes about because of special properties of complex-differentiable functions, not because of how complex line integration works. –  Zhen Lin Oct 1 '11 at 21:52
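One way to build numerical intuition for the Cauchy integral theorem is to parametrize a closed contour and integrate directly. A small sketch of my own (not from the question or the comment): for a function analytic inside the unit circle the integral vanishes, while for $1/z$ it gives $2\pi i$.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 2001)
z = np.exp(1j * t)                   # the unit circle
dz = 1j * np.exp(1j * t)             # dz/dt along the contour
h = t[1] - t[0]

def contour_integral(f):
    # left Riemann sum over one full period; spectrally accurate for smooth periodic integrands
    return np.sum(f(z[:-1]) * dz[:-1]) * h

print(contour_integral(np.exp))          # ~0: exp is analytic inside the contour (Cauchy's theorem)
print(contour_integral(lambda w: 1/w))   # ~2*pi*1j: the pole of 1/z at the origin contributes
```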
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859898686408997, "perplexity": 169.65918251190794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927863.72/warc/CC-MAIN-20150521113207-00161-ip-10-180-206-219.ec2.internal.warc.gz"}
https://tsourakakis.com/2016/06/06/privacy-and-1d-histograms/
# Privacy and 1d Histograms

Suppose there is an underlying 1-dimensional histogram stored in the cloud. As a concrete example, consider the distribution of bank deposits (the x-axis is the amount of dollars, and the y-axis is the count of accounts). For simplicity let's assume that all the amounts of money deposited are integers within the range $\{1,\ldots,T\}$. The histogram is queried by the bank in the following way: the bank sends a range $(t_i,t_j), t_i\leq t_j$ and the cloud returns the count of accounts with money between $t_i$ and $t_j$. Let's further assume that the range is selected uar, namely the bank selects the pair of indices $i,j \in [T]$ uniformly at random. How many queries are needed to reconstruct the histogram? This question is interesting because you can imagine that an eavesdropper can see both the query and the answer returned by the cloud and wishes to infer the histogram. By Chernoff, $O(T^2\log{T})$ queries suffice with high probability. However, $O(T\log{T})$ queries suffice to reconstruct the histogram with high probability too. To see why, consider a graph on the prefix sums $S_0=0, S_1,\ldots,S_T$. Recall that the $i$-th prefix sum $S_i = \sum_{j \leq i} y_j$ is the number of bank accounts with at most $i$ dollars; a query $(t_i,t_j)$ reveals $S_{t_j}-S_{t_i-1}$, i.e., an edge between two of these nodes. If we throw, say, $2T \log{T}$ edges uar, by Erdős–Rényi the graph will be connected whp. This is enough to reconstruct the histogram.
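A quick simulation of the prefix-sum argument (my own sketch; the query count and variable names are illustrative): treat the prefix sums as graph nodes, add one edge per random range query, and recover every prefix sum by a BFS from the known anchor $S_0 = 0$ whenever the graph is connected.

```python
import random
from collections import defaultdict, deque

T = 50
hist = [random.randint(0, 10) for _ in range(T)]        # true counts for values 1..T
prefix = [0]
for c in hist:
    prefix.append(prefix[-1] + c)                       # prefix[i] = # accounts with value <= i

def random_query():
    i, j = sorted(random.sample(range(1, T + 1), 2))    # a uniformly random range (i, j), i < j
    return i, j, prefix[j] - prefix[i - 1]              # what the eavesdropper observes

m = 2 * T * T.bit_length()                              # ~ 2 T log T observed queries
adj = defaultdict(list)
for _ in range(m):
    i, j, ans = random_query()
    adj[i - 1].append((j, ans))                         # edge between prefix indices i-1 and j
    adj[j].append((i - 1, -ans))

# BFS from node 0 (S_0 = 0 is known); if the graph is connected every prefix sum is recovered.
known, dq = {0: 0}, deque([0])
while dq:
    u = dq.popleft()
    for v, d in adj[u]:
        if v not in known:
            known[v] = known[u] + d
            dq.append(v)

recovered = all(known.get(i) == prefix[i] for i in range(T + 1))
print("all prefix sums (hence the histogram) recovered:", recovered)   # almost surely True
```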
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 13, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8554737567901611, "perplexity": 496.9155096432316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650993.91/warc/CC-MAIN-20180324190917-20180324210917-00492.warc.gz"}
http://mathhelpforum.com/calculus/222188-application-integrals.html
1. ## application of integrals

Let f(x) = sin(x)/x for 0 < x ≤ π, and f(x) = 1 when x = 0. Show that x f(x) = sin(x) for 0 ≤ x ≤ π.

I multiplied sin(x)/x by x and got sin(x) for 0 < x ≤ π, and multiplied 1 · x = x, which gives me 0 when x = 0. Is that right? I'm unsure whether the question also wants it to be continuous. Also, the second question says to find the volume of the solid generated by revolving f(x) about the y-axis. How am I supposed to calculate that if f(x) is defined in two parts?

2. ## Re: application of integrals

Originally Posted by ubhutto
I multiplied sin(x)/x by x and got sin(x) for 0 < x ≤ π

You multiplied what by what?

Originally Posted by ubhutto
and multiplied 1 · x = x, which gives me 0 when x = 0

So what? Why are you concerned with x = 0?

Originally Posted by ubhutto
Is that right? I'm unsure whether the question also wants it to be continuous. Also, the second question says to find the volume of the solid generated by revolving f(x) about the y-axis. How am I supposed to calculate that if f(x) is defined in two parts?

If you are taking Calculus, you certainly should know enough algebra to know that if you multiply both sides of f(x) = sin(x)/x by x you will get x f(x) = sin(x). Why do you have any question about that? Now, what do you mean by "f(x) is divided into two parts"? Normally, I prefer the "disk method", but here it is "x" that would be the radius and it will be difficult to solve for x as a function of y (the inverse function to f), so I think I would use the "shell method". That is, we imagine taking a thin strip (of thickness "dx") at a given x value, between 0 and $\pi$ (where f(x) = sin(x)/x = 0), and rotating that strip around the y-axis. It describes a cylinder of radius x and height y = sin(x)/x, so a cylinder of lateral area $2\pi x (\sin(x)/x)= 2\pi \sin(x)$. That gives a thin shell of volume $2\pi \sin(x)\, dx$. Integrate that from x = 0 to $\pi$.
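A quick numerical check of the shell-method volume (my own verification; SciPy is used only as a generic integrator). The integral $\int_0^\pi 2\pi x\,f(x)\,dx = \int_0^\pi 2\pi \sin(x)\,dx$ should come out to $4\pi$.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi)    # np.sinc(t) = sin(pi t)/(pi t), so this is sin(x)/x with f(0) = 1

V, err = quad(lambda x: 2 * np.pi * x * f(x), 0, np.pi)
print(V, 4 * np.pi)                 # both ~12.566
```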
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502678513526917, "perplexity": 482.81989646432214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00123-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/qm-exercise-problem.521589/
# QM exercise problem

1. Aug 14, 2011

### saim_

A beam of neutrons with energy E runs horizontally into a crystal. The crystal transmits half the neutrons and deflects the other half vertically upwards. After climbing to height H these neutrons are deflected through 90 degrees onto a horizontal path parallel to the originally transmitted beam. The two horizontal beams now move a distance $L$ down the laboratory, one distance $H$ above the other. After going distance $L$, the lower beam is deflected vertically upwards and is finally deflected into the path of the upper beam such that the two beams are co-spatial as they enter the detector. Given that particles in both the lower and upper beams are in states of well-defined momentum, show that the wavenumbers $k$, $k^{′}$ of the lower and upper beams are related by

${\large k \simeq k' \left( 1- \frac{m_{{\small N}}gH}{2E} \right) }$

Attempted Solution: Let E be the kinetic energy of the neutrons in the lower beam.

${\large k = \frac{\sqrt{2mE}}{\hbar}}$

${\large k' \simeq \frac{\sqrt{2m(E - mgH)}}{\hbar}}$

$\Rightarrow {\large \frac{k}{k'} \simeq \sqrt{\frac{2mE}{2m(E-mgH)}} = \sqrt{\frac{E}{E-mgH}} }$

I have no idea how to go beyond this.

2. Aug 14, 2011

### xts

Is this yet another hopelessly unrealistic school exercise? Could you tell me what crystal has the magical power of reflecting half of the incoming neutrons in a perfectly known direction perpendicular to the beam? And could you tell me what machine you have that can deflect all the neutrons by 90°?

3. Aug 14, 2011

### saim_

It's a problem from a textbook, "The Physics of Quantum Mechanics" by James Binney and David Skinner. I think we are supposed to make some careful assumptions and approximations in arriving at the result instead of actually going into the details of a machine that can do the job.

4. Aug 14, 2011

### xts

I don't know that textbook (and it makes me happy!). I just hate such exercises. A primary-school exercise about two 1 kg balls elastically colliding at 1000 m/s. That's why kids hate physics at school. You cannot abstract away the mechanisms. Especially in QM: are the "crystals" and "deflectors" point-like, or do they have spatial extent? It changes the analysis dramatically. Anyway, the exercise seems to be wrong even under such "primary school simplifications", as there is no reason why the two beams should have different k's. Unless, of course, "the crystal" performs some magic, lifting particles to the second floor at no cost.

5. Aug 14, 2011

### saim_

6. Aug 15, 2011

### mathfeel

I suppose the only point here is that the upper beam has smaller momentum, to be computed using the classical formula, and hence a smaller k as well. I think one also assumes a nonrelativistic neutron.

7. Aug 15, 2011

### xts

But how can they have different momenta when the beams finally reach the same place, just by different paths in the gravitational potential, without being subject to other forces? The only explanation for that is magic acting in the crystal.

8. Aug 15, 2011

### vela

Staff Emeritus

Hint: $$\sqrt{\frac{E}{E-mgH}} = \sqrt{\frac{1}{1-mgH/E}} = \left(1-\frac{mgH}{E}\right)^{-1/2}$$ I think there's a sign error in the result you're asked to find.

9. Aug 15, 2011

### saim_

Those assumptions are considered in my attempted solution above. What should I do next? The final result in the book is neither inverted, nor in a square root like my result (or as you suggest). Also, the final result in the book has a "2" in the denominator, in front of E (as given in my first post), but there isn't one in my solution. I don't think all of these are just mistakes in the book. I haven't encountered a single mistake in the book before this one, so it seems highly unlikely that it would have so many typos or conceptual errors piled up in a single question all of a sudden.

10. Aug 15, 2011

### vela

Staff Emeritus

My post was a hint, not the solution. You still have work to do. Clearly k' < k since E - mgH < E, yet k = k'[1 - mgH/(2E)] implies k' > k.

11. Aug 16, 2011

### saim_

Oh, I missed that. Thanks. However, I'm still inclined to believe the book; that k must be less than k' then :D But assuming there is a sign error and we have a plus sign between the two terms, then what? There is still no square root in the final term, and there is a "2" in the denominator of one of the terms. Do you see a way we can get either of those in place? If it helps, the book states that the problem is a variant of the setup in "R. Colella et al., 1975, Phys. Rev. Lett., 34, 1472", though I didn't get much help skimming through the paper. Here is the link: http://www.atomwave.org/rmparticle/ao%20refs/aifm%20refs%20sorted%20by%20topic/inertial%20sensing%20refs/gravity/COW75%20neutron%20gravity.pdf [Broken]

Last edited by a moderator: May 5, 2017

12. Aug 16, 2011

### mathfeel

With a gravitational potential, $k_y$ is no longer a good quantum number. So the only reasonable $k$ to compute at all is $k_x$.

13. Aug 16, 2011

### vela

Staff Emeritus

Use the binomial expansion.

14. Aug 16, 2011

### saim_

Oh! Nice... Now I'm pretty sure that minus sign is a mistake :D Thanks a lot.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9027487635612488, "perplexity": 1130.8277617952738}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687582.7/warc/CC-MAIN-20170920232245-20170921012245-00410.warc.gz"}
https://www.physicsforums.com/threads/parabolic-curve-fit-to-x-y-data.405443/
# Parabolic curve fit to x,y data

1. May 24, 2010

### jkristoff

1. The problem statement, all variables and given/known data

Given a series of measured data points (>1000) x,y, find the best-fit parabolic curve where the constant A (below) is given.

2. Relevant equations

The general 2nd-degree equation describes the conic sections: $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$. For a parabola $B^2=4AC$, AND in this case B ≠ 0 (i.e. the parabola is rotated).

3. The attempt at a solution

I've looked at orthogonal distance regression algorithms like ODRPACK, but I don't know where to begin with the inputs. What makes this more challenging than simply doing a trendline in Excel is that the parabola is rotated (B ≠ 0) and I need to be able to constrain the first coefficient A to a known value. To minimize the orthogonal distances while knowing A, the best-fit parabola needs to be rotated and translated.

JK

2. May 26, 2010

### hotvette

Intriguing problem, I've been pondering it for a couple of days. Sorry to say I don't necessarily have answers, but here are some thoughts that might stimulate discussion.

- Not sure why orthogonal distance regression is needed. Ordinary least squares should be able to be used on a problem like this. Nothing wrong with using ODR, but the motivation to do so isn't obvious.
- I tried ODRPACK once but couldn't figure it out either. Didn't try very hard, though. Solving problems from first principles is much more fun and educational (if you have the interest and time) than using canned programs.
- This could be viewed as a linear least squares problem with a single nonlinear constraint (i.e. $B^2 = 4AC$).
- Developing the governing equations for least squares with equality constraints is actually quite straightforward (even for parametric equations).
- One possible approach (since A is fixed) is to say $Bxy+Cy^2+Dx+Ey+F = -Ax^2$ and just treat the set of linear equations as a black-box least squares problem (i.e. you have a known matrix A with more rows than columns, a known vector b, and an unknown vector x) and solve for the coefficients, though it isn't clear to me what that solution actually represents (i.e. what is really being minimized). Seems to me the constraint could also be included.
- To me the most fundamental approach is to represent the problem parametrically (i.e. x = f(t) and y = g(t)), but it isn't clear (to me) how to represent a general conic parametrically and how the coefficients of the parametric equations relate to A, B, C, D, E, F.
- Another approach that might bear fruit (you alluded to it) is to take a normal parabola and apply a rotation and translation. You'd want to be careful to understand what the resulting least squares problem really represents.
- Might be helpful to know what kind of class this is for and how much effort is expected. Is this supposed to be a 1-hour homework problem or a two-week project?

-------------- EDIT ---------------------------------------------------------------

Did a little research. http://research.microsoft.com/en-us/um/people/awf/ellipse/ellipse-pami.pdf

Turns out that given a general conic $f = Ax^2+Bxy+Cy^2+Dx+Ey+F = 0$, minimizing $\sum f^2$ is considered minimizing the squared 'algebraic' distances from the points to the curve. By including the equality constraint $B^2-4AC = 0$ via a Lagrange multiplier, the problem can be solved in a straightforward fashion as a nonlinear least squares problem subject to a single equality constraint.
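A minimal sketch of that constrained 'algebraic distance' fit, for anyone who prefers Python/SciPy to Excel: it minimizes $\sum f^2$ with A held fixed and enforces $B^2-4AC=0$ as a nonlinear equality constraint. The function name, the starting guess, and the choice of solver are my own illustrative assumptions, not something taken from the thread or from the linked paper.

```python
import numpy as np
from scipy.optimize import minimize

def fit_parabola_conic(x, y, A=1.0):
    """Fit B, C, D, E, F of A x^2 + B xy + C y^2 + D x + E y + F = 0 with A fixed,
    minimizing the sum of squared 'algebraic' distances subject to B^2 - 4AC = 0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    def objective(p):
        B, C, D, E, F = p
        f = A*x**2 + B*x*y + C*y**2 + D*x + E*y + F
        return np.sum(f**2)

    # parabola condition as an equality constraint
    constraint = {"type": "eq", "fun": lambda p: p[0]**2 - 4.0*A*p[1]}

    # crude starting guess from the unconstrained linear least squares fit
    M = np.column_stack([x*y, y**2, x, y, np.ones_like(x)])
    p0, *_ = np.linalg.lstsq(M, -A*x**2, rcond=None)

    res = minimize(objective, p0, constraints=[constraint])
    return res.x  # B, C, D, E, F
```

With A known, the optimizer returns the remaining five coefficients B, C, D, E, F, which is exactly the set of unknowns discussed in the thread.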
I made up a problem by rotating a parabola (with 1000 pts), added random noise to the points, set A=1, and solved for B, C, D, E, and F. Convergence was achieved in maybe a dozen iterations. All done with Excel using matrix functions (not very sexy, but it gets the job done most of the time).

Last edited: May 27, 2010

3. May 28, 2010

### hotvette

After a bit more digging, there appear to be two common approaches to least squares fitting of general conics:

1. Minimizing the sum of 'algebraic' distances as described in the edit of my previous post (though I still don't understand what an algebraic distance really is). In this specific problem a nonlinear constraint $B^2-4AC=0$ needs to be included to guarantee that the solution is a parabola. I tried this method on a made-up problem and it worked fine.

2. Minimizing the sum of 'geometric' distances, which appear to me to be orthogonal distances. The technique is to fit an ordinary parabola to rotated (and potentially translated) data points. Orthogonal least squares makes sense because the orthogonal distances are preserved in the 'rotated' space, whereas vertical distances (i.e. ordinary least squares) in the normal space make little sense in the rotated space. The problem is nonlinear because the angle is unknown and needs to be determined. I also tried this method on a made-up problem and it worked fine (though I still don't know how to analytically determine the coefficients of the general conic from the angle and the coefficients of the normal parabola).

Though I couldn't find any references in internet searches, there is, I believe, a third valid option: minimize the distances in the normal space that are neither vertical nor orthogonal but that will result in vertical distances in the rotated space (essentially an ordinary least squares solution in the rotated space). I think this could be done by adding constraints. Food for thought.

Last edited: May 28, 2010

4. May 28, 2010

### jkristoff

Thanks for your help!!! I am trying your first-reply suggestion using matrices in Excel, but having trouble so far. How did you include the equality constraint?

Regarding approach #2 of your second post: you can rotate a normal parabola (B=0), symmetric about the y-axis and centered at the origin, using cot(2p) = (a-c)/b, where a, b and c are the coefficients of the rotated parabola (0 < p < π). Can you explain a bit further how you set up the made-up problem for the orthogonal distance approach?

Thanks, JK

5. May 28, 2010

### hotvette

The answer depends on how much you know about least squares. If you know how to derive the 'normal' equations associated with linear least squares, adding equality constraints is just an extension: Lagrange multipliers are used to alter the equation being minimized. Your case is more complicated because the constraint is nonlinear. Thus, the solution is iterative: guess initial values for the unknowns, solve a 'linearized' set of equations (including the linearized constraint equations), update the values of the unknowns, and iterate until the desired convergence is achieved. The attachments show the linear and nonlinear situations.

#### Attached Files: NLCLS.jpg and one further attachment.

Last edited: May 28, 2010

6. May 28, 2010

### hotvette

Orthogonal least squares is inherently nonlinear because the value of x at which the vector from the curve to the data point is orthogonal is unknown.
This particular problem is more complicated in that you are first rotating the data points (call them x' and y') through an unknown angle t (thus t needs to be included as an unknown). http://en.wikipedia.org/wiki/Rotation_matrix

x* = x'·cos(t) - y'·sin(t) .... data point
y* = x'·sin(t) + y'·cos(t) .... data point

In orthogonal least squares we minimize the sum of the squared distances from the data points to the curve:

min $\sum$ [ (Δy)² + (Δx)² ] = min $\sum$ [ f² + g² ]

where

f = ax² + bx + c - y*
g = x - x*

Making the substitutions, we get:

f = ax² + bx + c - x'·sin(t) - y'·cos(t)
g = x - x'·cos(t) + y'·sin(t)

We next linearize f and g (calling the linearized functions f* and g*) by taking the first two terms of the Taylor series:

$$f^* = f_0 + (\partial f / \partial a) \Delta a + (\partial f / \partial b) \Delta b + (\partial f / \partial c) \Delta c + (\partial f / \partial x) \Delta x + (\partial f / \partial t) \Delta t$$

$$g^* = g_0 + (\partial g / \partial x) \Delta x + (\partial g / \partial t) \Delta t$$

resulting in the linearized least squares problem

$$\min \text{ } F = \sum \left[ (f^*)^2 + (g^*)^2 \right]$$

To minimize F, take partial derivatives of F with respect to each variable (a, b, c, t, x_i) and set them equal to zero. The math is straightforward but tedious. If none of this or the previous post is familiar to you, I'd wonder why such a homework problem would be given. Tools are needed to solve problems.

Last edited: May 28, 2010

7. May 29, 2010

### jkristoff

Thank you very much for your help thus far. This is not to be considered a 1-hour homework problem. I'm involved in an independent study with our student group. We are trying to characterize the shapes of liquids measured in rotating containers. The x,y data are generated by a non-contact laser displacement probe. All of what you've mentioned is familiar to me, just not VERY familiar. I'm the old-timer of the group, so the math is a bit rusty at the moment (my old calculus book sits on the bedside table). I've been charged with handling the algorithm because I have the engineering education. I hope you don't mind that this is not a 1-hour homework problem; the forum guidelines state that questions regarding homework OR independent study may be posted here. For the moment, I need to catch up with you regarding your last two posts.

Thanks again, JK
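For completeness, here is a small Python sketch of the rotate-then-fit idea in its simplest form: rotate the data through a trial angle t, fit an ordinary parabola by linear least squares in that frame, and search over t for the smallest residual. This minimizes vertical distances in the rotated frame (the simpler 'third option' mentioned earlier), not the true orthogonal distances of the formulation above; the function name and the bounded angle search are my own choices, not from the thread.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_rotated_parabola(xd, yd):
    """Find the rotation angle t such that an ordinary parabola y = a x^2 + b x + c
    fitted in the rotated frame has the smallest sum of squared vertical residuals."""
    xd, yd = np.asarray(xd, dtype=float), np.asarray(yd, dtype=float)

    def sse(t):
        xr = xd*np.cos(t) - yd*np.sin(t)   # rotate the data points
        yr = xd*np.sin(t) + yd*np.cos(t)
        M = np.column_stack([xr**2, xr, np.ones_like(xr)])
        coef, *_ = np.linalg.lstsq(M, yr, rcond=None)
        return np.sum((M @ coef - yr)**2)

    best = minimize_scalar(sse, bounds=(-np.pi/2, np.pi/2), method='bounded')
    return best.x  # rotation angle; refit in that frame to recover a, b, c
```

Because the inner step is plain linear least squares, only the single angle has to be searched numerically; the general-conic coefficients can then be recovered, if needed, by rotating the fitted parabola back to the original frame.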
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015509843826294, "perplexity": 652.2546017085012}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648205.76/warc/CC-MAIN-20180323083246-20180323103246-00654.warc.gz"}
https://www.researchgate.net/figure/The-geometrical-parameters-of-the-pointed-arch_fig1_322639249
# The geometrical parameters of the pointed arch

Figure available from: Archive of Applied Mechanics.

## Source publication

The paper investigates how the shape affects the Couplet–Heyman minimum thickness of the masonry pointed arch. The minimum thickness is the structural thickness at which a vault made of rigid voussoirs is stable under self-weight. It is expressed as a function of the pointed generator curve's deviation from the semicircle. The arch is analysed in...

## Similar publications

The discrete element method (DEM) can be effectively used in investigations of the deformations and failures of jointed rock slopes. However, when to appropriately terminate the DEM iterative process is not clear. Recently, a displacement-based discrete element modeling method for jointed rock slopes was proposed to determine when the DEM iterative...

## Citations

... Moreover, Galassi et al. [12] studied the vulnerability of masonry pointed arches subjected to support displacements through a novel numerical approach that uses combinatorial laws and adopts Heyman's assumptions. Further studies on pointed arches have been summarized and presented by Lengyel [13]. ...

Pointed arches are important architectural elements of both western and eastern historical built heritage. In this paper, the effects that different geometrical (slenderness and sharpness) and mechanical (friction and cohesion) parameters have on the in-plane structural response of masonry pointed arches are investigated through the implementation of an upper bound limit analysis approach capable of representing sliding between rigid blocks. Results, in terms of collapse multipliers, are presented and quantitatively analyzed following a systematic statistical approach, whereas collapse mechanisms are qualitatively explored and three different outcomes are found: pure rotation, pure sliding and mixed collapse mechanisms. It is concluded that the capacity of the numerical approach implemented to reproduce sliding between blocks plays a major role in the better understanding of the structural response of masonry pointed arches.

... Within a previous mainstream of work by the present authors [7-15], such a classical optimization problem in the Mechanics (statics) of symmetric circular masonry arches has been revisited, and framed within the relevant, now updated, literature [16-34], by providing new, explicit analytical derivations and representations of the solution characteristics. Specifically, the classical Heyman solution [3] has been shown to constitute a sort of approximation of the true solution (here labelled as "CCR" [7]), in Heyman's assumption of uniform self-weight distribution (location of the centres of gravity along the geometrical centreline of the circular arch), while the Milankovitch solution [35-37] may as well be consistently derived, in consideration of the true self-weight distribution, though at the price of an increasing complexity in the explicit analytical handling of the governing equations (here newly resolved to a very end). ... ...
As per the treatment adopted in the paper, namely a full analytical one, the present study is framed between similar related investigations that have enquired the problem by analytical or semi-analytical approaches, among which specific important developments that shall be mentioned in sharing the vision and perspectives here pursued are those concisely outlined as follows: pioneering and fundamental Heyman contributions [1-6]; attempts by the present authors [7-15], specifically: analytical [7]; analytical-numerical, accompanied by a Discrete Element Method (DEM) investigation (Discontinuous Deformation Analysis, DDA) [8,9,15], and including reducing friction effects and resulting mixed collapse modes [10,11]; analytical-numerical, by a Complementarity Problem/Mathematical Programming formulation truly accounting for finite friction implications [12-15]; other studies on mainstream arch derivation and masonry arch mechanics after Heyman, specifically considering the issue of least thickness [16-34]; the Milankovitch formulation, as a cornerstone of thrust-line analysis [35-37]; modern and recent thrust-line-like analyses, particularly in view of form optimization [38-44]. ...

[Figure: Least-thickness symmetric circular masonry arch]

... A further wider literature framing within the vast subject of the analysis of masonry arches has already been provided in [7], and therein quoted references, with the integration of the above-quoted updates. Thus, the here-mentioned literature setting does not seek an intent of completeness, just that of focusing on self-coherent formulations that may consider further related, though different and complementary, features, such as: different geometrical shapes of the masonry arch [22,25,30-32,34,39,41]; issues of "stereotomy" (i.e., the shape of the cut of the masonry blocks; here, a continuous circular arch with just potential rupture radial joints is considered) [6,28,33]; dedicated use of "thrust-line analysis" or graphical-analytical methods; and so on, though all sharing the common characteristic of seeking and setting an analytical method of analysis, or being linked to the optimization target of acquiring the least-thickness condition. Numerical methods, which are massively employed in the analysis of masonry structures, including for masonry arches (vaults and domes), shall also be pursued and mentioned (see, for instance, about the adoption of the Discrete Element Method for the evaluation of the least-thickness condition, [8,9,15,30,31] and therein quoted references), but these are here intentionally taken out from an explicit discussion. ...

This analytical note shall provide a contribution to the understanding of general principles in the Mechanics of (symmetric circular) masonry arches. Within a mainstream of previous research work by the authors (and competent framing in the dedicated literature), devoted to investigating the classical structural optimization problem leading to the least-thickness condition under self-weight ("Couplet-Heyman problem"), and the relevant characteristics of the purely rotational five-hinge collapse mode, new and complementary information is here analytically derived.
Peculiar extremal conditions are explicitly inspected, such as those leading to the maximum intrinsic non-dimensional horizontal thrust and to the foremost wide angular inner-hinge position from the crown, both occurring for specific instances of over-complete (horseshoe) arches. The whole is obtained, and confronted, for three typical solution cases, i.e., the Heyman, "CCR" and Milankovitch instances, all together, by full closed-form explicit representations, and elucidated by relevant illustrations.

... The results obtained in Section 5.1 corroborate the qualitative findings in [30], where a 2D analytical analysis using thrust lines showed the same 7-hinge mechanism for the optimal radius-over-length encountered. In [30], for β = 0°, the optimal r/l₀ = 0.94 was obtained, corresponding to a minimum thickness equal to 3.7% of the span. In this section, the last arch of the fan-system is analysed separately, i.e., the arch on the unsupported boundary edge. ... This analysis yielded the same hinge pattern and minimum thickness as the one obtained for the three-dimensional problem. It, however, differs slightly from the values of [30], due to the different load distribution considered. The solutions of the 3D and 2D problems for β = 20° are presented in Figure 7, showing the 7-hinge pattern. ...

This paper presents a parametric stability study of groin, or cross vaults, a structural element widely used in old masonry construction, particularly in Gothic architecture. The vaults' stability is measured using the geometric safety factor (GSF), computed by evaluating the structure's minimum thickness through a thrust network analysis (TNA). This minimum thickness is obtained by formulating and solving a specific constrained nonlinear optimisation problem. The constraints of this optimisation enforce the limit analysis's admissibility criteria, and the equilibrium is calculated using independent force densities on a fixed horizontal projection of the thrust network. The parametric description of the vault's geometry is defined with respect to the radius of curvature of the vault and its springing angle. This detailed parametric study allows identifying optimal parameters which improve the vaults' stability, and a comprehensive comparison of these results was performed with known results available for two-dimensional pointed arches. Moreover, an investigation of different force flows represented by different form diagrams was performed, providing a better understanding of the vaults' structural behaviour, and possible collapse mechanisms were studied by observing the points where the thrust network touches the structural envelope in the limit states. Beyond evaluating the GSF, the groin vault's stability domain was described to give additional insights into the structural robustness. Finally, this paper shows how advances in equilibrium methods can be useful to understand and assess masonry groin vaults.

... If the no-sliding assumption is relaxed, the arch can also fail due to a mixed (with rotating and sliding joints) and a pure-sliding failure mode (see also Fig. 12).
We refer to e.g., Bagi (2014), Aita Lengyel (2018) for an extended description of non-Heymanian collapse modes. In the present paper, we only consider symmetrical failure modes (due to the symmetry of the problem). ...

Friction is much needed for the equilibrium of masonry arches, as it transfers load between the voussoirs. In this paper, applying an analytical formulation of the problem, the angle of friction as a geometric constraint on the stereotomy (bricklaying pattern) is investigated to find the possible range of minimum thickness values of circular and elliptical masonry arches under static loads, based on the lower bound theorem of limit state analysis. The Heymanian assumptions regarding material qualities are adopted; however, limited capacity in friction is accounted for. It has been shown earlier that, considering stereotomies a priori unknown, a considerably wide range of minimum thickness values is obtained for fixed loading and global geometry conditions. It is found that a stereotomy constrained by an angle of friction characteristic of masonry renders the effect of stereotomy on the minimum thickness value negligible, because the range of minimum thickness values is significantly reduced in this case. Hence, the present study ultimately justifies the intuitive assumption of radial stereotomy, widely used in the literature, whenever the safety of masonry arches is studied.

... As is well known, such arches are structures of great architectural interest, widely studied also in the recent literature on the mechanical behaviour of masonry structures. In particular, without pretending to be complete, we recall that the search for the minimum thickness of these types of arches has been tackled, through different approaches, by Heyman (1966, 1969), Sinopoli et al. (1997), and more recently, Romano and Ochsendorf (2010), Alexakis and Makris (2015), Cavalagli et al. (2016), Nikolić (2017), Lengyel (2018), Aita et al. (2019) and Cocchetti and Rizzi (2020). In the cited contributions, the stability of the arch is studied by considering an infinite or a finite value of the friction coefficient, assuming the direction of the joints (radial or perpendicular to the intrados line) as known. ...

This research is inspired by an issue strictly linked to the art of stereotomy and rarely tackled in the contributions on the statics of arches and vaults, i.e., the search for the inclination to be assigned to each joint able to ensure the respect of the equilibrium conditions when the friction between the voussoirs is absent, assuming that the intrados and extrados curves are known. After presenting some brief notes on the state of the art on this subject, both a numerical and an analytical approach, based on the maxima and minima Coulomb method revisited through a re-edition of the Durand-Claye method, is developed in order to determine the inclination of the joints as well as the minimum arch thickness compatible with equilibrium. The analysis is performed for frictionless pointed and circular arches for different values of the embrace angle (i.e., the complement of the springing angle). ...

A least-thickness self-standing evaluation of the masonry arch may be elaborated by using Discrete Element Method (DEM) quasi-static simulations of discrete voussoir arches [36][37][38]. To provide an independent numerical validation of the achieved analytical results, an available Discontinuous Deformation Analysis (DDA) tool was adopted in [8,9] to deliver the estimates of the critical thickness and the appearance of the corresponding collapse mode (notice that the five-hinge purely rotational collapse mode is assumed from scratch in the analytical analysis, while in such a case it is numerically evaluated, out of the analysis). ...

This paper re-considers a recent analysis on the so-called Couplet–Heyman problem of least-thickness circular masonry arch structural form optimization and provides complementary and novel information and perspectives, specifically in terms of the optimization problem, and its implications in the general understanding of the Mechanics (statics) of masonry arches. First, typical underlying solutions are independently re-derived, by static upper/lower horizontal thrust and kinematic work balance, stationary approaches, based on a complete analytical treatment; then, illustrated and commented. Subsequently, a separate numerical validation treatment is developed, by the deployment of an original recursive solution strategy, the adoption of a discontinuous deformation analysis simulation tool and the operation of a new self-implemented Complementarity Problem/Mathematical Programming formulation, with a full matching of the achieved results on all the arch characteristics in the critical condition of minimum thickness.

... Fraddosio et al. [23] proposed a computational equilibrium analysis approach using the optimization method and applied it to three-dimensional masonry vaults. Lengyel [24] investigated the effect of shape on the Couplet-Heyman minimum thickness of a masonry pointed arch using the optimization method, and compared it with the results of the discrete element method. Ricci et al. [25] proposed a numerical method for determining the admissible thrust curves of masonry arches under arbitrary loads. ...

In this paper a novel method for the shape optimization of tapered arches subjected to in-plane gravity (self-weight) and horizontal loading through compressive internal loading is presented. The arch is discretized into beam elements, and axial deformation is assumed to be small. The curved shape of the tapered arch is discretized into a centroidal B-spline curve with beam elements. Constraints are imposed on the allowable axial force and bending moment in the arch so that only compressive stress exists in the section. The computational cost for optimization is reduced, and the convergence property is improved, by considering the locations of the control points of the B-spline curves as design variables. The height of the section is also modeled using a B-spline function. A section update algorithm is introduced in the optimization procedure to account for the contact and separation phenomena and to further speed up the computation process. The objective function is to minimize the total strain energy of the arch under self-weight and horizontal loading. Numerical examples are presented that demonstrate the effectiveness of the proposed method. To validate the findings, the properties of the obtained optimal shapes are compared to shapes obtained by a graphic statics approach. ...
The semicircular arch with its respective types (see Figure 1A), such as the stilted arch (Figure 1B), the segmental arch (Figure 1C), or the arch which is attached to the barrel vault of a nave to reinforce it and divide it into sections, was also characteristic of Romanesque architecture [4]. However, it lost importance in the Gothic, which gave way to the three-pointed arch (Figure 1E), with its different variants, such as the acute arch (Figure 1G) or the depressed arch (Figure 1H) [5]. ...

The arches were a great advance in construction with respect to the rigid Greek linteled architecture. Their development came from the hand of the great Roman constructions, especially with the semicircular arch. In successive historical periods, different types of arches have been emerging, which in addition to their structural function were taking on aesthetic characteristics that are used today to define the architectural style. When, in the construction of an arch, the rise is less than half the springing line, the semicircular arch is no longer used and the segmental arch is used, and then on to another more efficient and aesthetic arch, the basket-handle arch. This study examines the classic geometry of the basket-handle arch, also called the three-centered arch. A solution is proposed from a constructive and aesthetic point of view, and this is approached both geometrically and analytically, where the relationship between the radius of the central arch and the radius of the lateral arch is minimized. The solution achieved allows the maximum springing line or clear span to be saved with the minimum rise that preserves the aesthetic point of view, since the horizontal thrust of an arch is greater the greater the ratio between the springing line of the arch and the rise. This solution has been programmed and the resulting software has made it possible to analyse existing arches in historic buildings or constructions to check whether their solutions were close or not from both points of view. Thus, it has been possible to verify that in most of the existing arches analyzed, the proposed solution is reached. ...

Within such a framework, classical problems such as that of finding the critical condition of least thickness for a circular masonry arch under self-weight (Couplet-Heyman problem) have found further consistent solutions by different analytical and numerical methodologies. For instance, worthwhile to be mentioned shall be the contributions (Como, 1992; Blasi and Foraboschi, 1994; Boothby, 1994; Lucchesi et al., 1997; Foce, 2007; Ochsendorf, 2002; Block et al., 2006; Ochsendorf, 2006; Romano and Ochsendorf, 2010; Gago, 2004; Gago et al., 2011; Rizzi et al., 2010; Cocchetti et al., 2012; Rizzi et al., 2012, 2014; Rizzi, 2018, 2019; Aita et al., 2012, 2019; Makris and Alexakis, 2013; Makris, 2013, 2015; Cavalagli et al., 2016; Nikolić, 2017; Angelillo, 2019; Gáspár et al., 2018; Bagi, 2014; Lengyel, 2018; Auciello, 2019; Sinopoli et al., 1997; Casapulla and Lauro, 2000; Gilbert et al., 2006; Casapulla and D'Ayala, 2001; D'Ayala and Casapulla, 2001; D'Ayala and Tomasoni, 2011; Smars, 2000, 2008, 2010; Olivito et al., 2016; Portioli et al., 2014; Cascini et al., 2016; Milani and Tralli, 2012; Sacco, 2012; Baggio and Trovalusci, 2000; Beatini et al., 2017, 2019; Angelillo et al., 2018; Tempesta and Galassi, 2019; Trentadue and Quaranta, 2013). Specifically, Como (1992), Blasi and Foraboschi (1994), Boothby (1994), Lucchesi et al. (1997), Foce (2007), Ochsendorf (2002), Block et al. (2006), Ochsendorf (2006), Romano and Ochsendorf (2010), Gago (2004), Gago et al. (2011), Rizzi et al. (2010, 2012, 2014), Cocchetti et al. (2012), Rizzi (2018, 2019), Aita et al. (2012, 2019), Makris and Alexakis (2013), Makris (2013, 2015), Cavalagli et al. (2016), Nikolić (2017), Angelillo (2019), Gáspár et al. (2018), Bagi (2014), Lengyel (2018) and Auciello (2019) concern the general analysis of masonry arches, possibly in conjunction with the determination of the critical least-thickness condition and considering different issues, such as various arch shapes (e.g., pointed, oval, elliptical, etc.) and possible variable stereotomy of the arch blocks (differing from that of classical radial joints, as here analysed). Moreover, Sinopoli et al. (1997), Casapulla and Lauro (2000), Gilbert et al. (2006), Casapulla and D'Ayala (2001), D'Ayala and Casapulla (2001), D'Ayala and Tomasoni (2011), Smars (2000, 2008, 2010) and Olivito et al. (2016) particularly deal with the issue of finite friction, and its implications in the statics of masonry arches and the possible failure modes. ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561488032341003, "perplexity": 4238.998418916029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00739.warc.gz"}
http://bsdupdates.com/error-propagation/propagate-error.php
# Propagate Error

Now a repeated run of the cart would be expected to give a result between 36.1 and 39.7 cm/s. This is the most general expression for the propagation of error from one set of variables onto another. In this case, expressions for more complicated functions can be derived by combining simpler functions. The fractional indeterminate error in Q is then 0.028 + 0.0094 = 0.037, or 3.7%. Or in matrix notation, $f \approx f^0 + Jx$, where J is the Jacobian matrix. We say that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR. We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results.

## Error Propagation Calculator

Then the displacement is: Δx = x2 - x1 = 14.4 m - 9.3 m = 5.1 m, and the error in the displacement is: (0.2² + 0.3²)^{1/2} m = 0.36 m. Summarizing: sum and difference rule. For example, the rules for errors in trigonometric functions may be derived by use of the trigonometric identities, using the approximations sin θ ≈ θ and cos θ ≈ 1, valid for small angles. If you measure the length of a pencil, the ratio will be very high.

Resistance measurement: a practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R = V/I. In matrix notation, $\Sigma^{f} = J\,\Sigma^{x}\,J^{\top}$. When two quantities are divided, the relative determinate error of the quotient is the relative determinate error of the numerator minus the relative determinate error of the denominator. We'd have achieved the elusive "true" value!

3.11 EXERCISES (3.13) Derive an expression for the fractional and absolute error in an average of n measurements of a quantity Q.

General functions: finally, we can express the uncertainty in R for general functions of one or more observables. See Ku (1966), Journal of Research of the National Bureau of Standards, for guidance on what constitutes sufficient data. This forces all terms to be positive. The exact covariance of two ratios with a pair of different poles $p_1$ and $p_2$ is similarly available.[10]
## Error Propagation Physics

It's a good idea to derive them first, even before you decide whether the errors are determinate, indeterminate, or both. Define $f(x) = \arctan(x)$, where $\sigma_x$ is the absolute uncertainty on our measurement of x. These modified rules are presented here without proof. In either case, the maximum size of the relative error will be (ΔA/A + ΔB/B). However, in complicated scenarios, they may differ because of: unsuspected covariances; disturbances that affect the reported value and not the elementary measurements (usually a result of mis-specification of the model); mistakes. It is a calculus-derived statistical calculation designed to combine uncertainties from multiple variables, in order to provide an accurate measurement of uncertainty. When mathematical operations are combined, the rules may be successively applied to each operation. In other classes, like chemistry, there are particular ways to calculate uncertainties. The fractional determinate error in Q is 0.028 - 0.0094 = 0.0186, which is 1.86%.

Note this is equivalent to the matrix expression for the linear case with $J = A$. It is important to note that this formula is based on the linear characteristics of the gradient of $f$ and therefore it is a good estimation for the standard deviation of $f$. Anytime a calculation requires more than one variable to solve, propagation of error is necessary to properly determine the uncertainty. Therefore the fractional error in the numerator is 1.0/36 = 0.028. To contrast this with a propagation of error approach, consider the simple example where we estimate the area of a rectangle from replicate measurements of length and width. One drawback is that the error estimates made this way are still overconservative. Raising to a power is a special case of multiplication. There is no error in n (counting is one of the few measurements we can do perfectly), so the fractional error in the quotient is the same size as the fractional error in the numerator. Guidance on when this is acceptable practice is given below: if the measurements of $X$, $Z$ are independent, the associated covariance term is zero. A similar procedure is used for the quotient of two quantities, R = A/B.
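As a small, hedged illustration of the rules for independent (indeterminate) errors discussed on this page: absolute errors add in quadrature for a sum or difference, and fractional errors add in quadrature for a quotient. The helper names are mine; the first example reuses the 0.2 m and 0.3 m displacement errors quoted above, while the voltage and current readings in the second are made-up values.

```python
import math

def err_sum_diff(*abs_errors):
    # absolute error of a sum or difference: add absolute errors in quadrature
    return math.sqrt(sum(e**2 for e in abs_errors))

def err_quotient(A, dA, B, dB):
    # absolute error of R = A/B: add fractional errors in quadrature
    return abs(A / B) * math.sqrt((dA / A)**2 + (dB / B)**2)

print(err_sum_diff(0.2, 0.3))              # ~0.36 m, matching the displacement example
print(err_quotient(12.0, 0.1, 2.0, 0.05))  # uncertainty on R = V/I for assumed readings, ~0.16
```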
For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;[6] see Uncertainty Quantification, Methodologies for forward uncertainty propagation, for details. The number "2" in the equation is not a measured quantity, so it is treated as error-free, or exact. Please see the following rule on how to use constants. Then

$$\sigma_f^2 \approx b^2\sigma_a^2 + a^2\sigma_b^2 + 2ab\,\sigma_{ab},$$

which is the first-order variance for the product f = ab. The coefficients may also have + or - signs, so the terms themselves may have + or - signs. So, rounding this uncertainty up to 1.8 cm/s, the final answer should be 37.9 ± 1.8 cm/s. As expected, adding the uncertainty to the length of the track gave a larger uncertainty in the result. It is therefore likely for error terms to offset each other, reducing ΔR/R.
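The matrix form quoted above, $\Sigma^{f} = J\,\Sigma^{x}\,J^{\top}$, is also easy to apply numerically. Below is a hedged sketch that estimates the Jacobian by central finite differences; the function name and the sample numbers are mine, not from the page.

```python
import numpy as np

def propagate_covariance(f, x, cov_x, eps=1e-6):
    """First-order propagation: Sigma_f ~= J Sigma_x J^T, with J estimated
    by central finite differences around the measured point x."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        J[:, i] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * dx[i])
    return J @ np.asarray(cov_x) @ J.T

# Example: R = V/I with V = 12.0 +/- 0.1 and I = 2.0 +/- 0.05 (made-up values),
# assumed uncorrelated, so the covariance matrix is diagonal.
cov = propagate_covariance(lambda p: p[0] / p[1], [12.0, 2.0], np.diag([0.1**2, 0.05**2]))
print(np.sqrt(cov[0, 0]))   # standard uncertainty on R (here R = 6), ~0.16
```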
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776034355163574, "perplexity": 1631.6933734533704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889660.55/warc/CC-MAIN-20180120142458-20180120162458-00594.warc.gz"}
http://math.stackexchange.com/questions/18226/setting-up-an-integral-to-find-a-cones-surface-area
# Setting Up an Integral to Find A Cone's Surface Area

I tried proving the formula presented here by integrating the circumferences of cross-sections of a right circular cone: $$\int_{0}^{h}2\pi s\,dt, \qquad\qquad s = \frac{r}{h}t$$ so $$\int_{0}^{h}2\pi \frac{r}{h}t\,dt.$$ Integrating it got me $\pi h r$, which can't be right because $h$ isn't the slant height. So adding up the areas of differential-width circular strips doesn't add up to the lateral surface area of a cone?

EDIT: I now realize that the integral works if I set the upper limit to the slant height; this works if I think of "unwrapping" the cone and forming a portion of a circle. The question still remains though: why won't the original integral work? Won't the value of the sum of the cylinders' areas reach the area of the cone as the number of partitions approaches infinity?

- The problem is that you need to scale your surface area element appropriately, see en.wikipedia.org/wiki/Surface_integral, en.wikipedia.org/wiki/Surface_of_revolution and also en.wikipedia.org/wiki/Pappus%27s_centroid_theorem I hope you're aware of the fact that you need absolutely no calculus here: just observe that the cone can be built out of a circular sector of the plane. –  t.b. Jan 20 '11 at 3:07
- You can't integrate circumferences to get a surface area for the same reason you can't integrate points to get a length. –  Qiaochu Yuan Jan 20 '11 at 3:31
- Yuan: But the circumferences are multiplied by $dt$ –  G.P. Burdell Jan 20 '11 at 3:43

You seem to be ignoring the fact that s and r vary as the segment you consider varies. By using the same variable names it appears that you are confusing them with constants... Anyway, for a derivation, look at the following figure: this is a cross-section of the cone. The area of the strip of width $\displaystyle dh$ that corresponds to $\displaystyle h$ (from the apex) is $\displaystyle 2\pi r \frac{dh}{\cos x}$. Now $\displaystyle r = h \tan x$, thus $\displaystyle dA = 2 \pi h \frac{\tan x}{\cos x} dh$, and the total area is $$\int_{0}^{H} 2 \pi h \frac{\tan x}{\cos x} dh = \pi H^2 \frac{\tan x}{\cos x} = \pi (H \tan x) \left(\frac{H}{\cos x}\right) = \pi R S$$ Hope that helps.

Consider the following diagram. By similar triangles, $${H \over S} = {{dy} \over {ds}}$$ Solve for ds: $$ds = {{S(dy)} \over H}$$ The equation of line S is $$y = mx + b$$ where $$m = - {H \over R}$$ and $$b = H$$ Therefore, $$y = - {H \over R}x + H$$ Next, solve for $x$: $$x = {{ - R(y - H)} \over H}$$ Next, integrate the circles of radii $x$ and thickness differential $ds$ from $y$ = zero to H, so the surface area of a right cone is given by $$A = 2\pi \int\limits_0^H x\, ds$$ Substitute $x$ into the integral to get $$A = 2\pi \int\limits_0^H {{{ - R(y - H)} \over H}} ds$$ Now substitute in the previous relation $$ds = {S \over H}dy$$ to obtain $$A = 2\pi \int\limits_0^H {{{ - R(y - H)} \over H}} \left( {{S \over H}dy} \right)$$ Cleaning up this equation yields $$A = {{2\pi ( - R)S} \over {{H^2}}}\int\limits_0^H {(y - H)dy}$$ $$A = {{2\pi ( - R)S} \over {{H^2}}}\left[ {{{{y^2}} \over 2} - Hy} \right]_0^H$$ $$A = {{2\pi ( - R)S} \over {{H^2}}}\left[ {{{{H^2}} \over 2} - {H^2}} \right]$$ $$A = {{2\pi ( - R)S} \over {{H^2}}}\left[ {{{{H^2}} \over 2} - {{2{H^2}} \over 2}} \right]$$ $$A = {{2\pi ( - R)S} \over {{H^2}}}\left[ {{{ - {H^2}} \over 2}} \right]$$ $$A = {{2\pi RS} \over {{H^2}}}\left[ {{{{H^2}} \over 2}} \right] = \pi RS$$

Whenever you use calculus you need to start with the correct mathematical relationships.
You further need to ensure that you integrate over the correct limits and over the correct variables. You can see integration as the summation of a lot of infinitesimally small items to give the final answer. All correctly set-up integrals will result in the same answer. Before one starts or rushes into solving a problem, think about what would be the easiest way to address it. I thought approaching it via triangles would make the problem quite easy to solve, so this solution will use a triangle as the basic mathematical relationship. We all know the area of a triangle is: $$Area = \frac{1}{2}Base \cdot Height$$

I will thus set up the problem as follows. (Solution 1) Define a triangle with a height of S (defined as the length from the apex to the side of the base) and a base consisting of an infinitesimally small length of the base circle, which we call dbase. Radians are defined in such a way that the circumference of a circle of unity (r = 1) is exactly $2\pi$. This relationship lets us define dbase as a plain ratio: an infinitesimally small angle multiplied by the radius, in this case the radius of the cone. Thus we can say: $$dbase = R \, dangle$$ The area of my triangle then becomes: $$darea = \frac{1}{2} S \, R \, dangle$$ We will integrate between the limits 0 and $2\pi$ to include the total circumference. In degrees this would have been 0 to 360, however the relationship we use is defined for radians. (Note that I couldn't get the upper limit as $2\pi$, and thus integrate over half the circumference and multiply the result by 2; I am not familiar with the syntax. The answer however is the same. In some instances we use this as a trick to reduce the effort in calculating the result of integrals, etc.) $$Area = 2\int_0^ \pi \frac{1}{2} S R \, dangle$$ $$Area=2\left(\frac{1}{2} S R \pi\right)-2\left(\frac{1}{2} S R \,(0)\right)$$ $$Area =\pi S R$$ And the result is exactly as the rest of the world believes the answer should be. It is important to always be mathematically correct. If you follow the maths you can't go wrong. If you went wrong you made a mistake: check your assumptions and relationships and try again. Try the volume yourself. I did this on my phone and it is somewhat difficult.

Solution 2 - integrate from 0 to H, where H is the height of the cone and R the radius. Here h is the height, dh is a very small increase in height, and r is the radius at h. The angle is the angle between the vertical and the side of length S. The mathematical relationship for the area is the circumference multiplied by the small increase in length along S. Note that this length is not dh but ds, because of the angle. Be very careful here. $$darea = 2\pi \, r \, ds$$ To set up the integral correctly we need to express r and ds in terms of h to ensure consistency. The relationships are as follows: $$\tan(angle) = r /h$$ $$r = h \tan(angle)$$ And $$\cos(angle) = dh/ds$$ $$ds = \frac{dh}{\cos(angle)}$$ darea then becomes $$darea =2\pi\, \frac{\tan(angle)}{\cos(angle)}\, h\,dh$$ We just need to integrate this as: $$Area = 2\pi\, \frac{\tan(angle)}{\cos(angle)} \int_0^ H h\, dh$$ $$Area = 2\pi\, \frac{\tan(angle)}{\cos(angle)} \left( \frac{1}{2} H^2-\frac{1}{2} (0)^2\right)$$ $$Area = \pi\, \frac{\tan(angle)}{\cos(angle)}\, H^2$$ We only need to get rid of the tan and cos and then we should have the same answer. Substituting $R = H\tan(angle)$ and $S = H/\cos(angle)$ back in, we get: $$Area = \pi S R$$ It worked again, no magic.

Solution 3 - integrate from 0 to S. Same variables as in solution 2. Same area definition. This time we need to express r in terms of s. $$darea = 2\pi \, r \, ds$$ $$\sin(angle) = r /s$$ $$r = s\,
sin(angle)$$ Replace r in darea, then integrate $$Area = 2\pi. sin(angle) \int_0^ S s. ds$$ $$Area = 2\pi. sin(angle) (( \frac{1}{2} S^2)-(\frac{1}{2} (0))$$ $$Area = \pi.sin(angle) S^2$$ Get rid of the sin term and can you believe, once again the same result as the rest of the world. $$Area = \pi. S. R$$ I love maths. - See the discussion to a previous question here which might help - I appreciate the thought process and did something similar as did by Mr G.P Burdell. To add to the confusion (I apologise!) let me put my point forward as this: Consider a Cone with height H and radius R. Let dh and dr be the respective changes in H and R. Therefore $$Volume = \int_0^H \pi{r^2} dh$$ Now “r” is a function of “h” so $$h = H- \frac{H}{R}r$$ $$dh = - \frac{H}{R}dr$$ Substituting the same in the above equation we get the integral as $$\int_R^0 \pi{r^2} \frac{(-H)}{R}dr$$ $$Volume = \frac{\pi}{3}R^2H$$ If you do the same process for the surface area you will end up getting $$Surface Area = \pi RH$$ Which is not true. The Surface Area of a Cone is $$= \pi RS$$ where S is Slant Height of the Cone. If we try to argue on this explanation of Surface Area then our explanation for Volume is in contradiction. - Ok, so I've been thinking about this for a few days, and I asked the same question on physicsforums.com. And luckily, someone posted an answer that explained the problem. Here is the question that I posted: http://www.physicsforums.com/showthread.php?p=4031752. If you scroll all the way to the second to last post, you will see a person who says, "The problem I think is that you wouldn't get the surface area if you used cylinders whose sides aren't parallel to the sides of the shape. For example, consider trying to "square" the perimeter of a circle. For illustration: http://qntm.org/trollpi "The sides of the square are cut into many pieces, and then "steps" are created from it, but you can always combine the steps back into the side of the square. Consider the fourth picture in the link, look at the upper half of the circle. The whole upper side of the square is there, it is just in pieces. So you can jag them all you want, the perimeter is the same and doesn't start approximating a circle. "In the cross section of the cone, considering it as a 2-d object, you can also try to calculate the perimeter of the slice. If you add up the sides of the rectangles, they will always add up to 2H, no matter how small you make them, however you can check using Pythagoras that it should be more. because its twice (for two sides of the slice) the square root of H^2+R^2" That answer made a lot of sense to me, and I hope that it makes sense to everyone else here :) - The diagram makes sense to me. The area around our slice clearly seems to me to be the circumference at that point multiplied by dS. Not dh, as we might be tempted to do just because dh is easily defined in terms of r. handily dS is just a constant*dr: dS=dr/cos(x) - Welcome to MSE! It helps readability to format questions/answers using MathJax (see FAQ). Regards –  Amzoti Apr 30 '13 at 17:53
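To make the $ds$-versus-$dh$ point in the answers above concrete, here is a small numerical sketch of my own (not part of the original thread): it compares the Riemann sum built from cylindrical bands of height $dh$ with the one built from slanted strips of width $ds = \frac{S}{H}\,dh$, for an arbitrary cone with $R = 3$ and $H = 4$.

```python
import math

# Illustrative check (not from the thread): compare the two Riemann sums for a
# cone of radius R and height H. Only the slanted-strip sum gives pi*R*S.
R, H = 3.0, 4.0
S = math.hypot(R, H)            # slant height; here sqrt(3^2 + 4^2) = 5
N = 1_000_000
dh = H / N

sum_cylinders = 0.0             # bands of height dh:  2*pi*r*dh
sum_strips = 0.0                # slanted strips:      2*pi*r*ds, ds = (S/H)*dh
for i in range(N):
    h = (i + 0.5) * dh          # midpoint of the i-th slice, measured from the apex
    r = R * h / H               # radius of the cone at that height
    sum_cylinders += 2 * math.pi * r * dh
    sum_strips += 2 * math.pi * r * dh * (S / H)

print(sum_cylinders, math.pi * R * H)   # converges to pi*R*H, not the lateral area
print(sum_strips, math.pi * R * S)      # converges to pi*R*S, the lateral area
```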
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476758241653442, "perplexity": 337.71478286923616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928078.25/warc/CC-MAIN-20150521113208-00297-ip-10-180-206-219.ec2.internal.warc.gz"}
https://web2.0calc.com/questions/maths-help_13
# Maths help

I need help, as I don't get what method or topic this is, but I am meant to be able to do this question. Please explain the answers or tell me the topic names. (The question itself, parts 4(a), 4(b) and 4(c), is not included in the text here.)

Apr 17, 2019

#1 This is trigonometry.

4(a) $$\text{As }\sin x = \cos\left(\dfrac{\pi}{2}-x\right),\\ \sin \left(\dfrac{\pi}{3}\right) = \cos\left(\dfrac{\pi}{2}-\dfrac{\pi}{3}\right)=\cos\left(\pi\left(\dfrac{1}{2}-\dfrac{1}{3}\right)\right)=\cos\left(\pi\left(\dfrac{1}{6}\right)\right) =\cos\left(\dfrac{\pi}{6}\right)\\ \sin \left(\dfrac{\pi}{3}\right) = \cos\left(\dfrac{\pi}{k}\right)\\ \cos\left(\dfrac{\pi}{6}\right) = \cos\left(\dfrac{\pi}{k}\right)\\ \text{Comparing the left hand side and the right hand side of the equation,}\\ k =6$$

Apr 18, 2019

#4 Thank you, but can you please explain the first line and where you got that from? Apart from that part it all makes sense. – YEEEEEET, Apr 18, 2019

#6 You can derive this from a right triangle. – MaxWong, Apr 18, 2019

#2 4(b) $$\cos\left(2x-\dfrac{5\pi}{6}\right) = \sin\left(\dfrac{\pi}{3}\right)\\ \cos\left(2x-\dfrac{5\pi}{6}\right) = \cos\left(\dfrac{\pi}{6}\right)\\ \text{As the cosine function is periodic with period }2\pi\text{, }\\ \cos\left(2x-\dfrac{5\pi}{6}\right) = \cos\left(\dfrac{\pi}{6}+2n\pi\right)\\ \text{As }\cos x = \cos\left(-x\right)\text{, }\\ \cos\left(2x-\dfrac{5\pi}{6}\right) = \cos\left(-\dfrac{\pi}{6}+2n\pi\right)\\ \text{Comparing the left hand side and the right hand side of the equation, }\\ 2x-\dfrac{5\pi}{6} = \dfrac{\pi}{6} + 2n\pi \text{ or } 2x-\dfrac{5\pi}{6} = -\dfrac{\pi}{6} + 2n\pi\\ 2x = \pi+2n\pi\text{ or }2x=\dfrac{2\pi}{3}+2n\pi\\ x = \dfrac{\pi}{2}+n\pi\text{ or }x=\dfrac{\pi}{3}+n\pi$$

Apr 18, 2019

#5 Could you explain how the 2n appeared, please? – YEEEEEET, Apr 18, 2019

#7 Actually, it is (n)(2pi), not (2n)(pi). After n periods, which is n(2pi), the value of the function is still the same. – MaxWong, Apr 18, 2019

#3 4(c) $$\text{As shown above in 4(b), } \cos\left(2x-\dfrac{5\pi}{6}\right) = \sin\left(\dfrac{\pi}{3}\right)\implies x = n\pi+\dfrac{\pi}{2}\text{ or }x=n\pi+\dfrac{\pi}{3}\\ \text{As }\tan x \text{ has a period of }\pi,\\ \tan x = \tan\left(\dfrac{\pi}{2}\right)\text{ or }\tan x = \tan\left(\dfrac{\pi}{3}\right)\\ \text{But considering the fact that }\tan x \text{ has an asymptote at } x = \dfrac{\pi}{2},\text{ the only finite value is }\tan\left(\dfrac{\pi}{3}\right).\\ \text{The required value is }\tan\left(\dfrac{\pi}{3}\right) = \sqrt3$$

Apr 18, 2019
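A quick numerical cross-check of the solutions in #2 (my own sketch, not part of the thread): every $x = \frac{\pi}{2}+n\pi$ and every $x = \frac{\pi}{3}+n\pi$ should satisfy $\cos\left(2x-\frac{5\pi}{6}\right)=\sin\left(\frac{\pi}{3}\right)$.

```python
import math

# Sketch (not from the thread): verify that x = pi/2 + n*pi and x = pi/3 + n*pi
# all satisfy cos(2x - 5*pi/6) = sin(pi/3).
target = math.sin(math.pi / 3)                      # sqrt(3)/2
for n in range(-3, 4):
    for x in (math.pi / 2 + n * math.pi, math.pi / 3 + n * math.pi):
        lhs = math.cos(2 * x - 5 * math.pi / 6)
        assert abs(lhs - target) < 1e-12, (n, x)
print("all candidate solutions check out")
```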
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963923990726471, "perplexity": 4452.128311510543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00165.warc.gz"}
https://www.andlearning.org/arithmetic-progressions-formulas-for-class-10/
## Arithmetic Progressions Formulas for Class 10 Maths Chapter 5

Are you looking for Arithmetic Progression formulas for class 10 chapter 5? Today, we are going to share the Arithmetic Progression formulas for class 10 chapter 5 according to student requirements. You are not the only student searching for them; thousands of students look for the Arithmetic Progression formulas for class 10 chapter 5 every month. If you have any doubt or issue related to the Arithmetic Progression formulas, you can easily connect with us through social media for discussion. The Arithmetic Progression formulas are very helpful for understanding the concepts and questions of the chapter, and I would suggest remembering them for life; they also help with higher studies.

nth Term of an Arithmetic Progression $${a_n = a + (n - 1) \times d}$$

Sum of the 1st n Terms of an Arithmetic Progression $${S_n = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]}$$

#### Summary of Arithmetic Progression formulas

We have listed the most important formulas for Arithmetic Progression for class 10 chapter 5, which help in solving questions related to the chapter. After memorising the Arithmetic Progression formulas you can start on the questions and answers of the chapter. If you face any problem finding the solution of an Arithmetic Progression question, please let me know through a comment or by mail.
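As a quick illustration of the two formulas above (my own sketch, not part of the original page), here they are in Python with a brute-force check.

```python
# Sketch illustrating the formulas above: nth term and sum of the first n terms
# of an arithmetic progression with first term a and common difference d.

def nth_term(a: float, d: float, n: int) -> float:
    """a_n = a + (n - 1) * d"""
    return a + (n - 1) * d

def sum_first_n(a: float, d: float, n: int) -> float:
    """S_n = n/2 * (2a + (n - 1) * d)"""
    return n / 2 * (2 * a + (n - 1) * d)

# Example: the AP 3, 7, 11, 15, ...  (a = 3, d = 4)
a, d = 3, 4
print(nth_term(a, d, 10))                             # 10th term -> 39
print(sum_first_n(a, d, 10))                          # S_10 -> 210.0
print(sum(nth_term(a, d, k) for k in range(1, 11)))   # brute-force check -> 210
```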
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313090443611145, "perplexity": 486.73560308819737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00244.warc.gz"}
https://brilliant.org/practice/geometric-probability-level-2-3-challenges/?subtopic=probability-2&chapter=geometric-probability
# Geometric Probability: Level 2 Challenges

The dartboard above is made up of three concentric circles with radii $1, 3,$ and $5.$ Assuming that a dart thrown will land randomly on the dartboard, what is the probability that it lands in the green region?

A point $(a,b)$ is randomly selected within the triangular region enclosed by the graphs of $3y + 2x = 6$, $x = 0$ and $y = 0$. What is the probability that $b > a$ ?

Two people stand around a circular duck pond which has radius $R$ meters. They are placed randomly (uniformly along the circumference of the duck pond) and independently. What is the probability that they are more than $R$ meters apart from each other?

Dave and Kathy both arrive at Pizza Palace at two random times between 10:00 p.m. and midnight. They agree to wait exactly 15 minutes for each other to arrive before leaving. What is the probability that Dave and Kathy see each other? If the probability is $\frac ab$ for coprime positive integers, give the answer as $a+b$.

A real number $x$ is chosen randomly and uniformly from the interval $[2,10].$ What is the probability that the greatest integer less than or equal to $\frac{x}{2}$ is even?
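For challenge problems like these, a Monte Carlo estimate is a handy sanity check on a worked answer. Here is a sketch of my own (not part of the page) for the duck-pond problem; the radius is set to 1, since the answer does not depend on $R$.

```python
import math
import random

# Monte Carlo sketch (my own) for the duck-pond problem above: two points placed
# uniformly and independently on a circle of radius R; estimate the probability
# that they are more than R apart.
R = 1.0
trials = 1_000_000
hits = 0
for _ in range(trials):
    t1 = random.uniform(0, 2 * math.pi)
    t2 = random.uniform(0, 2 * math.pi)
    # straight-line (chord) distance between the two points on the circle
    dist = 2 * R * abs(math.sin((t1 - t2) / 2))
    if dist > R:
        hits += 1
print(hits / trials)   # estimate of the required probability
```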
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8821961879730225, "perplexity": 230.0792438498262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00035.warc.gz"}
https://math.stackexchange.com/questions/2125723/algebraic-extension-of-bbb-q-with-exactly-one-extension-of-given-degree-n
# Algebraic extension of $\Bbb Q$ with exactly one extension of given degree $n$

Let $n \geq 2$ be any integer. Is there an algebraic extension $F_n$ of $\Bbb Q$ such that $F_n$ has exactly one field extension $K/F_n$ of degree $n$? Here I mean "exactly one" in a strict sense, i.e. I don't allow "up to (field / $F$-algebra) isomorphisms". But a solution with "exactly one up to field (or $F$-algebra) isomorphisms" would also be welcome. I'm very interested in the case where $n$ has two distinct prime factors.

My thoughts:

1. This answer provides a construction for $n=2$. I was able to generalize it for $n=p^r$ where $p$ is an odd prime. Let $S = \left\{\zeta_{p^r}^j\sqrt[p^r]{2} \mid 0 \leq j < p^r \right\}$. Then $$\mathscr F_S = \left\{L/\Bbb Q \text{ algebraic extension} \mid \forall x \in S,\; x \not \in L \text{ and } \zeta_{p^r} \in L \right\} =\left\{L/\Bbb Q \text{ algebraic extension} \mid \sqrt[p^r]{2} \not \in L \text{ and } \zeta_{p^r} \in L \right\}$$ has a maximal element $F$, by Zorn's lemma. In particular, we have $$F \subsetneq K \text{ and } K/\Bbb Q \text{ algebraic extension} \implies \exists x \in S,\; x \in K \implies \exists x \in S,\; F \subsetneq F(x) \subseteq K$$ But $X^{p^r}-2$ is the minimal polynomial of any $x \in S$ over $F$: it is irreducible over $F$ because $2$ is not a $p$-th power in $F$. Therefore $F(x)$ has degree $p^r$ over $F$ and, using the implications above, we conclude that $F(x) = F(\sqrt[p^r]{2})$ is the only extension of degree $p^r$ of $F$, when $x \in S$.

2. Assume now that we want to build a field $F$ with the desired property for some $n=\prod_{i=1}^r p_i^{n_i}$. I tried to do some kind of compositum, without any success. I have some trouble with the irreducibility over $F$ of the minimal polynomial of some $x \in S$ ($S$ suitably chosen) over $\Bbb Q$...

3. I know that $\mathbf C((t))$ is quasi-finite and embeds abstractly in $\bf C$, so there is an uncountable subfield of $\bf C$ having exactly one field extension of degree $n$ for any $n \geq 1$.

• What about the totally real subfield of $\overline {\mathbb Q}$? That has a unique extension of degree $2$. – lulu Feb 2 '17 at 13:41
• @lulu : thank you. But $\Bbb R \cap \overline{\Bbb Q}$ has no extension of degree $n \geq 3$ (it is a real closed field), I think. – Watson Feb 2 '17 at 13:57
• You fixed $n$ and then asked for an $F$ with a unique extension of degree $n$. Did you mean $F$ should have a unique extension of degree $n$ for all $n$? – lulu Feb 2 '17 at 14:01
• @lulu : no, I don't necessarily want that (but of course, this would solve the problem – i.e. finding a quasi-finite field $F \subset \overline{\Bbb Q}$). But I want for every $n \geq 2$ to find some algebraic extension $F=F_n$... Does this make sense? – Watson Feb 2 '17 at 14:06
• Oh, ok. Then my simple construction won't work. Artin (?) showed that the only elements of finite order in $Gal(\overline {\mathbb Q}/\mathbb Q)$ are the identity and complex conjugation (up to conjugation). I think that means that the example I gave is the only one of its kind (but I haven't thought that through carefully). – lulu Feb 2 '17 at 14:16

If you choose a random $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and consider the fixed field $K\subset \bar{\mathbb{Q}}$ of $\sigma$, then $\bar{\mathbb{Q}}/K$ will be Galois. The Galois group $G$ will almost always be $\hat{\mathbb{Z}}$, the profinite completion of the integers, and this is a group that has exactly one finite index subgroup of each index.
Then $K$ will solve your problem for each $n$. There will be infinitely many such fields. • (Just for clarity, I denoted by $F = F_n$ what you wrote $K$ in your answer). – Watson Aug 17 '18 at 14:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9365705847740173, "perplexity": 138.1445424822448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00024.warc.gz"}
https://cmc.deusto.eus/control-state-constraints/
# Minimal controllability time for the heat equation under state constraints Thursday, February 23rd, 2017 10:00h., Central Meeting Room at DeustoTech #### Jérôme Lohéac Institut de Recherche en Communications et Cybernétique de Nantes(France) Abstract: The free heat equation is well known to preserve the non-negativity of solutions. On the other hand, due to the infinite velocity of propagation, the heat equation is null-controllable in an arbitrary small time. The following question then arises naturally: Can the heat dynamics be controlled under a positivity constraint on the state, requiring that the state remains non-negative all along the time dependent trajectory? I will show that, if the control time is large enough, constrained controllability holds. I will also show that it fails to be true if the control time is too short. In other words, despite of the infinite velocity of propagation, under the natural positivity constraint on the state, controllability fails when the time horizon is too short. See the slides here
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460498690605164, "perplexity": 698.4218321400429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00096.warc.gz"}
http://mathhelpforum.com/advanced-statistics/44065-variance-problem.html
# Math Help - Variance problem

1. ## Variance problem

I am working my way through Feller (3rd ed, vol. 1), and I got stuck on the following problem (IX.9 problem 19a): A man with n keys wants to open his door and tries the keys independently and at random. Find the mean and variance of the number of trials if unsuccessful keys are not eliminated from further selection. (Assume that only one key fits the door.) Computing the mean presented no difficulty. I've worked out the variance to be $\sum_{k=1}^{\infty}{k^2 {{(n-1)^{k-1}}\over{n^k}}}-n^2$. However, I am not sure how to compute the sum, and would appreciate any hints.

2. Originally Posted by aix
I am working my way through Feller (3rd ed, vol. 1), and I got stuck on the following problem (IX.9 problem 19a): A man with n keys wants to open his door and tries the keys independently and at random. Find the mean and variance of the number of trials if unsuccessful keys are not eliminated from further selection. (Assume that only one key fits the door.) Computing the mean presented no difficulty. I've worked out the variance to be $\sum_{k=1}^{\infty}{k^2 {{(n-1)^{k-1}}\over{n^k}}}-n^2$. However, I am not sure how to compute the sum, and would appreciate any hints.
Let X be the random variable giving the number of trials needed to get the correct key. X follows a geometric distribution with p = 1/n. You'll find the derivations of mean and variance here: http://www.win.tue.nl/~rnunez/2DI30/...tributions.pdf But personally I think it's easier to calculate and then use the moment generating function.
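For what it's worth, a quick simulation (my own sketch, not part of the thread) agrees with the closed forms that follow from the geometric distribution with p = 1/n, namely mean n and variance n(n - 1).

```python
import random
import statistics

# Sketch (not from the thread): simulate trying n keys with replacement until the
# single correct key is found, and compare with the geometric-distribution values
# mean = n and variance = n*(n - 1)  (p = 1/n).
n = 6
samples = []
for _ in range(200_000):
    trials = 1
    while random.randrange(n) != 0:   # key 0 plays the role of the correct key
        trials += 1
    samples.append(trials)

print(statistics.mean(samples), n)                 # ~6
print(statistics.variance(samples), n * (n - 1))   # ~30
```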
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132471084594727, "perplexity": 425.10462750005325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824345.69/warc/CC-MAIN-20160723071024-00174-ip-10-185-27-174.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/134988/how-can-i-write-a-gaussian-state-as-a-squeezed-displaced-thermal-state
# How can I write a Gaussian state as a squeezed, displaced thermal state? I would like to write a Gaussian state with density matrix $\rho$ (single mode) as a squeezed, displaced thermal state: \begin{gather} \rho = \hat{S}(\zeta) \hat{D}(\alpha) \rho_{\bar{n}} \hat{D}^\dagger(\alpha) \hat{S}^\dagger(\zeta) . \end{gather} Here, \begin{gather} \rho_{\bar{n}} = \int_{\mathbb{C}} P_{\bar{n}}(\alpha) |\alpha\rangle\langle\alpha|d\alpha \text{ with } P_{\bar{n}}(\alpha) = \frac{1}{\pi \bar{n}} e^{- |\alpha|^2 / \bar{n}} \end{gather} is a thermal state with occupation $\bar{n}$, \begin{gather} \hat{S}(\zeta) = e^{(\zeta^* \hat{a}^2 + \zeta \hat{a}^{\dagger 2}) / 2} \end{gather} is the squeezing operator, and \begin{gather} \hat{D}(\alpha) = e^{\alpha^* \hat{a} - \alpha \hat{a}^\dagger} . \end{gather} is the displacement operator. I prefer to use the convention $\hat{x} = (\hat{a} + \hat{a}^\dagger) / \sqrt{2}$ and $\hat{p} = (\hat{a} - \hat{a}^\dagger) / \sqrt{2} i$. I assume that the way to accomplish this is to derive the mean and variance of our Gaussian state $\rho$ and thereby determine $\zeta$ and $\alpha$. However, I have been unsuccessful in doing so. That is, given the mean and variance of our Gaussian state $\rho$, what are $\zeta$ and $\alpha$? On a side note, I was also wondering if there is a standard result for the commutator of $\hat{S}(\zeta)$ and $\hat{D}(\alpha)$? • There is a result (without demonstration) in formula $(11)$ (and following lines) of this paper. A pseudo-commutation relation for $D$ and $S$ is given in formula $(15)$ of this paper Sep 11, 2014 at 10:36 • Probably you could demonstrate the result thanks to the action of $D$ and $S$ on $a, a^+$ (see pages $15$ and $28$ of this presentation), and the expression of the thermal density matrix in the Fock basis (see $(3.87)$ in this ref) Sep 11, 2014 at 10:57 • @Trimok Without your reference (which does not really answer the problem) I was wondering why this result should even be true ? Is there a way to map the Gaussian to any other states or what ? Sep 17, 2014 at 5:29 • @FraSchelle : If you inverse the order of $S$ and $D$ (relatively to the OP), the result in my first ref is correct (I have checked the mean, but the variance should be correct too), and it gives a formula between mean, variance, $\zeta$ , $\alpha$, and $\bar n$ (which was the OP question). Sep 17, 2014 at 9:24 • @FraSchelle : Now, in my last ref, a "gaussian" state is diagonal in the coherent basis ($3.86$), and is also diagonal in the Fock basis ($3.87$), but it does not appear "gaussian" in the Fock basis. Sep 17, 2014 at 9:24 I will follow the notes by A. Ferraro et al. 
A state $$\rho$$ of a system with $$n$$ degrees of freedom is said to be Gaussian if its Wigner function can be written as $$W[\rho](\boldsymbol{\alpha}) = \frac{\exp\left( -\frac{1}{2}(\boldsymbol{\alpha} - \bar{\boldsymbol{\alpha}})^T \boldsymbol{\sigma}^{-1}_\alpha (\boldsymbol{\alpha} - \bar{\boldsymbol{\alpha}}) \right) }{(2\pi)^n\sqrt{\text{Det}[\boldsymbol{\sigma}_\alpha ]}},$$ where $$\boldsymbol{\alpha}$$ and $$\bar{\boldsymbol{\alpha}}$$ are vectors containing all the $$2n$$ quadratures of the system and their average values, respectively, and $$\boldsymbol{\sigma}_\alpha$$ is the covariance matrix, whose elements are defined as $$[\boldsymbol{\sigma}]_{kl} := \frac{1}{2} \langle \{R_k,R_l \} \rangle - \langle R_k\rangle \langle R_l\rangle,$$ where $$\{\cdot,\cdot\}$$ is the anticommutator, and $$R_k$$ is the $$k$$-th element of the vector $$\boldsymbol{R}= (q_1,p_1,\ldots,q_n,p_n)^T$$ with the $$q$$s and $$p$$s being the position and momentum-like operators.

A very important result is that Gaussian states can be fully characterised by their covariance matrix plus the vector of expectation values of the quadratures, $$\bar{\boldsymbol{\alpha}}$$. If your system only has one mode (one boson), then you only need a symmetric $$2\times2$$ matrix and two real numbers ($$q$$ and $$p$$) to describe it! This means a total of five parameters.

As you point out, we can write any Gaussian state as $$\rho = D(\bar{\alpha})S(\xi)\rho_{th}S^\dagger(\xi)D^\dagger(\bar{\alpha})$$ where here $$\bar{\alpha}\equiv\frac{1}{\sqrt{2}}(\bar{x}+i\bar{p})$$ and $$\xi = r e^{i\varphi}$$. If your thermal state has mean photon number $$N$$, then it suffices to know $$N$$, $$r$$ and $$\varphi$$ to compute the covariance matrix. Its elements are given by $$\sigma_{11} =\frac{2N+1}{2} \left(\cosh (2r)+\sinh (2r)\cos (\varphi)\right)$$ $$\sigma_{22} =\frac{2N+1}{2}\left( \cosh(2r)-\sinh(2r)\cos(\varphi) \right)$$ $$\sigma_{12} =\sigma_{21}=-\frac{2N+1}{2}\sinh(2r)\sin(\varphi).$$

You can see that the main properties of the state are captured by the covariance matrix, because the displacement $$\bar{\alpha}$$ can always be disregarded by local operations (it is a phase-space translation). In other words, you can always put it to zero. Answering your question, note that a Gaussian state is not simply a Gaussian distribution. You need more parameters than simply the variance and the mean (as you would to define a classical Gaussian probability distribution). These are in general five real values, but the essential ones are the ones entering the covariance matrix, as explained before. As for the commutator, I am not aware of any closed formula. But I do know that displacing and then squeezing produces a state which has the same squeezing as the squeezed-displaced state, but a different displacement.
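As a numerical companion to the formulas quoted above, here is a sketch of my own (not from the answer), using the question's convention $\hat x=(\hat a+\hat a^\dagger)/\sqrt2$ so that the vacuum variance is 1/2: build the single-mode covariance matrix from $(N, r, \varphi)$ and recover the three parameters from it, which is essentially the direction the question asks about. The displacement $\bar\alpha$ is read off separately from the quadrature means.

```python
import numpy as np

# Sketch (my own illustration of the relations in the answer above):
# covariance matrix of a squeezed thermal state with mean photon number N,
# squeezing magnitude r and squeezing angle phi, and recovery of (N, r, phi).

def covariance(N, r, phi):
    nu = (2 * N + 1) / 2
    s11 = nu * (np.cosh(2 * r) + np.sinh(2 * r) * np.cos(phi))
    s22 = nu * (np.cosh(2 * r) - np.sinh(2 * r) * np.cos(phi))
    s12 = -nu * np.sinh(2 * r) * np.sin(phi)
    return np.array([[s11, s12], [s12, s22]])

def parameters(sigma):
    nu = np.sqrt(np.linalg.det(sigma))      # symplectic eigenvalue = (2N+1)/2
    N = nu - 0.5
    r = 0.5 * np.arccosh(np.trace(sigma) / (2 * nu))
    phi = np.arctan2(-2 * sigma[0, 1], sigma[0, 0] - sigma[1, 1])
    return N, r, phi

sigma = covariance(N=0.3, r=0.7, phi=1.1)
print(parameters(sigma))   # ~ (0.3, 0.7, 1.1)
```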
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9549267888069153, "perplexity": 331.28042962515315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00765.warc.gz"}
http://rcd.ics.org.ru/editorial_board/detail/987-anastasios_bountis
0 2013 Impact Factor Anastasios Bountis Patras, 26500 Greece University of Patras Publications: Anastassiou  S., Bountis A., Bäcker A. Recent Results on the Dynamics of Higher-dimensional Hénon Maps 2018, vol. 23, no. 2, pp.  161-177 Abstract We investigate different aspects of chaotic dynamics in Hénon maps of dimension higher than 2. First, we review recent results on the existence of homoclinic points in 2-d and 4-d such maps, by demonstrating how they can be located with great accuracy using the parametrization method. Then we turn our attention to perturbations of Hénon maps by an angle variable that are defined on the solid torus, and prove the existence of uniformly hyperbolic solenoid attractors for an open set of parameters.We thus argue that higher-dimensional Hénon maps exhibit a rich variety of chaotic behavior that deserves to be further studied in a systematic way. Keywords: invariant manifolds, parametrization method, solenoid attractor, hyperbolic sets Citation: Anastassiou  S., Bountis A., Bäcker A.,  Recent Results on the Dynamics of Higher-dimensional Hénon Maps, Regular and Chaotic Dynamics, 2018, vol. 23, no. 2, pp. 161-177 DOI:10.1134/S156035471802003X Efthymiopoulos C., Bountis A., Manos T. Explicit construction of first integrals with quasi-monomial terms from the Painlevé series 2004, vol. 9, no. 3, pp.  385-398 Abstract The Painlevé and weak Painlevé conjectures have been used widely to identify new integrable nonlinear dynamical systems. For a system which passes the Painlevé test, the calculation of the integrals relies on a variety of methods which are independent from Painlevé analysis. The present paper proposes an explicit algorithm to build first integrals of a dynamical system, expressed as "quasi-polynomial" functions, from the information provided solely by the Painlevé–Laurent series solutions of a system of ODEs. Restrictions on the number and form of quasi-monomial terms appearing in a quasi-polynomial integral are obtained by an application of a theorem by Yoshida (1983). The integrals are obtained by a proper balancing of the coefficients in a quasi-polynomial function selected as initial ansatz for the integral, so that all dependence on powers of the time $\tau = t – t_0$ is eliminated. Both right and left Painlevé series are useful in the method. Alternatively, the method can be used to show the non-existence of a quasi-polynomial first integral. Examples from specific dynamical systems are given. Citation: Efthymiopoulos C., Bountis A., Manos T.,  Explicit construction of first integrals with quasi-monomial terms from the Painlevé series, Regular and Chaotic Dynamics, 2004, vol. 9, no. 3, pp. 385-398 DOI:10.1070/RD2004v009n03ABEH000286 Marinakis V., Bountis A., Abenda S. Finitely and Infinitely Sheeted Solutions in Some Classes of Nonlinear ODEs 1998, vol. 3, no. 4, pp.  63-73 Abstract In this paper we examine an integrable and a non-integrable class of the first order nonlinear ordinary differential equations of the type $\dot{x}=x - x^n + \varepsilon g(t)$, $x \in \mathbb{C}$, $n \in \mathbb{N}$. We exploit, using the analysis proposed in [1], the asymptotic formulas which give the location of the singularities in the complex plane and show that there is an essential difference regarding the formation and the density of the singularities between the cases $g(t)=1$ and $g(t)=t$. Our analytical results are combined with a numerical study of the solutions in the complex time plane. 
Citation: Marinakis V., Bountis A., Abenda S.,  Finitely and Infinitely Sheeted Solutions in Some Classes of Nonlinear ODEs, Regular and Chaotic Dynamics, 1998, vol. 3, no. 4, pp. 63-73 DOI:10.1070/RD1998v003n04ABEH000093 Rothos V. M., Bountis A. The Second Order Mel'nikov Vector 1997, vol. 2, no. 1, pp.  26-35 Abstract Mel'nikov's perturbation method for showing the existence of transversal intersections between invariant manifolds of saddle fixed points of dynamical systems is extended here to second order in a small parameter $\epsilon$. More specifically, we follow an approach due to Wiggins and derive a formula for the second order Mel'nikov vector of a class of periodically perturbed $n$-degree of freedom Hamiltonian systems. Based on the simple zero of this vector, we prove an $O(\epsilon^2)$ sufficient condition for the existence of isolated homoclinic (or heteroclinic) orbits, in the case that the first order Mel'nikov vector vanishes identically. Our result is applied to a damped, periodically driven $1$-degree-of-freedom Hamiltonian and good agreement is obtained between theory and experiment, concerning the threshold of heteroclinic tangency. Citation: Rothos V. M., Bountis A.,  The Second Order Mel'nikov Vector, Regular and Chaotic Dynamics, 1997, vol. 2, no. 1, pp. 26-35 DOI:10.1070/RD1997v002n01ABEH000023
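For readers unfamiliar with the map in the first abstract, here is a minimal sketch of my own (unrelated to the paper's code) that iterates the classic two-dimensional Hénon map at its standard chaotic parameters; the higher-dimensional and perturbed variants studied above generalize this basic recursion.

```python
# Minimal sketch (not from the paper): iterate the classic 2-d Henon map
#   x_{n+1} = 1 - a*x_n^2 + y_n,   y_{n+1} = b*x_n
# with the standard parameters a = 1.4, b = 0.3, which give a chaotic attractor.
def henon_orbit(x0=0.0, y0=0.0, a=1.4, b=0.3, n=10_000):
    x, y = x0, y0
    orbit = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x   # simultaneous update of both coordinates
        orbit.append((x, y))
    return orbit

points = henon_orbit()
print(points[-3:])   # a few points on the attractor
```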
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700106739997864, "perplexity": 559.2564596778878}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304954.18/warc/CC-MAIN-20220126131707-20220126161707-00224.warc.gz"}
https://www.physicsforums.com/threads/basic-lineal-motion-formulas.636561/
# Basic linear motion formulas

1. Sep 17, 2012

### lozzajp

One thing I enjoy as a new physics student is using formulas. I have been playing with the fundamental formulas in terms of units rather than their names.

m/s/s = (m/s)/s or v = at
m/s = (m/s/s) x s
m/s = m/s

Simple maths, yea, but when I'm getting to the bigger formulas I'm having trouble. Perhaps they just don't work the way I am thinking, but if someone could enlighten me that would be great..

S = (1/2)at^2
m = (1/2) x (m/s^2) x (s^2)
m = (1/2) x (m)

why do I get m = half m?

v^2 = 2as
m^2/s^2 = 2(m/s/s) x m
m^2/s^2 = 2 x m^2/s^2
m/s = 2 x m/s

again, there is a x2 factor there.
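The puzzle in this post is really about what unit bookkeeping can and cannot see: dimensional analysis tracks only the units, so a dimensionless factor such as the 1/2 in s = (1/2)at^2 never shows up, and "m = half m" just says that both sides are lengths. A small sketch of my own that makes the bookkeeping explicit:

```python
# Sketch (my own): track dimensions as (metre exponent, second exponent) pairs.
# Multiplying quantities adds exponents; dimensionless constants like 1/2 carry
# the dimension (0, 0), so they are invisible to this bookkeeping.

def mul(*dims):
    m = sum(d[0] for d in dims)
    s = sum(d[1] for d in dims)
    return (m, s)

LENGTH = (1, 0)        # m
TIME = (0, 1)          # s
ACCEL = (1, -2)        # m/s^2
HALF = (0, 0)          # the dimensionless 1/2

# s = (1/2) * a * t^2  ->  dimensions of a * t * t; the 1/2 contributes nothing
print(mul(HALF, ACCEL, TIME, TIME))   # (1, 0), i.e. metres
print(LENGTH)                         # (1, 0), same dimension on both sides
```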
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095690250396729, "perplexity": 2160.535639355068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594675.66/warc/CC-MAIN-20180722233159-20180723013159-00463.warc.gz"}
http://quant.stackexchange.com/questions/10882/calculate-interest-rate-swap-curve-from-eurodollar-futures-price
Calculate interest rate swap curve from Eurodollar futures price So I was reading Robert McDonald's "Derivatives Markets" and it says Eurodollar futures price can be used to obtain a strip of forward interest rates. We can then use this to obtain the implied forward LIBOR term structure and build the interest rate swap curve. The book also provides a concrete example to illustrate its point but somehow I cannot seem to understand it. There was no assumption about the terms of the swap, so I was kinda confused. Is it correct to say that the swap rates calculated in the table are based on an interest rate swap that starts in June with quarterly payments? Moreover, can this method be extended to determine the swap rate on a deferred interest rate swap, i.e., one that instead starts on Sep/Dec with payments being made semi-annually/annually? Thank you! -
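While I can't reproduce the book's table, here is a minimal sketch of the bookkeeping it describes, under simplifying assumptions (quarterly accrual, forward rate taken as 100 minus the futures price, no convexity or day-count adjustments). The futures prices below are made-up placeholders, not McDonald's numbers; the last line is the par rate of a quarterly-pay swap starting now.

```python
# Sketch (illustrative, not the book's example): bootstrap a par swap rate from a
# strip of quarterly forward rates implied by Eurodollar futures prices.
futures_prices = [97.50, 97.35, 97.20, 97.00]        # made-up quarterly contracts
forwards = [(100.0 - p) / 100.0 for p in futures_prices]

delta = 0.25                                         # quarterly accrual fraction
discount_factors = []
df = 1.0
for f in forwards:
    df /= 1.0 + f * delta                            # roll the discount factor forward
    discount_factors.append(df)

# Fixed rate that makes a quarterly-pay swap over this horizon worth zero today:
swap_rate = (1.0 - discount_factors[-1]) / (delta * sum(discount_factors))
print(swap_rate)
```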
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990299701690674, "perplexity": 706.7610618994066}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131292683.3/warc/CC-MAIN-20150323172132-00123-ip-10-168-14-71.ec2.internal.warc.gz"}
https://pure.ewha.ac.kr/en/publications/a-gravitational-wave-standard-siren-measurement-of-the-hubble-con
# A gravitational-wave standard siren measurement of the Hubble constant

LIGO Scientific Collaboration, Virgo Collaboration, 1M2H Collaboration, Dark Energy Camera GW-EM Collaboration, DES Collaboration, DLT40 Collaboration, Las Cumbres Observatory Collaboration, VINROUGE Collaboration, MASTER Collaboration

Research output: Contribution to journal › Article › peer-review. 567 Scopus citations

## Abstract

On 17 August 2017, the Advanced LIGO [1] and Virgo [2] detectors observed the gravitational-wave event GW170817, a strong signal from the merger of a binary neutron-star system [3]. Less than two seconds after the merger, a γ-ray burst (GRB 170817A) was detected within a region of the sky consistent with the LIGO-Virgo-derived location of the gravitational-wave source [4-6]. This sky region was subsequently observed by optical astronomy facilities [7], resulting in the identification [8-13] of an optical transient signal within about ten arcseconds of the galaxy NGC 4993. This detection of GW170817 in both gravitational waves and electromagnetic waves represents the first 'multi-messenger' astronomical observation. Such observations enable GW170817 to be used as a 'standard siren' [14-18] (meaning that the absolute distance to the source can be determined directly from the gravitational-wave measurements) to measure the Hubble constant. This quantity represents the local expansion rate of the Universe, sets the overall scale of the Universe and is of fundamental importance to cosmology. Here we report a measurement of the Hubble constant that combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using the electromagnetic data. In contrast to previous measurements, ours does not require the use of a cosmic 'distance ladder' [19]: the gravitational-wave analysis can be used to estimate the luminosity distance out to cosmological scales directly, without the use of intermediate astronomical distance measurements. We determine the Hubble constant to be about 70 kilometres per second per megaparsec. This value is consistent with existing measurements [20,21], while being completely independent of them. Additional standard siren measurements from future gravitational-wave sources will enable the Hubble constant to be constrained to high precision.

Original language: English. Pages: 85-98 (14 pages). Journal: Nature, volume 551, issue 7678. https://doi.org/10.1038/nature24471. Published: 2 Nov 2017.
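The core of the standard-siren idea is Hubble's law, $H_0 = v/d$, with the distance taken from the gravitational waveform and the recession velocity from the host galaxy's redshift. A back-of-the-envelope sketch with round, assumed numbers (not the paper's posterior values):

```python
# Back-of-the-envelope sketch (illustrative numbers, not the paper's posteriors):
# Hubble's law H0 = v / d, with the distance from the gravitational-wave signal
# and the recession velocity from the host galaxy's redshift.
distance_mpc = 43.0          # assumed luminosity distance to the host, in Mpc
velocity_km_s = 3000.0       # assumed Hubble-flow recession velocity, in km/s

H0 = velocity_km_s / distance_mpc
print(H0)   # ~70 km/s/Mpc, the order of magnitude quoted in the abstract
```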
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.933076024055481, "perplexity": 2172.4678668825436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00697.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-5-linear-functions-5-8-graphing-absolute-value-functions-standardized-test-prep-page-346/52
## Algebra 1

$f(-3) = 27$

$f(x) = x^2-4x+6$, let $x=-3$

$f(-3) = (-3)^2-(4\cdot(-3))+6$

$f(-3) = 9-(-12)+6$

$f(-3) = 9+12+6$

$f(-3) = 27$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999998927116394, "perplexity": 869.3438903757996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00322.warc.gz"}
https://stats.stackexchange.com/questions/117105/extreme-learning-machine-whats-it-all-about/137312
# Extreme learning machine: what's it all about?

I've been thinking about, implementing and using the Extreme Learning Machine (ELM) paradigm for more than a year now, and the longer I do, the more I doubt that it is really a good thing. My opinion, however, seems to be in contrast with the scientific community where -- when using citations and new publications as a measure -- it seems to be a hot topic.

The ELM was introduced by Huang et al. around 2003. The underlying idea is rather simple: start with a 2-layer artificial neural network and randomly assign the coefficients in the first layer. This way, one transforms the non-linear optimization problem which is usually handled via backpropagation into a simple linear regression problem. In more detail, for $\mathbf x \in \mathbb R^D$, the model is $$f(\mathbf x) = \sum_{i=1}^{N_\text{hidden}} w_i \, \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ Now, only the $w_i$ are adjusted (in order to minimize squared-error loss), whereas the $v_{ik}$'s are all chosen randomly. As compensation for the loss in degrees of freedom, the usual suggestion is to use a rather large number of hidden nodes (i.e. free parameters $w_i$).

From another perspective (not the one usually promoted in the literature, which comes from the neural network side), the whole procedure is simply linear regression, but one where you choose your basis functions $\phi$ randomly, for example $$\phi_i(\mathbf x) = \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ (Many other choices besides the sigmoid are possible for the random functions. For instance, the same principle has also been applied using radial basis functions.) From this viewpoint, the whole method becomes almost too simplistic, and this is also the point where I start to doubt that the method is really a good one (... whereas its scientific marketing certainly is). So, here are my questions:

• The idea of rastering the input space using random basis functions is, in my opinion, good for low dimensions. In high dimensions, I think it is just not possible to find a good choice using random selection with a reasonable number of basis functions. Therefore, does the ELM degrade in high dimensions (due to the curse of dimensionality)?
• Do you know of experimental results supporting/contradicting this opinion? In the linked paper there is only one 27-dimensional regression data set (PYRIM) where the method performs similarly to SVMs (whereas I would rather like to see a comparison to a backpropagation ANN).

Your intuition about the use of ELM for high dimensional problems is correct; I have some results on this, which I am preparing for publication. For many practical problems, the data are not very non-linear and the ELM does fairly well, but there will always be datasets where the curse of dimensionality means that the chance of finding a good basis function with curvature just where you need it becomes rather small, even with many basis vectors. I personally would use something like a least-squares support vector machine (or a radial basis function network) and try and choose the basis vectors from those in the training set in a greedy manner (see e.g. my paper, but there were other/better approaches that were published at around the same time, e.g. in the very good book by Scholkopf and Smola on "Learning with Kernels").
I think it is better to compute an approximate solution to the exact problem, rather than an exact solution to an approximate problem, and kernel machines have a better theoretical underpinning (for a fixed kernel ;o).

• +1. I have never heard about ELM before, but from the description in the OP it sounds a bit like a liquid state machine (LSM): random network connectivity and optimizing only the readout weights. However, in LSM the random "reservoir" is recurrent, whereas in ELM it is feedforward. Is that indeed the similarity and the difference? – amoeba Feb 11 '15 at 22:13
• Thank you for the good answer, please update the answer when your paper has been published. Regarding kernels: of course there's also a "kernel" version of ELM. Just replace the sigmoid above by some (not-necessarily positive-definite) kernel $k(\mathbf x,\mathbf x_i)$ and pick a lot of $\mathbf x_i$'s randomly. Same "trick" here as in the original ELM, same problem. Those methods you mentioned for choosing the centers are of direct importance here as well (even if the goal functions in ELM and SVM are different) ... this probably turns it from a "totally blind" to a "half blind" method. – davidhigh Feb 14 '15 at 1:54
• @amoeba: I didn't know the liquid state machine, but from what you say it sounds indeed very similar ... and of course, technically more general. Still, the recurrency just adds a more complex form of randomness to the problem, which in my opinion doesn't cure the curse-of-dimensionality problems (... but ok, who does this?). Are those recurrency weights chosen with some care or also completely random? – davidhigh Feb 14 '15 at 2:04
• @davidhigh for an RBF kernel, the "representer theorems" show that there is no better solution than to centre a basis function on each training sample (making some reasonable assumptions about the regularised cost function). This is one of the nice features of kernel methods (and splines), so there is no need to spread them randomly. By the way, constructing a linear model on the output of randomly selected basis functions has a very long history, my favourite is the single layer look up perceptron ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=51949&tag=1 but I might be biased! – Dikran Marsupial Feb 14 '15 at 13:03
• @DikranMarsupial did you publish or do you have anything pre-publication available? – Tom Hale Nov 2 '17 at 5:05

The ELM "learns" from the data by analytically solving for the output weights, so the more data that is fed into the network, the better the results. However, this also requires a larger number of hidden nodes. If the ELM is trained down to little or no training error, then when given a new set of inputs it is unable to produce the correct output. The main advantage of ELM over a traditional neural net trained with backpropagation is its fast training time. Most of the computation time is spent on solving for the output-layer weights, as mentioned in Huang's paper.
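To make the setup in the question concrete, here is a minimal numpy sketch of an ELM fit as described there (random first-layer weights, least squares for the output weights). It is my own illustration, not code from either answer, and it ignores regularisation and model selection.

```python
import numpy as np

# Minimal ELM sketch (my own illustration of the model in the question):
# random first-layer weights v, sigmoid hidden layer, least-squares output weights w.
rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=200, scale=1.0):
    D = X.shape[1]
    V = rng.normal(0.0, scale, size=(D, n_hidden))   # random v_ik, never trained
    b = rng.normal(0.0, scale, size=n_hidden)        # random biases v_i0
    H = 1.0 / (1.0 + np.exp(-(X @ V + b)))           # hidden-layer activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)        # only w is fitted
    return V, b, w

def elm_predict(model, X):
    V, b, w = model
    H = 1.0 / (1.0 + np.exp(-(X @ V + b)))
    return H @ w

# Toy 1-d regression problem
X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=400)
model = elm_fit(X, y)
print(np.mean((elm_predict(model, X) - y) ** 2))   # training MSE
```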
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284510970115662, "perplexity": 597.5285014119653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00043.warc.gz"}
https://helda.helsinki.fi/browse?value=numerical%20simulations&type=subject
# Browsing by Subject "numerical simulations" Sort by: Order: Results: Now showing items 1-3 of 3 • (Helsingin yliopisto, 2021) Phase transitions in the early Universe and in condensed matter physics are active fields of research. During these transitions, objects such as topological solitons and defects are produced by the breaking of symmetry. Studying such objects more thoroughly could shed light on some of the modern problems in cosmology such as baryogenesis and explain many aspects in materials research. One example of such topological solitons are the (1+1) dimensional kinks and their respective higher dimensional domain walls. The dynamics of kink collisions are complicated and very sensitive to initial conditions. Making accurate predictions within such a system has proven to be difficult, and research has been conducted since the 70s. Especially difficult is predicting the location of resonance windows and giving a proper theoretical explanation for such a structure. Deeper understanding of these objects is interesting in its own right but can also bring insight in predicting their possibly generated cosmological signatures. In this thesis we have summarized the common field theoretic tools and methods for the analytic treatment of kinks. Homotopy theory and its applications are also covered in the context of classifying topological solitons and defects. We present our numerical simulation scheme and results on kink-antikink and kink-impurity collisions in the $\phi^4$ model. Kink-antikink pair production from a wobbling kink is also studied, in which case we found that the separation velocity of the produced kink-antikink pair is directly correlated with the excitation amplitude of the wobbling kink. Direct annihilation of the produced pair was also observed. We modify the $\phi^4$ model by adding a small linear term $\delta \phi^3$, which modifies the kinks into accelerating bubble walls. The collision dynamics and pair production of these objects are explored with the same simulation methods. We observe multiple new effects in kink-antikink collisions, such as potentially perpetual bouncing and faster bion formation in comparison to the $\phi^4$ model. We also showed that the $\delta$ term defines the preferred vacuum by inevitably annihilating any kink-antikink pair. During pair production we noticed a momentum transfer between the produced bion and the original kink and that direct annihilation seems unlikely in such processes. For wobbling kink - impurity collisions we found an asymmetric spectral wall. Future research prospects and potential expansions for our analysis are also discussed. • (2020) Particle precipitation is a central aspect of space weather, as it strongly couples the magnetosphere and the ionosphere and can be responsible for radio signal disruption at high latitudes. We present the first hybrid-Vlasov simulations of proton precipitation in the polar cusps. We use two runs from the Vlasiator model to compare cusp proton precipitation fluxes during southward and northward interplanetary magnetic field (IMF) driving. The simulations reproduce well-known features of cusp precipitation, such as a reverse dispersion of precipitating proton energies, with proton energies increasing with increasing geomagnetic latitude under northward IMF driving, and a nonreversed dispersion under southward IMF driving. The cusp is also found more polewards in the northward IMF simulation than in the southward IMF simulation. 
In addition, we find that the bursty precipitation during southward IMF driving is associated with the transit of flux transfer events in the vicinity of the cusp. In the northward IMF simulation, dual lobe reconnection takes place. As a consequence, in addition to the high-latitude precipitation spot associated with the lobe reconnection from the same hemisphere, we observe lower-latitude precipitating protons which originate from the opposite hemisphere's lobe reconnection site. The proton velocity distribution functions along the newly closed dayside magnetic field lines exhibit multiple proton beams travelling parallel and antiparallel to the magnetic field direction, which is consistent with previously reported observations with the Cluster spacecraft. In both runs, clear electromagnetic ion cyclotron waves are generated in the cusps and might further increase the calculated precipitating fluxes by scattering protons to the loss cone in the low-altitude cusp. Global kinetic simulations can improve the understanding of space weather by providing a detailed physical description of the entire near-Earth space and its internal couplings. • (Finnish Meteorological Institute, 2017) Finnish Meteorological Institute Contributions 132 This thesis investigates interactions between solar wind and the magnetosphere of the Earth using two global magnetosphericsimulation models, GUMICS-4 and Vlasiator, which are both developed in Finland. The main topic of the thesis is magnetic reconnection at the dayside magnetopause, its drivers and global effects. Magnetosheath mirror mode waves and their evolution, identification and impacts on the local reconnection rates at the magnetopause are also discussed. This thesis consists of four peer-reviewed papers and an introductory part. GUMICS-4 is a magnetohydrodynamic model solving plasma as a single magnetized fluid. Vlasiator is the world’s first global magnetospheric hybrid-Vlasov simulation model, which solves the motion of ions by describing them as velocity distribution functions, whereas electrons are described as a charge neutralizing fluid. Vlasiator is able to solve ion scale physics in a global scale simulation. However, it is computationally heavy and the global simulations are currently describing Earth’s magnetosphere only in two spatial dimensions, whereas the velocity space is three dimensional. This thesis shows that magnetic reconnection at the dayside magnetopause is controlled by several factors. The impact of dipole tilt angle and sunward component of the interplanetary magnetic field on magnetopause reconnection is investigated with a set of GUMICS-4 simulations. Using Vlasiator simulations, this thesis shows that local reconnection rate is highly variable even during steady solar wind and correlates well with an analytical model for 2D asymmetric reconnection. It is also shown that the local reconnection rate is affected by local variations in the magnetosheath plasma. Fluctuations in the magnetosheath parameters near X-lines are partly generated by mirror mode waves that are observed to grow in the quasi-perpendicular magnetosheath. These results show that that the local reconnection rate at the X-lines is affected not only by the fluctuations in the inflow parameters but also by reconnection at nearby X-lines. Outflow from stronger X-lines pushes against the weaker ones and might ultimately suppress reconnection in the weaker X-lines. Magnetic islands, 2D representations of FTEs, form between X-lines in the Vlasiator simulations. 
FTEs propagate along the dayside magnetopause, driving bow waves in the magnetosheath. The bow waves propagate upstream all the way to the bow shock, causing bulges in the shock from which solar wind particles can reflect back into the solar wind, creating local foreshocks. The overall conclusion of this thesis is that ion-scale kinetic physics is important for accurately modelling the solar wind – magnetosheath – magnetopause interactions. Vlasiator results show a strong scale coupling between ion and global scales: global-scale phenomena have an impact on the local physics, and the local phenomena may have unexpected impacts on the global dynamics of the magnetosphere. Neglecting the global scales in local ion-scale simulations, and vice versa, may therefore lead to an incomplete description of the solar wind – magnetosphere interactions.
https://www.bbc.co.uk/bitesize/guides/z2bvfcw/revision/1
Charge and static

All matter has electric charge, in the same way that all matter has mass. Atoms have no overall charge – they are neutral. This is because atoms contain equal numbers of protons and electrons. Electrons carry a negative electric charge and protons carry a positive electric charge.

Static

Electrons can be made to move from one object to another. However, protons do not move because they are tightly bound in the nuclei of atoms. For example, when a plastic rod is rubbed with a duster, electrons are transferred from one material to the other. The material that gains electrons becomes negatively charged. The material that loses electrons becomes positively charged. The duster picks up electrons from the rod. This leaves the rod with a positive overall charge and the duster with a negative overall charge. Static charge occurs when electrons build up on an object. Static charge:

• can only build up on objects which are insulators, eg plastic or wood
• cannot build up on objects that act as conductors, eg metals

Conductors allow the electrons to flow away, forming an electric current. When a static charge on an object is discharged, an electric current flows through the air. This can cause sparks. Lightning is an example of a large amount of static charge being discharged.
http://mathhelpforum.com/calculus/196009-notion-limit-print.html
# Notion of limit

• March 15th 2012, 12:26 PM DarkFalz
Notion of limit
Hello, for many years I have been doing derivatives, and the definition lim (h->0) [f(x+h) - f(x)] / h always seemed quite obvious to me, but then I started to think about it. When we do derivatives through the definition, we usually arrive at something like (h(x+h))/h, and we remove the h in the numerator and denominator, leaving x+h; then we replace h by 0 and get x. But the question is: aren't we committing a violation when we replace h by 0? If we consider the previous formula, replacing h by 0 would cause a 0/0 indetermination, which arises from the fact that the cancellation rule only applies if h != 0. So... what is that last result we get? The result of ignoring the fact that h cannot be 0 but at the same time assuming it is 0?

• March 15th 2012, 12:38 PM Kanwar245
Re: Notion of limit
Since derivatives are continuous already, we can put h = 0. :)

• March 15th 2012, 12:39 PM DarkFalz
Re: Notion of limit
Then why did we use a rule that assumed that h won't be zero?

• March 15th 2012, 01:16 PM Plato
Re: Notion of limit
Quote: Originally Posted by Kanwar245 "Since derivatives are continuous already, we can put h = 0. :)" What makes you say that? That is false. If a function has a derivative at $x_0$ then the function is continuous at $x_0.$ We don't know anything about the continuity of the derivative at $x_0$.
Quote: Originally Posted by DarkFalz "Then why did we use a rule that assumed that h won't be zero?" When evaluating ${\displaystyle\lim _{x \to {x_0}}}f(x)$ we never just let $x=x_0$. That is a fundamental principle of limits. We must look for continuity. When considering the difference quotient $\dfrac{f(x+h)-f(x)}{h}$ we hope that the $h$ divides out. That can only be if $h\ne 0.$

• March 15th 2012, 01:29 PM DarkFalz
Re: Notion of limit
But after that we place h as 0, so is it valid to place h as 0 or not?

• March 15th 2012, 01:47 PM Plato
Re: Notion of limit
Quote: Originally Posted by DarkFalz "But after that we place h as 0, so is it valid to place h as 0 or not?" If the h divides out, that means you are not dividing by h. What we are really saying is that h is approximately zero, but not zero. That is a very technical distinction but an important one.

• March 15th 2012, 02:13 PM DarkFalz
Re: Notion of limit
If it is not zero, why do we replace h by 0 in the end?

• March 15th 2012, 02:44 PM Plato
Re: Notion of limit
Quote: Originally Posted by DarkFalz "If it is not zero, why do we replace h by 0 in the end?" WE DON'T. We simply see what happens for values of h nearly zero. Lazy teachers allow replacement. But it really is not correct. If the difference quotient reduces to a continuous function of h, then it works; it works, but it is technically not correct. Because a derivative is a limit, we must follow the formal definition of limit. In the definition of ${\displaystyle\lim _{x \to {x_0}}}f(x)$ it is required that $x\ne x_0.$

• March 15th 2012, 04:26 PM DarkFalz
Re: Notion of limit
If reducing to a continuous function and replacing works, what is the other way to find the limit?
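A worked example may make the distinction concrete (an addition for clarity, not part of the original thread). Take $f(x)=x^2$. For $h \ne 0$, $\dfrac{f(x+h)-f(x)}{h}=\dfrac{2xh+h^2}{h}=2x+h$; the cancellation uses only $h \ne 0$. The final step is not substituting $h=0$ into the original quotient but evaluating $\lim_{h\to 0}(2x+h)$: for any $\varepsilon>0$, taking $0<|h|<\varepsilon$ gives $|(2x+h)-2x|=|h|<\varepsilon$, so the limit is $2x$. No division by zero ever occurs.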
http://en.wikipedia.org/wiki/Cable_theory
Cable theory Figure. 1: Cable theory's simplified view of a neuronal fiber Classical cable theory uses mathematical models to calculate the electric current (and accompanying voltage) along passive[a] neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances $c_m$ and resistances $r_m$ combined in parallel (see Fig. 1). The capacitance of a neuronal fiber comes about because electrostatic forces are acting through the very thin lipid bilayer (see Figure 2). The resistance in series along the fiber $r_l$ is due to the axoplasm's significant resistance to movement of electric charge. Figure. 2: Fiber capacitance History Cable theory in computational neuroscience has roots leading back to the 1850s, when Professor William Thomson (later known as Lord Kelvin) began developing mathematical models of signal decay in submarine (underwater) telegraphic cables. The models resembled the partial differential equations used by Fourier to describe heat conduction in a wire. The 1870s saw the first attempts by Hermann to model neuronal electrotonic potentials also by focusing on analogies with heat conduction. However, it was Hoorweg who first discovered the analogies with Kelvin's undersea cables in 1898 and then Hermann and Cremer who independently developed the cable theory for neuronal fibers in the early 20th century. Further mathematical theories of nerve fiber conduction based on cable theory were developed by Cole and Hodgkin (1920s–1930s), Offner et al. (1940), and Rushton (1951). Experimental evidence for the importance of cable theory in modeling the behavior of axons began surfacing in the 1930s from work done by Cole, Curtis, Hodgkin, Sir Bernard Katz, Rushton, Tasaki and others. Two key papers from this era are those of Davis and Lorente de Nó (1947) and Hodgkin and Rushton (1946). The 1950s saw improvements in techniques for measuring the electric activity of individual neurons. Thus cable theory became important for analyzing data collected from intracellular microelectrode recordings and for analyzing the electrical properties of neuronal dendrites. Scientists like Coombs, Eccles, Fatt, Frank, Fuortes and others now relied heavily on cable theory to obtain functional insights of neurons and for guiding them in the design of new experiments. Later, cable theory with its mathematical derivatives allowed ever more sophisticated neuron models to be explored by workers such as Jack, Rall, Redman, Rinzel, Idan Segev, Tuckwell, Bell, and Iannella. Deriving the cable equation Note, various conventions of rm exist. Here rm and cm, as introduced above, are measured per membrane-length unit (per meter (m)). Thus rm is measured in ohm·meters (Ω·m) and cm in farads per meter (F/m). This is in contrast to Rm (in Ω·m²) and Cm (in F/m²), which represent the specific resistance and capacitance respectively of one unit area of membrane (in m2). Thus, if the radius, a, of the axon is known,[b] then its circumference is 2πa, and its rm, and its cm values can be calculated as: $r_m = \frac{R_m}{2 \pi a\ }$ (1) $c_m = C_m 2 \pi a\$ (2) These relationships make sense intuitively, because the greater the circumference of the axon, the greater the area for charge to escape through its membrane, and therefore the lower the membrane resistance (dividing Rm by 2πa); and the more membrane available to store charge (multiplying Cm by 2πa). 
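As a quick numerical illustration of equations (1) and (2), here is a short sketch (added here, not part of the article; the specific values of R_m, C_m and the radius a are assumed, order-of-magnitude numbers rather than measurements):

```python
import math

R_m = 2.0     # assumed specific membrane resistance, ohm * m^2
C_m = 1e-2    # assumed specific membrane capacitance, F / m^2 (about 1 uF/cm^2)
a = 2e-6      # assumed axon radius, m

circumference = 2.0 * math.pi * a
r_m = R_m / circumference     # equation (1): membrane resistance per unit length, ohm * m
c_m = C_m * circumference     # equation (2): membrane capacitance per unit length, F / m

print(f"r_m = {r_m:.3e} ohm*m")
print(f"c_m = {c_m:.3e} F/m")
```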
Similarly, the specific resistance, Rl, of the axoplasm allows one to calculate the longitudinal intracellular resistance per unit length, rl, (in Ω·m−1) by the equation: $r_l = \frac{R_l}{\pi a^2\ }$ (3) The greater the cross sectional area of the axon, πa², the greater the number of paths for the charge to flow through its axoplasm, and the lower the axoplasmic resistance. Several important avenues of extending classical cable theory have recently seen the introduction of endogenous structures in order to analyze the effects of protein polarization within dendrites and different synaptic input distributions over the dendritic surface of a neuron. To better understand how the cable equation is derived, first simplify the theoretical neuron even further and pretend it has a perfectly sealed membrane (rm=∞) with no loss of current to the outside, and no capacitance (cm = 0). A current injected into the fiber [c] at position x = 0 would move along the inside of the fiber unchanged. Moving away from the point of injection and by using Ohm's law (V = IR) we can calculate the voltage change as: $\Delta V = -i_l r_l \Delta x\$ (4) Letting Δx go towards zero and having infinitely small increments of x, one can write (4) as: $\frac{\partial V}{\partial x} = -i_l r_l\$ (5) or $\frac{1}{r_l} \frac{\partial V}{\partial x} = -i_l\$ (6) Bringing rm back into the picture is like making holes in a garden hose. The more holes, the faster the water will escape from the hose, and the less water will travel all the way from the beginning of the hose to the end. Similarly, in an axon, some of the current traveling longitudinally through the axoplasm will escape through the membrane. If im is the current escaping through the membrane per length unit, m, then the total current escaping along y units must be y·im. Thus, the change of current in the axoplasm, Δil, at distance, Δx, from position x=0 can be written as: $\Delta i_l = -i_m \Delta x\$ (7) or, using continuous, infinitesimally small increments: $\frac{\partial i_l}{\partial x}=-i_m\$ (8) $i_m$ can be expressed with yet another formula, by including the capacitance. The capacitance will cause a flow of charge (a current) towards the membrane on the side of the cytoplasm. This current is usually referred to as displacement current (here denoted $i_c$.) The flow will only take place as long as the membrane's storage capacity has not been reached. $i_c$ can then be expressed as: $i_c = c_m \frac{\partial V}{\partial t}\$ (9) where $c_m$ is the membrane's capacitance and ${\partial V}/{\partial t}$ is the change in voltage over time. The current that passes the membrane ($i_r$) can be expressed as: $i_r=\frac{V}{r_m}$ (10) and because $i_m = i_r + i_c$ the following equation for $i_m$ can be derived if no additional current is added from an electrode: $\frac{\partial i_l}{\partial x}=-i_m=-\left( \frac{V}{r_m}+c_m \frac{\partial V}{\partial t}\right)$ (11) where ${\partial i_l}/{\partial x}$ represents the change per unit length of the longitudinal current. Combining equations (6) and (11) gives a first version of a cable equation: $\frac{1}{r_l} \frac{\partial ^2 V}{\partial x^2}=c_m \frac{\partial V}{\partial t}+\frac{V}{r_m}$ (12) which is a second-order partial differential equation (PDE). By a simple rearrangement of equation (12) (see later) it is possible to make two important terms appear, namely the length constant (sometimes referred to as the space constant) denoted $\lambda$ and the time constant denoted $\tau$. 
The following sections focus on these terms.

Length constant

Main article: Length constant

The length constant, $\lambda$ (lambda), is a parameter that indicates how far a stationary current will influence the voltage along the cable. The larger the value of $\lambda$, the farther the charge will flow. The length constant can be expressed as:

$\lambda = \sqrt \frac{r_m}{r_l}$ (13)

The larger the membrane resistance, rm, the greater the value of $\lambda$, and the more current will remain inside the axoplasm to travel longitudinally through the axon. The higher the axoplasmic resistance, $r_l$, the smaller the value of $\lambda$, the harder it will be for current to travel through the axoplasm, and the shorter the distance the current will be able to travel. It is possible to solve equation (12) and arrive at the following equation (which is valid in steady-state conditions, i.e. when time approaches infinity):

$V_x = V_0 e^{-\frac{x}{\lambda}}$ (14)

where $V_0$ is the depolarization at $x=0$ (the point of current injection), e is the exponential constant (approximate value 2.71828) and $V_x$ is the voltage at a given distance x from x=0. When $x=\lambda$ then

$\frac{x}{\lambda}=1$ (15)

and

$V_x = V_0 e^{-1}$ (16)

which means that when we measure $V$ at distance $\lambda$ from $x=0$ we get

$V_\lambda = \frac{V_0}{e} = 0.368 V_0$ (17)

Thus $V_\lambda$ is always 36.8 percent of $V_0$.

Time constant

Main article: Time constant

Neuroscientists are often interested in knowing how fast the membrane potential, $V_m$, of an axon changes in response to changes in the current injected into the axoplasm. The time constant, $\tau$, is an index that provides information about that value. $\tau$ can be calculated as:

$\tau = r_m c_m$ (18)

The larger the membrane capacitance, $c_m$, the more current it takes to charge and discharge a patch of membrane and the longer this process will take. Thus membrane potential (voltage across the membrane) lags behind current injections. Response times vary from 1–2 milliseconds in neurons that are processing information that needs high temporal precision to 100 milliseconds or longer. A typical response time is around 20 milliseconds.

Generic form and mathematical structure

If one multiplies equation (12) by $r_m$ on both sides of the equal sign we get:

$\frac{r_m}{r_l} \frac{\partial ^2 V}{\partial x^2}=c_m r_m \frac{\partial V}{\partial t}+ V$ (19)

and recognize $\lambda^2 = {r_m}/{r_l}$ on the left side and $\tau = c_m r_m$ on the right side. The cable equation can now be written in its perhaps best-known form:

$\lambda^2 \frac{\partial ^2 V}{\partial x^2}=\tau \frac{\partial V}{\partial t}+ V$ (20)

This is a 1D heat equation or diffusion equation, for which many solution methods, such as Green's functions and Fourier methods, have been developed. It is also a special case of the Telegrapher's equation.
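The relations above are easy to check numerically. Below is a sketch (added here; all parameter values are assumptions chosen for illustration, consistent with the per-unit-length example earlier, not data from the article). It computes $\lambda$ and $\tau$ and then time-marches equation (20) with the voltage clamped to $V_0$ at $x=0$; the steady-state profile approaches $V_0 e^{-x/\lambda}$ from equation (14), so the value at $x=\lambda$ comes out near $0.368\,V_0$.

```python
import numpy as np

# Per-unit-length parameters derived from assumed specific values
# (R_m = 2 ohm*m^2, C_m = 1e-2 F/m^2, R_l = 1 ohm*m, radius a = 2e-6 m)
r_m = 1.59e5    # ohm * m
c_m = 1.26e-7   # F / m
r_l = 7.96e10   # ohm / m

lam = np.sqrt(r_m / r_l)     # length constant, eq. (13)
tau = r_m * c_m              # time constant,  eq. (18)
print(f"lambda ~ {lam*1e3:.2f} mm, tau ~ {tau*1e3:.1f} ms")

# Time-march eq. (20), lambda^2 V_xx = tau V_t + V, with V clamped to V0 at x = 0
V0, n = 1.0, 201
x = np.linspace(0.0, 5 * lam, n)     # five length constants of cable
h = x[1] - x[0]
V = np.zeros(n)
dt = 0.2 * tau * h**2 / lam**2       # well inside the explicit stability limit

for _ in range(70_000):              # roughly 8-9 time constants: close to steady state
    V[0], V[-1] = V0, 0.0            # clamped near end, grounded far end
    lap = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / h**2
    V[1:-1] += dt * (lam**2 * lap - V[1:-1]) / tau

i = np.argmin(np.abs(x - lam))
print(f"V(x = lambda) ~ {V[i]:.3f} * V0   (eq. (14)/(17) predicts ~0.368 * V0)")
```

With these assumed values the time constant comes out near the 20-millisecond figure quoted above, and the simulated profile reproduces the 36.8 percent value at one length constant.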
http://groupoid.space/mltt/types/pi/
# pi

The pi package contains basic theorems about Pi types. The Pi type is built into the core of any dependent type checker. Type checkers with only the Pi type are called Pure Type Systems. See the OM language for our Erlang implementation.

Pi (A: U) (B: A -> U): U = (x: A) -> B x

Pi is the dependent function type, the generalization of functions. As a function it can serve a wide range of mathematical constructions, objects, types, or spaces. The known domain and codomain spaces could be: sets, functions, polynomial functors, $\infty$-groupoids, topological $\infty$-groupoids, CW-complexes, categories, languages, etc. We give here the immediate isomorphism of Pi types with fibrations, or fiber bundles. The adjunction between Pi and Sigma is not the only adjunction that can be presented in a type system. Axiomatic cohesions could contain a set of adjoint pairs as core type-checker operations. Geometrically, the Pi type is a space of sections, while the dependent codomain is a space of fibrations. Lambda functions are sections or points in these spaces, while the function result is a fibration. The Pi type also represents the cartesian family of sets, generalizing the cartesian product of sets.

# Definition

Definition (Section). A section of a morphism $f: A \rightarrow B$ in some category is a morphism $g: B \rightarrow A$ such that $f \circ g: B \xrightarrow{g} A \xrightarrow{f} B$ equals the identity morphism on B.

Definition (Fiber). The fiber of the map $p: E \rightarrow B$ at a point $y: B$ is the set of all points $x: E$ such that $p(x)=y$.

Definition (Fiber Bundle). The fiber bundle $F \rightarrow E \xrightarrow{p} B$ on a total space $E$ with fiber layer $F$ and base $B$ is a structure $(F,E,p,B)$ where $p: E \rightarrow B$ is a surjective map with the following property: for any point $y: B$ there exists a neighborhood $U_b$ for which there is a homeomorphism $f: p^{-1}(U_b) \rightarrow U_b \times F$ making the following diagram commute.

Definition (Cartesian Product of a Family over B). This is the set $F$ of sections of the bundle, with elimination map $app : F \times B \rightarrow E$ such that $pr_1$ is a product projection; thus $pr_1$ and $app$ are morphisms of the slice category $Set_{/B}$. The universal mapping property of $F$: for every $A$ and morphism $A \times B \rightarrow E$ in $Set_{/B}$ there exists a unique map $A \rightarrow F$ such that everything commutes. So a category with all dependent products is necessarily a category with all pullbacks.

Definition (Trivial Fiber Bundle). When the total space $E$ is the cartesian product $\Sigma(B,F)$ and $p = pr_1$, such a bundle is called trivial: $(F,\Sigma(B,F),pr_1,B)$.

Definition (Dependent Product). The dependent product along a morphism $g: B \rightarrow A$ in a category $C$ is the right adjoint $\Pi_g : C_{/B} \rightarrow C_{/A}$ of the base change functor.

Definition (Space of Sections). Let $\mathbf{H}$ be an $(\infty,1)$-topos, and let $E \rightarrow B : \mathbf{H}_{/B}$ be a bundle in $\mathbf{H}$, an object in the slice topos. Then the space of sections $\Gamma_\Sigma(E)$ of this bundle is the Dependent Product:

## Introduction

The lambda constructor defines a new lambda closure that can be saved or passed around in a context.

lambda (A B: U) (b: B): A -> B = \ (x: A) -> b
lam (A: U) (B: A -> U) (a: A) (b: B a) : A -> B a = \ (x: A) -> b

# Elimination

## Application

Application reduces the term by using recursive substitution.

apply (A B: U) (f: A -> B) (x: A): B = f(x)
app (A: U) (B: A -> U) (a: A) (f: A -> B a): B a = f a

## Composition

Composition is application of terms with the appropriate signatures.
ot (A B C: U) : U = (B -> C) -> (A -> B) -> (A -> C)
o (A B C: U) : ot A B C = \ (f: B -> C) (g: A -> B) (x: A) -> f (g x)
O (F G: U -> U) (t: U): U = F (G t)

# Computation

## Beta

The beta rule shows that the composition $lam \circ app$ can be fused.

Pi_Beta (A: U) (B: A -> U) (a: A) (f: A -> B a) : Path (B a) (app A B a (lam A B a (f a))) (f a) = refl (B a) (f a)

## Eta

The eta rule shows that the composition $app \circ lam$ can be fused.

Pi_Eta (A: U) (B: A -> U) (a: A) (f: A -> B a) : Path (A -> B a) f (\(x:A) -> f x) = refl (A -> B a) f

# Theorems

Theorem (Functions Preserve Paths). For a function $f: (x:A) \rightarrow B(x)$ there is an $ap_f : Path_A(x,y) \rightarrow Path_{B(x)}(f(x),f(y))$. This is called application of $f$ to a path, or the congruence property (for the non-dependent case, the $cong$ function). This property behaves functorially, as if paths are groupoid morphisms and types are objects.

Theorem (Trivial Fiber equals Family of Sets). The inverse image (fiber) of the fiber bundle $(F,B*F,pr_1,B)$ at a point $y:B$ equals $F(y)$.

FiberPi (B: U) (F: B -> U) (y: B) : Path U (fiber (Sigma B F) B (pi1 B F) y) (F y)

Theorem (Homotopy Equivalence). If the fiber space is a set over the whole base, and there are two functions $f,g : (x:A) \rightarrow B(x)$ and two homotopies between them, then these homotopies are equal.

setPi (A: U) (B: A -> U) (h: (x: A) -> isSet (B x)) (f g: Pi A B) (p q: Path (Pi A B) f g) : Path (Path (Pi A B) f g) p q

Theorem (HomSet). If the codomain is a set then the space of sections is a set.

setFun (A B : U) (_: isSet B) : isSet (A -> B)

Theorem (Contractibility). If the domain and the codomain are contractible then the space of sections is contractible.

piIsContr (A: U) (B: A -> U) (u: isContr A) (q: (x: A) -> isContr (B x)) : isContr (Pi A B)

pathPi (A:U) (B:A->U) (f g : Pi A B) : Path U (Path (Pi A B) f g) ((x:A) -> Path (B x) (f x) (g x))
https://collaborate.princeton.edu/en/publications/rapidly-star-forming-galaxies-adjacent-to-quasars-at-redshifts-ex
# Rapidly star-forming galaxies adjacent to quasars at redshifts exceeding 6

R. Decarli, F. Walter, B. P. Venemans, E. Bañados, F. Bertoldi, C. Carilli, X. Fan, E. P. Farina, C. Mazzucchelli, D. Riechers, H. W. Rix, M. A. Strauss, R. Wang, Y. Yang

Research output: Contribution to journal › Article › peer-review. 125 Scopus citations.

## Abstract

The existence of massive (10^11 solar masses) elliptical galaxies by redshift z ≈ 4 (refs 1, 2, 3; when the Universe was 1.5 billion years old) necessitates the presence of galaxies with star-formation rates exceeding 100 solar masses per year at z > 6 (corresponding to an age of the Universe of less than 1 billion years). Surveys have discovered hundreds of galaxies at these early cosmic epochs, but their star-formation rates are more than an order of magnitude lower. The only known galaxies with very high star-formation rates at z > 6 are, with one exception, the host galaxies of quasars, but these galaxies also host accreting supermassive (more than 10^9 solar masses) black holes, which probably affect the properties of the galaxies. Here we report observations of an emission line of singly ionized carbon ([C ii] at a wavelength of 158 micrometres) in four galaxies at z > 6 that are companions of quasars, with velocity offsets of less than 600 kilometres per second and linear offsets of less than 100 kiloparsecs. The discovery of these four galaxies was serendipitous; they are close to their companion quasars and appear bright in the far-infrared. On the basis of the [C ii] measurements, we estimate star-formation rates in the companions of more than 100 solar masses per year. These sources are similar to the host galaxies of the quasars in [C ii] brightness, linewidth and implied dynamical mass, but do not show evidence for accreting supermassive black holes. Similar systems have previously been found at lower redshift. We find such close companions in four out of the twenty-five z > 6 quasars surveyed, a fraction that needs to be accounted for in simulations. If they are representative of the bright end of the [C ii] luminosity function, then they can account for the population of massive elliptical galaxies at z ≈ 4 in terms of the density of cosmic space.

Original language: English (US). Pages: 457-461. Number of pages: 5. Journal: Nature. Volume: 545. Issue: 7655. DOI: https://doi.org/10.1038/nature22358. Published: May 24, 2017.
https://projecteuclid.org/euclid.die/1427744101
Differential and Integral Equations Description of the lack of compactness in Orlicz spaces and applications Abstract In this paper, we investigate the lack of compactness of the Sobolev embedding of $H^1(\mathbb R^2)$ into the Orlicz space $L^{{\phi}_p}(\mathbb R^2)$ associated to the function $\phi_p$ defined by $\phi_p(s):={\rm{e}^{s^2}}-\sum_{k=0}^{p-1} \frac{s^{2k}}{k!}$. We also undertake the study of a nonlinear wave equation with exponential growth where the Orlicz norm $\|.\|_{L^{\phi_p}}$ plays a crucial role. This study includes issues of global existence, scattering and qualitative study. Article information Source Differential Integral Equations, Volume 28, Number 5/6 (2015), 553-580. Dates First available in Project Euclid: 30 March 2015
https://www.physicsforums.com/threads/what-kinds-of-particles-can-decay-to-lambda-hyperons.757446/
# What kinds of particles can decay to Lambda hyperons? 1. ### Chenkb 30 Such as ##\Sigma^0 \to \bar{\Lambda}\gamma\gamma##. I want to make a complete collection of all these decay modes, i.e. ##X \to \Lambda / \bar{\Lambda} + \cdots##. At least some major channels. 2,403 3. ### Chenkb 30 Ok, it's easy to find the ##\Lambda## decay mode in PDG, but not so easy to do it inversely I think. If there isn't a ready-made collection, I'll try to search one by one in PDG. Thanks all the same. 18,496 Staff Emeritus Any baryon weighing more than 1115 MeV. Any other particle weighing more than 2230 MeV. 30 Really? ### Staff: Mentor Those numbers are not exact, but yes. As long as conservation laws can be conserved, a decay is possible. Many of them are very unlikely, however. I guess you look for particles that frequently produce Lambdas? Then look for baryons with two light quarks (up/down) and one heavier quark (especially strange and charm, but also bottom). In addition, some B-mesons can decay to lambda+X. Z and W can produce lambdas as well, but those are quite rare. For very high-energetic lambdas, they might be relevant. Where/how do you want to use such a collection? 18,496 Staff Emeritus Not as much as you think. The mean number of lambdas in a Z-decay is about 0.4. 8. ### Chenkb 30 What did you mean by 0.4? ##\frac{\sum Z \to {\Lambda}/\bar{\Lambda} + X}{\sum Z \to X}## = 40% ? 18,496 Staff Emeritus No, I mean that in a sample of a million Z decays there are 400,000 Lambdas. It's a statistical statement, not event by event. 10. ### Hepth 530 If you are looking for things beside short-distance production and the Z production just look at any heavier baryonic resonance and see what can give you the most Lambdas. Your end goal is a [uds]. So you can have some ##[uds]^* \to [uds] + (f \bar{f},\pi \pi, \gamma)## Resonant decays ##[\Lambda^*]## ##[udc] \to [uds] + \left(f \bar{\nu}_f, \pi^{+}, \rho^{+}, etc\right) ## electroweak decays ##[\Lambda_c, \Sigma_c, \Xi_c]## ##[udb]-> [uds] + f \bar{f}## (FCNC, highly suppressed) ##[\Lambda_b, \Sigma_b, \Xi_b]## ### Staff: Mentor I meant Z and W are rare. Even with a branching fraction of 100% they would be a small contribution. At the LHC, the cross-sections are 100nb for the W and 30nb for the Z (ATLAS result at 7 TeV). I didn't find a proper cross-section measurement of the lambda at the LHC, but an approximate number of 100µb given here (note: those lines are "100 events"), and I think this is restricted to the LHCb acceptance - that means the total cross-section is significantly higher. For the total number of lambdas, W and Z contribute less than .1%. That number could be larger for large transverse momentum. For all other accelerators, the energy is lower, and the W/Z cross-section goes down faster than the other production modes, so there the contribution is even smaller (unless you run an electron-positron collider at the Z peak, of course). I found a master thesis with a collection of (predicted) mother particles for Lambdas at LHCb.
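A tiny sketch of the mass-threshold rule discussed above (added here; the masses are approximate PDG values in MeV, and only the kinematic threshold is checked, not the other conservation laws or actual branching fractions):

```python
M_LAMBDA = 1115.7  # MeV, approximate

def lambda_threshold_mev(is_baryon):
    # A baryon can carry its baryon number into a single Lambda;
    # a meson or boson must produce a Lambda + anti-Lambda pair.
    return M_LAMBDA if is_baryon else 2 * M_LAMBDA

# name: (approximate mass in MeV, is_baryon)
candidates = {
    "proton":   (938.3,   True),
    "Sigma0":   (1192.6,  True),
    "Lambda_c": (2286.5,  True),
    "Lambda_b": (5619.6,  True),
    "K+":       (493.7,   False),
    "J/psi":    (3096.9,  False),
    "B0":       (5279.7,  False),
    "Z0":       (91187.6, False),
}

for name, (mass, is_baryon) in candidates.items():
    allowed = mass > lambda_threshold_mev(is_baryon)
    verdict = "kinematically possible" if allowed else "kinematically impossible"
    print(f"{name:9s} m = {mass:8.1f} MeV -> Lambda in final state {verdict}")
```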
https://www.chemistrynotesinfo.com/2015/07/types-of-forces-in-chemistry.html
## Different Types of Forces

### Intermolecular Forces:
These are forces of attraction and/or repulsion between interacting particles, i.e. atoms or molecules. The Dutch scientist J. van der Waals (1837-1923) explained the deviation of real gases from ideal behaviour in terms of intermolecular forces, so intermolecular forces are also called van der Waals forces. Example: hydrogen bonding, which is a strong dipole-dipole interaction.

### Dispersion Forces:
If an atom acquires an instantaneous dipole (i.e. the atom momentarily has more electron density on its right or left side), then a nearby atom becomes an induced dipole, and these two temporary dipoles attract each other. This attractive force is known as the dispersion force.
·       As these forces were first proposed by F. London, they are also known as London forces.

### Dipole-Dipole Forces:
This type of force acts between molecules which have permanent dipoles. The dipoles of these molecules carry partial charges (denoted by delta, that is, delta positive or delta negative). Example: the HCl molecule, where H carries a partial positive charge and Cl a partial negative charge.

### Dipole-Induced Dipole Forces:
These attractive forces act between polar and non-polar molecules, where the polar molecule has a permanent dipole, which induces a dipole in the non-polar molecule by deforming its electron cloud.
·       As polarisability increases, the strength of the attractive interaction also increases.

#### Hydrogen Bond:
It is a type of dipole-dipole interaction present in molecules with highly polar N-H, O-H and H-F bonds.

#### Thermal Energy:
It is the energy of a body arising from the motion of its atoms and molecules.
·       Thermal energy is directly proportional to the temperature of the substance.

### Intermolecular forces vs thermal interactions
·       Intermolecular forces keep the molecules of a substance together.
·       Thermal energy of the substance keeps the molecules apart.
·       Together, these two (thermal energy and intermolecular forces) decide the state of matter.
·       If intermolecular forces predominate: Gas -> Liquid -> Solid
·       If thermal energy predominates: Solid -> Liquid -> Gas
https://mathhelpboards.com/threads/itos-lemma.1077/
Ito's Lemma

Jason, New member, Feb 5, 2012
I've been at this for ages but I can't make sense of it. Can somebody help me out? Use Ito's Lemma to solve the stochastic differential equation: $$X_t=2+\int_{0}^{t}(15-9X_s)ds+7\int_{0}^{t}dB_s$$ and find: $$E(X_t)$$

chisigma, Well-known member, Feb 13, 2012
Quote: "I've been at this for ages but I can't make sense of it. Can somebody help me out? Use Ito's Lemma to solve the stochastic differential equation: $$X_t=2+\int_{0}^{t}(15-9X_s)ds+7\int_{0}^{t}dB_s$$ and find: $$E(X_t)$$"
In 'standard form' the SDE is written as... $\displaystyle d X_{t}= (15-9\ X_{t})\ dt + 7\ dW_{t}\ ,\ x_{0}=2$ (1) Equation (1) is a linear (in the narrow sense) SDE and its solving procedure has been described in... http://www.mathhelpboards.com/f23/unsolved-statistic-questions-other-sites-part-ii-1566/index2.html#post8411 ... and its solution is... $\displaystyle X_{t}= \varphi_{t}\ \{ x_{0} +\int_{0}^{t} \varphi_{s}^{-1}\ u_{s}\ ds + \int_{0}^{t} \varphi_{s}^{-1}\ v_{s}\ dW_{s} \}$ (2) Here $a_{t}=-9$, so that $\varphi_{t}=e^{-9 t}$, $u_{t}=15$, $v_{t}=7$ and $x_{0}=2$, so that (2) becomes... $\displaystyle X_{t}= e^{- 9 t}\ \{ 2 + 15\ \int_{0}^{t} e^{9 s} ds + 7\ \int_{0}^{t} e^{9 s}\ dW_{s} \}= e^{-9 t}\ \{2 + \frac{5}{3}\ (e^{9 t}-1) + \frac{7}{9}\ (e^{9 W_{t}}-1) - \frac{7}{2}\ (e^{9t}-1) \} =$ $\displaystyle = - \frac{11}{6} + \frac {55}{18}\ e^{-9 t} + \frac{7}{9}\ e^{9\ (W_{t}-t)}$ (3) Kind regards $\chi$ $\sigma$
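Independently of the pathwise solution, the expectation asked for in the original post can be obtained directly (a note added here for completeness): the Itô integral has zero mean, so taking expectations of $$X_t=2+\int_{0}^{t}(15-9X_s)ds+7\int_{0}^{t}dB_s$$ and writing $m(t)=E(X_t)$ gives $m(t)=2+\int_0^t \big(15-9\,m(s)\big)\,ds$, i.e. $m'(t)=15-9\,m(t)$ with $m(0)=2$. Hence $$E(X_t)=\frac{5}{3}+\left(2-\frac{5}{3}\right)e^{-9t}=\frac{5}{3}+\frac{1}{3}\,e^{-9t}.$$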
http://perimeterinstitute.ca/fr/seminar/computational-spectroscopy-quantum-field-theories
# Computational Spectroscopy of Quantum Field Theories

Quantum field theories play an important role in many condensed matter systems for their description at low energies and long length scales. In 1+1 dimensional critical systems the energy spectrum and the spectrum of scaling dimensions are intimately related in the presence of conformal symmetry. In higher space-time dimensions this relation is more subtle and not well explored numerically. In this talk we motivate and review our recent effort to characterize 2+1 dimensional quantum field theories using computational techniques targeting the energy spectrum on a spatial torus. We discuss several examples ranging from the O(N) Wilson-Fisher theories and Gross-Neveu-Yukawa theories to deconfinement-confinement transitions in the context of topologically ordered systems. We advocate a phenomenological picture that provides insight into the operator content of the critical field theories.

Collection/Series: Event Type: Seminar Scientific Area(s): Speaker(s): Event Date: Wednesday, March 14, 2018 - 14:00 to 15:30 Location: Space Room Room #: 400
http://chasethedevil.github.io/post/put_call_parity_with_log_transformed_pde/
# Put-Call parity and the log-transformed Black-Scholes PDE We will assume zero interest rates and no dividends on the asset $$S$$ for clarity. The results can be easily generalized to the case with non-zero interest rates and dividends. Under those assumptions, the Black-Scholes PDE is: $$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} = 0.$$ An implicit Euler discretisation on a uniform grid in $$S$$ of width $$h$$ with linear boundary conditions (zero Gamma) leads to: $$V^{k+1}_i - V^{k}_i = \frac{1}{2}\sigma^2 \Delta t S_i^2 \frac{V^{k}_{i+1}-2V^k_{i}+V^{k}_{i-1}}{h^2}.$$ for $$i=1,…,m-1$$ with boundaries $$V^{k+1}_i - V^{k}_i = 0.$$ for $$i=0,m$$. This forms a linear system $$M \cdot V^k = V^{k+1}$$ with $$M$$ is a tridiagonal matrix where each of its rows sums to 1 exactly. Furthermore, the payoff corresponding to the forward price $$V_i = S_i$$ is exactly preserved as well by such a system as the discretized second derivative will be exactly zero. The scheme can be seen as preserving the zero-th and first moments. As a consequence, by linearity, the put-call parity relationship will hold exactly (note that in between nodes, any interpolation used should also be consistent with the put-call parity for the result to be more general). This result stays true for a non-uniform discretisation, and with other finite difference schemes as shown in this paper. It is common to consider the log-transformed problem in $$X = \ln(S)$$ as the diffusion is constant then, and a uniform grid much more adapted to the process. $$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 \frac{\partial^2 V}{\partial X^2}-\frac{1}{2}\sigma^2 \frac{\partial V}{\partial X} = 0.$$ An implicit Euler discretisation on a uniform grid in $$X$$ of width $$h$$ with linear boundary conditions (zero Gamma in $$S$$ ) leads to: $$V^{k+1}_i - V^{k}_i = \frac{1}{2}\sigma^2 \Delta t \frac{V^{k}_{i+1}-2V^k_{i}+V^{k}_{i-1}}{h^2}-\frac{1}{2}\sigma^2 \Delta t \frac{V^{k}_{i+1}-V^{k}_{i-1}}{2h}.$$ for $$i=1,…,m-1$$ with boundaries $$V^{k+1}_i - V^{k}_i = 0.$$ for $$i=0,m$$. Such a scheme will not preserve the forward price anymore. This is because now, the forward price is $$V_i = e^{X_i}$$. In particular, it is not linear in $$X$$. It is possible to preserve the forward by changing slightly the diffusion coefficient, very much as in the exponential fitting idea. The difference is that, here, we are not interested in handling a large drift (when compared to the diffusion) without oscillations, but merely to preserve the forward exactly. We want the adjusted volatility $$\bar{\sigma}$$ to solve $$\frac{1}{2}\bar{\sigma}^2 \frac{e^{h}-2+e^{-h}}{h^2}-\frac{1}{2}\sigma^2 \frac{e^{h}-e^{-h}}{2h}=0.$$ Note that the discretised drift does not change, only the discretised diffusion term. The solution is: $$\bar{\sigma}^2 = \frac{\sigma^2 h}{2} \coth\left(\frac{h}{2} \right) .$$ This needs to be applied only for $$i=1,…,m-1$$. This is actually the same adjustment as the exponential fitting technique with a drift of zero. For a non-zero drift, the two adjustments would differ, as the exact forward adjustment will stay the same, along with an adjusted discrete drift.
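A small numerical sketch of this (my own illustration, not the author's code; it uses a time-to-maturity stepping convention, frozen boundary values in place of the linear boundary, and a dense solver for brevity): it builds the implicit Euler step for the log-space PDE with the adjusted $$\bar{\sigma}^2$$, prices a call and a put on the same grid, and checks that $$C - P = S - K$$ holds to solver precision, i.e. the discrete forward is preserved and put-call parity follows by linearity.

```python
import numpy as np

sigma, T, K = 0.3, 1.0, 100.0
n_space, n_time = 201, 50

# Uniform grid in X = ln(S)
X = np.linspace(np.log(K) - 4.0, np.log(K) + 4.0, n_space)
h = X[1] - X[0]
S = np.exp(X)
dt = T / n_time

# Adjusted diffusion coefficient: sigma_bar^2 = sigma^2 * (h/2) * coth(h/2)
sigma_bar2 = sigma**2 * (h / 2.0) / np.tanh(h / 2.0)

# Spatial operator A on interior nodes; boundary rows stay zero (boundary values frozen)
A = np.zeros((n_space, n_space))
diff = sigma_bar2 / (2.0 * h * h)
drift = sigma**2 / (4.0 * h)
for i in range(1, n_space - 1):
    A[i, i - 1] = diff + drift
    A[i, i] = -2.0 * diff
    A[i, i + 1] = diff - drift

M = np.eye(n_space) - dt * A   # implicit Euler step: M V_new = V_old

def price(payoff):
    V = payoff.copy()
    for _ in range(n_time):
        V = np.linalg.solve(M, V)
    return V

call = price(np.maximum(S - K, 0.0))
put = price(np.maximum(K - S, 0.0))

# Zero rates: put-call parity requires C - P = S - K on every node
print("max |C - P - (S - K)| =", np.max(np.abs(call - put - (S - K))))
```

Swapping $$\bar{\sigma}^2$$ back to the plain $$\sigma^2$$ in the diffusion term should leave a small discretisation-sized residual instead of machine precision, which is exactly the forward-preservation point made above.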
http://ompf2.com/viewtopic.php?t=2108
Confused about near-specular reflection that is not energy conserving.
Practical and theoretical implementation discussion.

mgamito Posts: 3 Joined: Fri Aug 01, 2014 4:26 pm
Confused about near-specular reflection that is not energy conserving.
Hello, I've been puzzled by a lighting problem that is giving non-intuitive results. I'm sure the answer must be very simple because this is really basic stuff. Consider a ground plane with a near-specular brdf. The brdf is going to return extremely high values close to the mirror direction that drop off to zero quickly away from that direction. In the limit, a specular mirror would have a delta distribution with a single infinite value along the mirror direction and zero everywhere else. Consider also a point light above the ground with an intensity Li. The reflected radiance Lr on a point x, on the ground, is: Lr(wo) = brdf(wi,wo)*Li/r^2, where r is the distance from x to the point light and wi points towards the light. Now, if the incoming direction wi is nearly aligned with the mirror direction, the brdf term will be huge and we have Lr > Li, which is not energy-conserving. How can this be? The only thing I can think of is that point lights are not real physical lights, whereas a near-specular brdf is physically correct. The two solutions to the problem would then either be: 1- Give the point light a small radius and treat it as a spherical light. 2- If I insist on using the point light, I need to normalise the brdf to make sure it is always less than white.

friedlinguini Posts: 89 Joined: Thu Apr 11, 2013 5:15 pm
Re: Confused about near-specular reflection that is not energy conserving.
BRDFs can have spikes arbitrarily greater than 1 because you use them by integrating over some finite domain, and the integral should give you something less than one (a surface can't reflect more photons than it receives). mgamito wrote: "Consider also a point light above the ground with an intensity Li. The reflected radiance Lr on a point x, on the ground, is: Lr(wo) = brdf(wi,wo)*Li/r^2, where r is the distance from x to the point light and wi points towards the light." That formula can't be correct, because it doesn't stand up to dimensional analysis. Lr and Li have the same units, and a BRDF is dimensionless. That leaves units of length^-2 unbalanced. I assume that you have a 1/r^2 factor because you're calculating a direct lighting estimate by integrating over the surface of all lights. That means you need to have a surface area factor in there somewhere, which would make the units come out right. The surface area of a point light approaches zero, and multiplying that into your formula cancels out the infinity that's bothering you. If you were instead dealing with an area light, the brdf would spike at some infinitesimal portion of the light surface, but it would be balanced by near-zero values everywhere else.

mgamito Posts: 3 Joined: Fri Aug 01, 2014 4:26 pm
Re: Confused about near-specular reflection that is not energy conserving.
Hi, I see what you mean. Basically, point lights and other delta lights use units of irradiance, instead of the more conventional area lights that use radiance. As such, delta lights cannot be plugged into the direct lighting equation because they don't obey the correct units. As you suggest, the problem would be solved if I gave the point light a small differential area dA, for instance I could make it be an infinitesimal disk oriented along the wi direction so that the geometry factor would be dA/r^2, instead of just 1/r^2.
The infinitesimal dA would then attenuate the near infinity of the brdf near the mirror direction. This basically corresponds to the first solution I initially proposed. Point lights and spot lights are ubiquitous in production environments and they're not likely to go away. A rendering implementation that internally converts delta lights into micro area lights could be interesting to study and it would be energy conserving. My second solution about normalising brdfs in the range black <= brdf <= white for delta lights, on second thought, may not be so interesting because it would break convergence in the limit of an area light that gradually shrinks towards a delta - the moment the area became exactly zero, there would be a discontinuity in the lighting.

friedlinguini Posts: 89 Joined: Thu Apr 11, 2013 5:15 pm
Re: Confused about near-specular reflection that is not energy conserving.
There's no reason to give up on point lights; all I was trying to say is that having a BRDF that spikes above 1 is physically plausible. If a point light is bright enough to visibly illuminate a diffuse surface, then it's going to create a very bright highlight on a near-mirror surface. The intensity can become arbitrarily large, but it's OK because the size of the highlight simultaneously becomes arbitrarily small. The math is still doable, you just have to make sure that you properly cancel out spiky PDFs, differential areas, etc.

papaboo Posts: 46 Joined: Fri Jun 21, 2013 10:02 am
Re: Confused about near-specular reflection that is not energy conserving.
Regarding 2: you should never tweak your BRDF based on the light source type. That's a slippery slope. You wouldn't expect your material properties to change based on what environment they're in. Now, your BRDF is going to return values above one in some cases, which, as Friedlinguini points out, is okay, because energy conservation means that the integral over the hemisphere is 1 or less, not the individual samples. Ideally every sample would be weighted by 1 or less, but that only happens if you have perfect importance sampling, which you only have for Lambert and specular surfaces. So if you want to ensure that your BRDF is energy conserving, just integrate over it (using Monte Carlo) and check that its average weight is <= 1. As for your light source issue: yes, point lights are physically incorrect since the area is 0. But just think of them as an infinitely small area light and use as is. You can even derive a normalization term, such that the overall light emitted from a point/sphere light is radius invariant, even if the radius is 0.
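Following up on the Monte Carlo suggestion, here is a sketch of such a check (my own example using a normalized Phong-style specular lobe, not anyone's production BRDF): the peak BRDF value is far above 1 for a large exponent, while the cosine-weighted integral over the hemisphere (the directional-hemispherical reflectance) stays at or below the specular albedo.

```python
import numpy as np

rng = np.random.default_rng(0)
k_s, n_exp = 0.9, 100.0          # assumed toy albedo and Phong exponent

def reflectance(theta_o_deg, n_samples=2_000_000):
    """Monte Carlo estimate of the cosine-weighted hemispherical integral of the BRDF."""
    t = np.radians(theta_o_deg)
    wo = np.array([np.sin(t), 0.0, np.cos(t)])
    wr = np.array([-wo[0], -wo[1], wo[2]])          # mirror direction about the normal (0, 0, 1)

    # Uniform sampling of the upper hemisphere: pdf = 1 / (2*pi)
    z = rng.random(n_samples)                        # cos(theta_i)
    phi = 2.0 * np.pi * rng.random(n_samples)
    r = np.sqrt(1.0 - z * z)
    wi = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    # Normalized Phong-style specular lobe around the mirror direction
    cos_alpha = np.clip(wi @ wr, 0.0, None)
    f = k_s * (n_exp + 2.0) / (2.0 * np.pi) * cos_alpha ** n_exp

    return 2.0 * np.pi * np.mean(f * z)              # E[f * cos(theta_i) / pdf]

print("peak BRDF value:", k_s * (n_exp + 2.0) / (2.0 * np.pi))   # roughly 14.6, far above 1
for deg in (0, 30, 60):
    print(f"reflectance at {deg:2d} deg incidence ~ {reflectance(deg):.3f} (<= {k_s})")
```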
https://www.physicsforums.com/threads/electrodynamics-multipolar-development-doubt.778275/
# Electrodynamics, multipole expansion doubt

1. Oct 26, 2014 ### Frank Einstein
1. The problem statement, all variables and given/known data
In the XY plane there are two point charges, +q at (0, -b, 0) and –q at (0, b, 0), and a ring of radius a centred at the centre of the plane. Find the electric field at all points of the z axis and study the field's dominant behaviour at distances z >> a, b. Find the electric field at any point of space at large distances by using the first two terms of the multipole expansion and compare with the previous paragraph. The z axis pops out of the paper and goes up.
2. Relevant equations
All bold letters are vectors; x^, y^, z^ are unit vectors. p = sum(ri*qi), Φ1 = (1/(4*π*ε0))*(r*p/r^3)
3. The attempt at a solution
Well, I don't have much trouble with the first part: I find the field of a positive point charge on the z axis, the same with a negative one, and by superposing both I finish with –b/2*π*ε0*(z^2+b^2)^0.5 in the y direction. For the ring, I have (λ*a)/(2*ε0*(z^2+a^2)^(3/2)) in the z direction, where λ is the linear charge density. At large distances the total field is -b/(2*π*ε0*z)y^ + (λ*a)/(2*ε0*z^2)z^. Trouble comes when I arrive at the second part, because I have to calculate the three contributions: the monopole, dipole and quadrupole. The ring doesn't contribute to the multipole expansion. The first one is 0, since the total charge is 0; the second contribution, the dipole one, is pr/(4*π*ε0), with p = 2bq y^. And last, when I have to find the quadrupole moment as (1/(4*π*ε0))*Σ(Qij*(3xixj-(r^2)*δij)/r^5), I find that Q11=Qxx=0, Qzz=Q33=0 and Q22=Qyy=(1/2)[(-b)(-b)q+(b)(b)(-q)]=0. Meaning that the quadrupole moment is 0. I know that I cannot go to further terms because the teacher has said that we won't study those, so I find myself with just one term when the description of the problem tells me to use two terms.
Last edited: Oct 26, 2014

2. Oct 31, 2014 ### Greg Bernhardt
Thanks for the post! Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?

3. Nov 1, 2014
Really?
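For what it is worth, the moments of the two point charges are quick to verify symbolically (a sketch added here, not part of the thread, using the primitive quadrupole definition Q_ij = Σ_k q_k (3 x_i x_j - r^2 δ_ij)): the monopole and quadrupole contributions vanish, so for the pair of charges only the dipole term survives, and being left with a single non-vanishing multipole term is expected rather than a mistake.

```python
import sympy as sp

q, b = sp.symbols('q b', positive=True)

# +q at (0, -b, 0) and -q at (0, +b, 0)
charges = [(q, sp.Matrix([0, -b, 0])), (-q, sp.Matrix([0, b, 0]))]

# Monopole: total charge
Q_total = sum(qi for qi, _ in charges)

# Dipole moment p = sum q_i r_i
p = sum((qi * ri for qi, ri in charges), sp.zeros(3, 1))

# Quadrupole tensor Q_ij = sum q_i (3 x_i x_j - r^2 delta_ij)
Quad = sp.zeros(3, 3)
for qi, ri in charges:
    r2 = (ri.T * ri)[0]
    Quad += qi * (3 * ri * ri.T - r2 * sp.eye(3))

print("monopole:", Q_total)              # 0
print("dipole:", p.T)                    # (0, -2*b*q, 0): points from the -q charge toward the +q charge
print("quadrupole:", sp.simplify(Quad))  # the zero matrix
```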
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8361787796020508, "perplexity": 1330.9091077226967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814311.76/warc/CC-MAIN-20180223015726-20180223035726-00328.warc.gz"}
https://www.physicsforums.com/threads/projectile-motion-homerun-question.52358/
Projectile motion homerun question 1. Nov 11, 2004 IGeekbot All the problem gives me is: a batter hits a ball 98 meters for a homerun and he hit it at a 45 degree angle. What is the velocity of the ball off of the bat? Assume the fence is at the same height as the ball. How do I get the velocity of this thing? I tried getting the height, which would be 49 meters, and then figuring the time it would take to fall 49 meters, but I don't think that is right. Any ideas? 2. Nov 11, 2004 nolachrymose Try setting up a system of equations, one with respect to the y-component, and one to the x. A little hint: use the fact that sin(45) = cos(45). Hope that helps! :) 3. Nov 12, 2004 BobG You're pretty close to being right. Except you also need to take into account the time it took to get up to 49 meters. In other words, multiply your time by two.
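For completeness, a quick check with the standard level-ground range formula (no air resistance, launch and landing at the same height, which is what the problem assumes):

$$R=\frac{v_0^{2}\sin(2\theta)}{g}\;\Longrightarrow\; v_0=\sqrt{\frac{Rg}{\sin(2\theta)}}=\sqrt{98\times 9.8}\approx 31\ \text{m/s},$$

since $\sin(2\times 45^{\circ})=1$.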
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837738037109375, "perplexity": 491.0408643909583}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608084.63/warc/CC-MAIN-20170525140724-20170525160724-00435.warc.gz"}
https://thetwomeatmeal.wordpress.com/2010/12/18/symmetric-groups/
# Gracious Living Symmetric Groups December 18, 2010, 11:36 Filed under: Algebra, Math, Uncategorized We've seen symmetric groups before. The symmetric group on an arbitrary set, $S_X$ or ${\rm Sym}(X)$, is the group of bijections from the set to itself. As usual, we're only interested in the finite case $S_n$, which we call the symmetric group on $n$ symbols. These are pretty important finite groups, and so I hope you'll accept my apology for writing a post just about their internal structure. The language we use to talk about symmetric groups ends up popping up all the time. The notation we used to describe symmetric groups when I first introduced them was as follows: in $S_4$, for example, $[2314]$ meant to send 1 to 2, 2 to 3, 3 to 1, and 4 to 4. But there are at least two better ways of describing them. First is cycle notation. A cycle is a permutation sending symbols in a circle: $x_1$ to $x_2$, $x_2$ to $x_3$, …, $x_k$ to $x_1$, with no other symbols affected. We'll write cycles with parentheses: the above would be $(x_1x_2\dotsm x_k)$, $(12)$ is the generator of $S_2$, and so on. We can multiply cycles just as we do permutations: $(123)(45)$ is an element of $S_5$ (or of any $S_n$ with $n\ge 5$; the notation alone doesn't say) which rotates the first three symbols and flips the fourth and fifth. Its order is $6$, which is the lowest common multiple (lcm) of the lengths of the cycles forming it. (Prove this.) Clearly, disjoint cycles, those that don't share any symbols, commute: $(123)(45)=(45)(123)$. Also, though it's customary to write a cycle beginning with the first symbol possible, the symbols in a cycle can be rotated without changing its meaning: $(123)=(231)=(312)$. But, on the other hand, $(132)=(213)=(321)=(123)^{-1}$. So all we can do is re-cycle an individual cycle, not permute it without rhyme or reason. • Theorem.  Every permutation can be written as a product of disjoint cycles, and this is unique up to ordering of the cycles and re-cycling of each individual one.  (The identity is the product of zero cycles.) Proof.  Let $p$ be our permutation, and let $P\subset S_n$ be the cyclic group generated by it.  Then $P$ also acts on the set of symbols $\{1,\dotsc,n\}$.  For each $1\le k\le n$, find the image of $k$ under successive applications of $p$, and write them out in order: clearly, this is one cycle (it's of finite length because $P$ is finite).  But also, this is some ordering of the orbit of $k$ under $P$.  Since distinct orbits are disjoint, we can choose the cycle corresponding to one element of each orbit, multiply them together, and we'll have an expression for $p$ as a product of disjoint cycles.  If we choose a different element from an orbit, this necessarily corresponds to a re-cycling of the corresponding cycle. $\square$ Now let's talk about transpositions.  A transposition is a cycle of length $2$.  Clearly, we can write permutations as products of transpositions: $(123)=[231]=(13)(12)$, where, as usual, a product of cycles is applied right to left, like a composition of functions.  In fact, we can even use transpositions of adjacent symbols — $(13)=(23)(12)(23)$.  It follows that the $n-1$ transpositions $(12),(23),\dotsc,((n-1)(n))$ are generators for the group.  What are the relations?  Letting $\tau_i=((i)(i+1))$, it's clear that disjoint $\tau_i$ commute, that each $\tau_i$ is order $2$, and that conjugating $\tau_i$ by $\tau_{i+1}$ is the same as conjugating $\tau_{i+1}$ by $\tau_i$: both give you $((i)(i+2))$.
So we have at least the three relations: • $\tau_i^2=e$ • $\tau_i\tau_j=\tau_j\tau_i$ for $|i-j|>1$ • $\tau_{i+1}\tau_i\tau_{i+1}=\tau_i\tau_{i+1}\tau_i$ for $|i-j|=1$ Clearly $S_n$ satisfies these relations.  But does the above presentation generate $S_n$?  In fact, they do.  Here’s a somewhat lengthy proof.  Let $T_n$, say, be the group generated by this presentation, that is, the group $\langle t_1,\dotsc,t_{n-1}|t_i^2,t_it_jt_i^{-1}t_j^{-1},t_{i+1}t_it_{i+1}t_i^{-1}t_{i+1}^{-1}t_i^{-1}\rangle$ where $|i-j|$ is always at least $2$.  Since $S_n$ satisfies the same relations as $T_n$, there’s a surjective homomorphism $T_n\rightarrow S_n$ with $t_i\mapsto \tau_i$ for each $i$.  To prove it’s an isomorphism, it remains to be shown that it’s injective, or equivalently, that $|T_n|=|S_n|=n!$.  (Of course, as we discussed, it isn’t even clear from an arbitrary generators-and-relations presentation if the given group is finite.) Since all the generators are order 2, we don’t need the inverse symbols at all: an arbitrary element of $T_n$ can be written as a word in $t_1,\dotsc,t_{n-1}$ without using their inverses.  By induction, let’s prove that $t_{n-1}$ only has to occur once in this word.  Clearly this holds for $S_1=\{e\},S_2=\{e,t_1\}$.  Assume it holds for $T_{n-1}$, and let $w$ be a word in $T_n$ with two occurrences of $t_{n-1}$.  Look at the word between them.  If it doesn’t contain $t_{n-2}$, then they commute with it, so we can scoot one of them through to the other side and cancel it out.  If it does, then by the induction hypothesis, it only occurs once, so we can scoot both copies of $t_{n-1}$ to either side of it, and change the resulting $t_{n-1}t_{n-2}t_{n-1}$ to $t_{n-2}t_{n-1}t_{n-2}$.  Clearly, we can repeat these operations until we only have one $t_{n-1}$. The final step is to look at the following sets: $\displaystyle U_1 = \{e,t_1\}$ $\displaystyle U_2 = \{e,t_2,t_2t_1\}$ $\displaystyle \dotsc$ $\displaystyle U_{n-1}=\{e,t_{n-1},t_{n-1}t_{n-2}, t_{n-1}t_{n-2}\dotsm t_1\}$ By induction, we’re going to prove that $T_n\subset U_1U_2\dotsm U_{n-1}$ (any element of $T_n$ can be written as a product of one element from each set, in that order).  Again, this is true for $T_1,T_2$.  Suppose it holds for $T_{n-1}$.  Given $g\in T_n$, we can use the above to write it with a single $t_{n-1}$, $g=w_1t_{n-1}w_2$, where $w_1,w_2$ are words not containing $t_{n-1}$.  By the induction hypothesis, we can write $w_2$ as a product $u_1u_2\dotsm u_{n-2},u_i\in U_i$ for all $i$.  Obviously $t_{n-1}$ commutes with all $u_i$ but the last, so we can write $g=w_1u_1u_2\dotsm t_{n-1}u_{n-2}$.  But $t_{n-1}u_{n-2}\in U_{n-1}$.  Meanwhile, $w_1u_1\dotsm u_{n-3}$ is in $T_{n-1}$, so by the induction hypothesis, we can write it as a product $v_1\dotsm v_{n-2},v_i\in U_i$.  Multiplying the two halves together again, we get what we asked for. Why were we doing this?  Because $|U_i|=i+1$, and the above gives a surjective function (not a homomorphism, the $U_i$ aren’t groups) from $U_1\times\dotsb\times U_{n-1}$ to $T_n$.  Since the cardinality of the first set is $n!$, $|T_n|\le n!$.  But we also have a surjective homomorphism $T_n\rightarrow S_n$, so $|T_n|\ge n!$.  Taken together, these facts give $|T_n|=n!,T_n=S_n$.  $\square$ Unfortunately, there’s no unique or standard way to write a permutation as a product of adjacent transpositions.  But the generators-and-relations presentation suggests something.  
See, applying the first relation to a word changes its number of $\tau_i$ by two, and applying either of the other relations keeps the number the same.  So if two words represent the same group element, they must differ by an even number of generators!  Thus, whether a permutation takes an even or odd number of generators is independent of how we write that permutation — we call this the sign of the permutation, ${\rm sgn}(p)$.  Note that all transpositions are odd (prove this yourself), so we can find the sign by writing a permutation in terms of arbitrary transpositions rather than adjacent ones. Oh, and it should also be pretty easy to see that $\rm sgn$ is a group homomorphism to $\mathbb{Z}/2$, where $0$ is "even" and $1$ is "odd."  Its kernel is therefore a normal subgroup (actually, any subgroup of index 2 is normal — see why?).  We call this the alternating group $A_n$.  These guys show up all the time — $A_4$ is the orientation-preserving symmetry group of a tetrahedron, since you can rotate three vertices and leave one fixed, or flip two pairs of two vertices, but you can't flip just two vertices without employing a reflection.  We also talked about $A_5$ as the orientation-preserving symmetry group of a dodecahedron or icosahedron. A final word about conjugacy.  Given a cycle $q=(a_1a_2\dotsm a_k)$, the result of conjugating by a permutation $p$ is $pqp^{-1}=(p(a_1)p(a_2)\dotsm p(a_k))$, as you should check.  Conversely, any two cycles of the same length are conjugate: if they are $(a_1a_2\dotsm a_k), (b_1b_2\dotsm b_k)$, then we just need a permutation that sends $a_i$ to $b_i$ for each $i$, which is easy enough.  This reinforces the heuristic that conjugacy is just sort of "change of basis".  But now look: by induction, any two permutations are conjugate in $S_n$ iff when written in disjoint cycle form, they have cycles of the same length.  If we write out permutations with "1-cycles" of the letters they don't fix (so $e=(1)(2)\dotsm (n)$), then the number of conjugacy classes of $S_n$ is equal to the number of partitions of $n$: for example, for $4$, we have $1+1+1+1$ (the identity), $1+1+2,2+2,1+3,4$.  We can use binomial coefficients to count the permutations of each cycle type and compute the class equation; for $S_4$: $\displaystyle 4!=24=1+6+3+8+6$ Hope you enjoyed!
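As a small illustration of the cycle-decomposition algorithm from the proof above (follow each orbit until it closes), here is a Python sketch; the permutation is given in the post's one-line bracket notation, and the function names are mine.

```python
def disjoint_cycles(perm):
    """Decompose a permutation of {1,...,n} into disjoint cycles.
    `perm` is in one-line (bracket) notation: perm[i-1] is the image of i.
    Follows each orbit, exactly as in the proof of the theorem above."""
    n = len(perm)
    seen = set()
    cycles = []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle = [start]
        seen.add(start)
        nxt = perm[start - 1]
        while nxt != start:
            cycle.append(nxt)
            seen.add(nxt)
            nxt = perm[nxt - 1]
        cycles.append(tuple(cycle))
    return cycles

def sign(perm):
    """sgn(p) = (-1)^(n - number of cycles), counting fixed points as 1-cycles."""
    return (-1) ** (len(perm) - len(disjoint_cycles(perm)))

print(disjoint_cycles([2, 3, 1, 4]))  # [2314] -> [(1, 2, 3), (4,)]
print(sign([2, 3, 1, 4]))             # a 3-cycle is even -> 1
```

The sign comes out as $(-1)^{n-c}$ with $c$ the number of cycles (fixed points counted as 1-cycles), since a $k$-cycle is a product of $k-1$ transpositions.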
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 137, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9588813185691833, "perplexity": 320.0445888127503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573309.22/warc/CC-MAIN-20190918151927-20190918173927-00401.warc.gz"}
http://eprints.ma.man.ac.uk/1333/
## 2009.71: Convolutions of Cantor measures without resonance Fedor Nazarov, Yuval Peres and Pablo Shmerkin (2009) Convolutions of Cantor measures without resonance. Full text available as PDF (244 Kb). ## Abstract Denote by $\mu_a$ the distribution of the random sum $(1-a) \sum_{j=0}^\infty \omega_j a^j$, where $P(\omega_j=0)=P(\omega_j=1)=1/2$ and all the choices are independent. For $0<a<1/2$, the measure $\mu_a$ is supported on $C_a$, the central Cantor set obtained by starting with the closed unit interval, removing an open central interval of length $(1-2a)$, and iterating this process inductively on each of the remaining intervals. We investigate the convolutions $\mu_a * (\mu_b \circ S_\lambda^{-1})$, where $S_\lambda(x)=\lambda x$ is a rescaling map. We prove that if the ratio $\log b/\log a$ is irrational and $\lambda\neq 0$, then $D(\mu_a *(\mu_b\circ S_\lambda^{-1})) = \min(\dim_H(C_a)+\dim_H(C_b),1),$ where $D$ denotes any of correlation, Hausdorff or packing dimension of a measure. We also show that, perhaps surprisingly, for uncountably many values of $\lambda$ the convolution $\mu_{1/4} *(\mu_{1/3}\circ S_\lambda^{-1})$ is a singular measure, although $\dim_H(C_{1/4})+\dim_H(C_{1/3})>1$ and $\log (1/3) /\log (1/4)$ is irrational. Item Type: MIMS Preprint. Keywords: CICADA, correlation dimension, convolutions, self-similar measures, resonance. MSC 2000: 28 Measure and integration; 42 Fourier analysis. Deposited by Pablo Shmerkin, 14 October 2009.
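For context (a standard fact about central Cantor sets, not stated in the record itself), the Hausdorff dimension of $C_a$ is

$$\dim_H(C_a)=\frac{\log 2}{\log(1/a)},\qquad\text{so}\qquad \dim_H(C_{1/4})+\dim_H(C_{1/3})=\frac{\log 2}{\log 4}+\frac{\log 2}{\log 3}\approx 0.50+0.63=1.13>1,$$

which is the inequality referred to in the last sentence of the abstract.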
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9713099002838135, "perplexity": 925.4314953844889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00172-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/geometry/59383-two-sides-square-divided-circle-print.html
# Two Sides of a Square Divided by a Circle • November 13th 2008, 09:05 AM gfreeman75 Two Sides of a Square Divided by a Circle Here's a problem I'm having that involves a square and a circle. The square ABCD is partially within the circle so that the side AB of the square is also a chord of the circle and the side DC is a tangent of the circle. Here's an illustration of the problem in case my description doesn't convey it clearly enough: The circumference of the circle divides the sides AD and BC by a certain ratio which looking at the image might be 1 : 3. How do I determine for certain what this ratio is? • November 13th 2008, 09:56 AM TwistedOne151 Let point O be the center of the circle, M the midpoint of line segment AB, and N the midpoint of CD (the point of tangency). Draw the line segment MN; O will lie on this line. Let us define s as the length of the sides of the square, and r as the radius of the circle. Then the length of MN is s, and the lengths of AO, BO, and NO are all r. Now, define x to be the length of MO. Thus as MO+NO=MN, x=s-r. However, looking at the right triangle AMO, we see $r^2=\left(\frac{s}{2}\right)^2+x^2$ Substituting in x=s-r: $r^2=\left(\frac{s}{2}\right)^2+(s-r)^2$ $r^2=\left(\frac{s}{2}\right)^2+s^2-2rs+r^2$ $0=\frac{5}{4}s^2-2rs$ $r=\frac{5}{8}s$ And so $x=s-r=\frac{3}{8}s$ Now, note that the length of the portion of AD (or BC) inside the circle is 2x, and thus the portion outside is s-2x. The result above then gives them to be $2x=\frac{3}{4}s$, $s-2x=\frac{1}{4}s$, for the 3:1 ratio you observed in your drawing. --Kevin C.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668519496917725, "perplexity": 367.6086281114248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913406.61/warc/CC-MAIN-20151001221833-00178-ip-10-137-6-227.ec2.internal.warc.gz"}
http://www.emathematics.net/fracciones.php?frac=3
Fractions: Reducing to the Least Common Denominator. To reduce fractions to their Least Common Denominator, you must factor each of the denominators into primes. Then for each different prime number in all of the factorizations, do the following... 1. Count the number of times each prime number appears in each of the factorizations. 2. For each prime number, take the largest of these counts and write down the prime raised to that power. 3. The least common denominator is the product of all these prime powers. Reduce to the Least Common Denominator $\frac{3}{4}\;and\;\frac{5}{6}$ Note that l.c.d.(4,6)=12. Then for each fraction, divide the least common denominator by the fraction's denominator and multiply the numerator and denominator of the fraction by the result. Look at the first fraction: $\frac{12}{4}=3\;\;\;\frac{3\cdot3}{4\cdot3}=\frac{9}{12}$ and the second one: $\frac{12}{6}=2\;\;\;\frac{5\cdot2}{6\cdot2}=\frac{10}{12}$ The result is: $\frac{9}{12}\;and\;\frac{10}{12}$ Reduce $\frac{3}{4}$ and $\frac{6}{2}$ to the Least Common Denominator. Solution: l.c.d.(4,2)=4, so the fractions become $\frac{3}{4}$ and $\frac{12}{4}$.
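The same procedure in a short Python sketch (it uses the gcd-based lcm instead of explicit prime factorization; all names are mine):

```python
from math import gcd
from fractions import Fraction

def lcm(a, b):
    return a * b // gcd(a, b)

def to_common_denominator(f1, f2):
    """Rewrite two fractions over their least common denominator,
    returning the new numerators and the shared denominator."""
    d = lcm(f1.denominator, f2.denominator)
    return f1.numerator * (d // f1.denominator), f2.numerator * (d // f2.denominator), d

n1, n2, d = to_common_denominator(Fraction(3, 4), Fraction(5, 6))
print(f"{n1}/{d} and {n2}/{d}")   # 9/12 and 10/12, matching the worked example
```

Using `fractions.Fraction` keeps the inputs exact; note that it auto-reduces its arguments, which is why the worked example with 3/4 and 5/6 is used here.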
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 6, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708385467529297, "perplexity": 1609.4020302493143}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00465.warc.gz"}
http://math.stackexchange.com/questions/55402/test-for-intersection-of-two-n-dimensional-ellipsoids
# Test for intersection of two N-dimensional ellipsoids Let's say I have two $N$-dimensional ellipsoids: $$\sum_{i=1}^{N} \frac{(x_i - b_i)^2}{c_i^2} = 1$$ $$\sum_{i=1}^{N} \frac{(x_i - b'_i)^2}{c_i'^2} = 1$$ How can I tell if the two intersect? Is there a computationally easy way to do this test? - So far I've been able to reduce the general problem down to figuring out when an ellipsoid intersects the unit sphere, but I don't know how to finish from there. –  anon Aug 3 '11 at 19:59 I asked myself whether it is the case that if they intersect, then the line segment between their two centers lies entirely within the union of their interiors and boundaries. And the answer to that is no: in some cases it passes through regions in the intersection of their exteriors. –  Michael Hardy Aug 3 '11 at 23:30 I think you can reduce this to a convex optimization problem. The problem is a feasibility problem in which you ask if there is a point $x$ such that $(x-x_a)^T A (x-x_a) \le 1$ and $(x-x_b)^T B (x-x_b) \le 1$, where $A$ and $B$ are the quadratic forms related to the ellipsoids, and $x_a$ and $x_b$ are their centers. These types of problems can be solved efficiently, but typically require a large programming library for implementing them. - There are two cases: (a) One ellipsoid is fully contained in the other. (b) I think if (a) is not true, then there are always border points from both ellipses in the intersection set. So you can search for strict equality: $$\sum_{i=1}^{N} \frac{(x_i - b_i)^2}{c_i^2} = 1$$ $$\sum_{i=1}^{N} x_i^2 = 1$$ (here I simplified by linear transform, so that the second ellipse is a unit n-sphere). Then you can use $$x_1 = \sqrt{1 - \sum_{i=2}^{N} x_i^2}$$ from the second equation in the first yielding $$\sum_{i=2}^{N} \frac{(x_i - b_i)^2}{c_i^2} + \frac{(\sqrt{1 - \sum_{i=2}^{N} x_i^2} - b_1)^2}{c_1^2} = 1$$ This equation has n-1 variables, but is not a polynomial. Expanding the second term, and multiplying the whole thing by $c_1^2$ $$c_1^2 \cdot \sum_{i=2}^{N} \frac{(x_i - b_i)^2}{c_i^2} + 1 - \sum_{i=2}^{N} x_i^2 - 2\cdot \sqrt{1 - \sum_{i=2}^{N} x_i^2}\cdot b_1 + b_1^2 = c_1^2$$ $$\left( \frac{c_1^2 \cdot \sum_{i=2}^{N} \frac{(x_i - b_i)^2}{c_i^2} + 1 - \sum_{i=2}^{N} x_i^2 + b_1^2 - c_1^2}{2\cdot b_1}\right)^2 = 1 - \sum_{i=2}^{N} x_i^2$$ The last equation is equivalent to finding a root of a n-1 dimensional polynomial. If (a) is the case, then there are no solutions. -
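To make the first answer's feasibility formulation concrete, here is a rough numerical sketch for the axis-aligned ellipsoids of the question. It tests whether the two solid ellipsoids share a point by minimizing one quadratic form over the other ellipsoid; SciPy does the work, and all helper names are mine.

```python
import numpy as np
from scipy.optimize import minimize

def ellipsoid_q(x, center, semi_axes):
    """Quadratic form sum_i ((x_i - b_i) / c_i)^2; <= 1 inside the ellipsoid."""
    return np.sum(((x - center) / semi_axes) ** 2)

def solids_intersect(b1, c1, b2, c2):
    """True if the two solid ellipsoids have a common point: minimize the
    second quadratic form over the first ellipsoid and compare with 1."""
    b1, c1, b2, c2 = map(np.asarray, (b1, c1, b2, c2))
    res = minimize(
        lambda x: ellipsoid_q(x, b2, c2),
        x0=b1.astype(float),                      # start at the first centre
        constraints=[{"type": "ineq",             # 1 - q1(x) >= 0
                      "fun": lambda x: 1.0 - ellipsoid_q(x, b1, c1)}],
        method="SLSQP",
    )
    return res.fun <= 1.0 + 1e-9

print(solids_intersect([0, 0], [1, 1], [1.5, 0], [1, 2]))   # True: they overlap
print(solids_intersect([0, 0], [1, 1], [5, 0], [1, 1]))     # False: far apart
```

This checks overlap of the solid regions; the second answer's case (a), where one ellipsoid is fully contained in the other, would still need a separate containment test if you specifically want the boundary surfaces to intersect.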
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155046105384827, "perplexity": 157.7058793090904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00273-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/179201-derivative-question-ln.html
1. ## derivative question (ln) Hello, I have the equation below which I want to find the derivative of: y=ln(x^2+1) I understand that I have to proceed to: 1/x^2+1 x (2x) (using chain rule) I understand that, but am not clear on which two parts will comprise the chain rule here and how the 'ln' is utilised. Can someone please show me the distinct steps to get to this stage. Thanks kindly for any help. 2. It's actually 2x[1/(x^2 + 1)], you MUST use your brackets correctly, and don't use x to mean both the letter x and times. Anyway, your function is y = ln(x^2 + 1). If you let u = x^2 + 1 then y = ln(u). What is du/dx? What is dy/du? Once you have those, what is dy/dx? 3. y is a composite function: y = fog(x). f(x) = ln(x) g(x) = x^2+1 so the deriviative is f'(g(x))(g'(x)). f'(x) = 1/x, and g'(x) = 2x. so f'(g(x)) = 1/(g(x)). any clearer now?
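Spelling out the chain-rule substitution suggested in the second reply:

$$u=x^2+1,\quad \frac{du}{dx}=2x,\qquad y=\ln u,\quad \frac{dy}{du}=\frac{1}{u},$$

$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}=\frac{1}{x^2+1}\cdot 2x=\frac{2x}{x^2+1}.$$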
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569231629371643, "perplexity": 1882.6552002801789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867055.95/warc/CC-MAIN-20180624195735-20180624215735-00505.warc.gz"}
http://www.ck12.org/book/CK-12-Algebra-I-Concepts-Honors/r11/section/7.12/
# 7.12: Division of a Polynomial by a Monomial Difficulty Level: Advanced Created by: CK-12 Can you divide the polynomial by the monomial? How does this relate to factoring? \begin{align*}(4e^4+6e^3-10e^2) \div 2e\end{align*} ### Guidance Recall that a monomial is an algebraic expression that has only one term. So, for example, \begin{align*}x\end{align*}, 8, –2, or \begin{align*}3ac\end{align*} are all monomials because they have only one term. The term can be a number, a variable, or a combination of a number and a variable. A polynomial is an algebraic expression that has more than one term. When dividing polynomials by monomials, it is often easiest to separately divide each term in the polynomial by the monomial. When simplifying each mini-division problem, don't forget to use exponent rules for the variables. For example, \begin{align*}\frac{8x^5}{2x^3}=4x^2\end{align*}. Remember that a fraction is just a division problem! #### Example A What is \begin{align*}(14s^2-21s+42)\div(7)\end{align*}? Solution: This is the same as \begin{align*}\frac{14s^2-21s+42}{7}\end{align*}. Divide each term of the polynomial numerator by the monomial denominator and simplify. • \begin{align*}\frac{14s^2}{7}=2s^2\end{align*} • \begin{align*}\frac{-21s}{7}=-3s\end{align*} • \begin{align*}\frac{42}{7}=6\end{align*} Therefore, \begin{align*}(14s^2-21s+42)\div(7)=2s^2-3s+6\end{align*}. #### Example B What is \begin{align*}\frac{3w^3-18w^2-24w}{6w}\end{align*}? Solution: Divide each term of the polynomial numerator by the monomial denominator and simplify. Remember to use exponent rules when dividing the variables. • \begin{align*}\frac{3w^3}{6w}=\frac{w^2}{2}\end{align*} • \begin{align*}\frac{-18w^2}{6w}=-3w\end{align*} • \begin{align*}\frac{-24w}{6w}=-4\end{align*} Therefore, \begin{align*}\frac{3w^3-18w^2-24w}{6w}=\frac{w^2}{2}-3w-4\end{align*}. #### Example C What is \begin{align*}(-27a^4b^5+81a^3b^4-18a^2b^3)\div(-9a^2b)\end{align*}? Solution: This is the same as \begin{align*}\frac{-27a^4b^5+81a^3b^4-18a^2b^3}{-9a^2b}\end{align*}. Divide each term of the polynomial numerator by the monomial denominator and simplify. Remember to use exponent rules when dividing the variables. • \begin{align*}\frac{-27a^4b^5}{-9a^2b}=3a^2b^4\end{align*} • \begin{align*}\frac{81 a^3b^4}{-9a^2b}=-9ab^3\end{align*} • \begin{align*}\frac{-18a^2b^3}{-9a^2b}=2b^2\end{align*} Therefore, \begin{align*}(-27a^4b^5+81a^3b^4-18a^2b^3) \div (-9a^2b)=3a^2b^4-9ab^3+2b^2\end{align*}. #### Concept Problem Revisited Can you divide the polynomial by the monomial? How does this relate to factoring? \begin{align*}(4e^4+6e^3-10e^2) \div 2e\end{align*} This process is the same as factoring out a \begin{align*}2e\end{align*} from the expression \begin{align*}4e^4+6e^3-10e^2\end{align*}. • \begin{align*}\frac{4 e^4}{2e}=2e^3\end{align*} • \begin{align*}\frac{6e^3}{2e}=3e^2\end{align*} • \begin{align*}\frac{-10e^2}{2e}=-5e\end{align*} Therefore, \begin{align*}(4e^4+6e^3-10e^2) \div 2e=2e^3+3e^2-5e\end{align*}. ### Vocabulary Divisor A divisor is the expression in the denominator of a fraction. Monomial A monomial is an algebraic expression that has only one term. \begin{align*}x\end{align*}, 8, –2, or \begin{align*}3ac\end{align*} are all monomials because they have only one term.
Polynomial A polynomial is an algebraic expression that has more than one term. ### Guided Practice Complete the following division problems. 1. \begin{align*}(3a^5-5a^4+17a^3-9a^2)\div(a)\end{align*} 2. \begin{align*}(-40n^3-32n^7+88n^{11}+8n^2)\div(8n^2)\end{align*} 3. \begin{align*}\frac{16m^6-12m^4+4m^2}{4m^2}\end{align*} 1. \begin{align*}(3a^5-5a^4+17a^3-9a^2) \div (a)=3a^4-5a^3+17a^2-9a\end{align*} 2. \begin{align*}(-40n^3-32n^7+88n^{11}+8n^2)\div(8n^2)=-5n-4n^5+11n^9+1\end{align*} 3. \begin{align*}\frac{(16m^6-12m^4+4m^2)}{(4m^2)}=4m^4-3m^2+1\end{align*} ### Practice Complete the following division problems. 1. \begin{align*}(6a^3+30a^2+24a) \div 6\end{align*} 2. \begin{align*}(15b^3+20b^2+5b) \div 5\end{align*} 3. \begin{align*}(12c^4+18c^2+6c) \div 6c\end{align*} 4. \begin{align*}(60d^{12}+90d^{11}+30d^8) \div 30d\end{align*} 5. \begin{align*}(33e^7+99e^3+22e^2) \div 11e\end{align*} 6. \begin{align*}(-8a^4+8a^2) \div (-4a)\end{align*} 7. \begin{align*}(-3b^4+6b^3-30b^2+15b) \div (-3b)\end{align*} 8. \begin{align*}(-40c^{12}-20c^{11}-25c^9-30c^3) \div 5c^2\end{align*} 9. \begin{align*}(32d^{11}+16d^7+24d^4-64d^2) \div 8d^2\end{align*} 10. \begin{align*}(14e^{12}-18e^{11}-12e^{10}-18e^7) \div -2e^5\end{align*} 11. \begin{align*}(18a^{10}-9a^8+72a^7+9a^5+3a^2) \div 3a^2\end{align*} 12. \begin{align*}(-24b^9+42b^7+42b^6) \div -6b^3\end{align*} 13. \begin{align*}(24c^{12}-42c^7-18c^6) \div -2c^5\end{align*} 14. \begin{align*}(14d^{12}+21d^9+42d^7) \div -7d^4\end{align*} 15. \begin{align*}(-40e^{12}+30e^{10}-10e^4+30e^3+80e) \div -10e^2\end{align*} ### Vocabulary Language: English Denominator Denominator The denominator of a fraction (rational number) is the number on the bottom and indicates the total number of equal parts in the whole or the group. $\frac{5}{8}$ has denominator $8$. Dividend Dividend In a division problem, the dividend is the number or expression that is being divided. divisor divisor In a division problem, the divisor is the number or expression that is being divided into the dividend. For example: In the expression $152 \div 6$, 6 is the divisor and 152 is the dividend. Polynomial long division Polynomial long division Polynomial long division is the standard method of long division, applied to the division of polynomials. Rational Expression Rational Expression A rational expression is a fraction with polynomials in the numerator and the denominator. Rational Root Theorem Rational Root Theorem The rational root theorem states that for a polynomial, $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$, where $a_n, a_{n-1}, \cdots a_0$ are integers, the rational roots can be determined from the factors of $a_n$ and $a_0$. More specifically, if $p$ is a factor of $a_0$ and $q$ is a factor of $a_n$, then all the rational factors will have the form $\pm \frac{p}{q}$. Remainder Theorem Remainder Theorem The remainder theorem states that if $f(k) = r$, then $r$ is the remainder when dividing $f(x)$ by $(x - k)$. Synthetic Division Synthetic Division Synthetic division is a shorthand version of polynomial long division where only the coefficients of the polynomial are used. Show Hide Details Description Difficulty Level: Authors: Tags: Subjects: Search Keywords: Date Created: Apr 30, 2013
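If you want to double-check answers like these mechanically, a short SymPy sketch performs the same division (symbol names are mine):

```python
from sympy import symbols, div

e, w = symbols('e w')

# Concept problem: (4e^4 + 6e^3 - 10e^2) / (2e)
print(div(4*e**4 + 6*e**3 - 10*e**2, 2*e, e))   # (2*e**3 + 3*e**2 - 5*e, 0)

# Example B: (3w^3 - 18w^2 - 24w) / (6w)
print(div(3*w**3 - 18*w**2 - 24*w, 6*w, w))     # (w**2/2 - 3*w - 4, 0)
```

A zero remainder confirms that the monomial divides each term exactly, which is just the factoring point made in the Concept Problem Revisited.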
{"extraction_info": {"found_math": true, "script_math_tex": 51, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 32, "texerror": 0, "math_score": 0.9410946369171143, "perplexity": 3024.223867819295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.26/warc/CC-MAIN-20160524002128-00211-ip-10-185-217-139.ec2.internal.warc.gz"}
https://indico.cern.ch/event/284464/
TH String Theory Seminar # Infinite Chiral Symmetry in Four Dimensions ## by Balt Van Rees Tuesday, 1 April 2014 (Europe/Zurich) at CERN (4-3-006 - TH Conference Room) Description We describe a new correspondence between four-dimensional conformal field theories with extended supersymmetry and two-dimensional chiral algebras. The meromorphic correlators of the chiral algebra compute correlators in a protected sector of the four-dimensional theory. Infinite chiral symmetry has far-reaching consequences for the spectral data, correlation functions, and central charges of any four-dimensional theory with N=2 superconformal symmetry. Organised by James Drummond
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8379647135734558, "perplexity": 3086.182399631318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131304444.86/warc/CC-MAIN-20150323172144-00160-ip-10-168-14-71.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/37519/calculating-bond-dissociation-enthalpy-problem-confusion
# Calculating bond dissociation enthalpy problem confusion In our chemical bonding class we were given the following problem: I am confused by how the question says that you need to consider the heat of vaporization/sublimation for HBr and HI. Bond dissociation enthalpy is defined as "the energy needed to break one mole of the bond to give separated atoms - everything being in the gas state.", so if you can find the heat of formation of that molecule in the gas phase, why would you need anything else? My attempt: The answer obtained is not the answer provided by the professor so I assume I'm missing something here. Answer provided is 364 kJ/mol Thanks! Well, I think that you have to look up the actual way bond dissociation enthalpy is defined in your textbook/lecture notes, since it is a not quite standard term. IUPAC defines only bond-dissociation energy but not enthalpy. bond-dissociation energy, $D$ The enthalpy (per mole) required to break a given bond of some specific molecular entity by homolysis, e.g. for $\ce{CH4 -> .CH3 + H.}$, symbolized as $D(\ce{CH3−H})$ (cf. heterolytic bond dissociation energy). Many resources on the Internet claim that the IUPAC definition of the bond-dissociation energy refers to 0 K (see e.g. Wikipedia, UC Davis ChemWiki) and that it differs from bond dissociation enthalpy, which refers to the enthalpy change at room temperature (298K). The bond-dissociation energy is sometime also called the bond-dissociation enthalpy (or bond enthalpy), but these terms are not strictly equivalent, as they refer to the above reaction enthalpy at standard conditions, and differ from $D_0$ by about 1.5 kcal/mol (6 kJ/mol) in the case of a bond to hydrogen in a large organic molecule. Officially, the IUPAC definition of bond dissociation energy refers to the energy change that occurs at 0 K, and the symbol is $D_0$. However, it is commonly referred to as BDE, the bond dissociation energy, and it is generally used, albeit imprecisely, interchangeably with the bond dissociation enthalpy, which generally refers to the enthalpy change at room temperature (298K). Although there are technically differences between BDEs at 0 K and 298 K, those difference are not large and generally do not affect interpretations of chemical processes. If we follow this definitions, then yeah, at standard conditions $\ce{HBr}$ is liquid so we have to add up 12 kJ/mol for its enthalpy of vaporization, and (almost) get the desired 364 kJ/mol. The small discrepancy is probably due to slightly wrong value of enthalpy of vaporization which I just quickly found somewhere.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418459296226501, "perplexity": 741.571026961078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998943.53/warc/CC-MAIN-20190619083757-20190619105757-00023.warc.gz"}
https://www.physicsforums.com/threads/advice-on-non-linear-optimization-methods.535304/
# Advice on Non-Linear Optimization Methods #1 Hi all, Hopefully this is the right section for my post, if not I apologize. I'm hoping I can just get some advice to help me get started in the right direction. I am trying to do a mathematical inversion for the following: $$\frac{1}{N(z_i)} \frac{dN}{dz}\bigg|_{z=z_i} = -\frac{2}{z_i} - \frac{1}{T(z_i)} \frac{dT}{dz}\bigg|_{z=z_i} - \frac{C}{T(z_i)}$$ $$N(z_i)$$ are measurements made at a series of altitudes. There is the above relation between measurements and temperature $$T(z_i)$$, so temperature is what I am looking to find from the $$N(z_i)$$ measurements. C is just a system constant. What I am trying to do is guess an initial temperature vector, then minimize the $$\chi^{2}$$ between the measurements and the forward model above, so that once chi square is minimized as much as possible, we can determine the temperatures. I am very new to this, but have done some research into optimization and grid search methods. I was looking at the Marquardt Method listed in Numerical Recipes, but I figured I would post here to see if anyone else has any opinions on what would work the best. If you guys need any more info just let me know. Thanks!
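As a starting point, modern SciPy exposes Levenberg-Marquardt-style solvers directly, so a chi-square fit of a temperature vector against the measured left-hand side could be sketched roughly like this. The data arrays, the finite-difference derivatives, the constant C, and all names here are placeholders of mine, not values from the thread.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder measurement grid and data (replace with the real N(z_i) profile).
z = np.linspace(10e3, 80e3, 71)                    # altitudes [m]
N = np.exp(-z / 7e3) * 1e20                        # fake density-like measurements
C = 3.4e-2                                         # system constant, placeholder
sigma = 0.01 * np.abs(np.gradient(np.log(N), z))   # rough per-point uncertainty

lhs = np.gradient(np.log(N), z)                    # (1/N) dN/dz, the measured side

def residuals(T):
    """Weighted residuals between the measured LHS and the forward model
    -2/z - (1/T) dT/dz - C/T, for a trial temperature vector T(z_i)."""
    rhs = -2.0 / z - np.gradient(T, z) / T - C / T
    return (lhs - rhs) / sigma

T0 = np.full_like(z, 240.0)                        # initial temperature guess [K]
fit = least_squares(residuals, T0, method="trf")   # LM-like trust-region solver
T_retrieved = fit.x
```

The sum of squared residuals returned by the solver is the chi-square being minimized; swapping in a hand-rolled Marquardt routine from Numerical Recipes would follow the same residual-function structure.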
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790391683578491, "perplexity": 661.4045876783314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303512.46/warc/CC-MAIN-20220121162107-20220121192107-00706.warc.gz"}
https://math.stackexchange.com/questions/2978146/showing-that-the-diagonal-of-x-times-x-is-transversal-to-the-graph-of-f-1
# Showing that the diagonal of $X\times X$ is transversal to the graph of $f$. (1.5.10 Guillemin and Pollack) The question and its answer are given below: But I am wondering, is it also correct if I show that the graph of $f$ is transversal to the diagonal of $X\times X$? Also, I cannot understand the general idea he is using in his proof in the last paragraph; could anyone explain this for me please? EDIT: These are the exercises referred to: Thank you!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9276966452598572, "perplexity": 299.44156826322745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394074.44/warc/CC-MAIN-20200527110649-20200527140649-00560.warc.gz"}
http://tex.stackexchange.com/questions/22156/beamer-presentation-with-page-containing-latex-output
Beamer presentation with page containing Latex output Latex newbie here. Let's say I create a page with some Latex code, and I want to have the following beamer slide contain the output of that code. how can I do that? \begin{frame}[fragile]{How does \mbox{\LaTeX} work?} \begin{verbatim} \title{My R Poem} \author{Dude} \begin{document} R is my favoritest. \end{document} \end{verbatim} \end{frame} I assume there's a package for this, but I'm not aware of it and can't find it via google. - Do you want to show code and result side-by-side or just the output? –  Martin Scharrer Jul 2 '11 at 18:06 I'd have the code on one slide, which is done with the verbatim package (see code above), and another slide for the output. –  ATMathew Jul 2 '11 at 18:07 To include the output of a different, full document with own preamble you would have to include the output as PDF (or PS if you insist using non-PDF latex). I would recommend you to store the code in an external file and use the standalone class to produce a tightly cut PDF (the preview package is internally used to do this). Then simply include the PDF as graphic using \includegraphics[width=...]{filename}. You can also include the source using the listings package and \lstinputlisting. It also allows you to remove the first or last code lines if you want to remove the \documentclass[class=<realclass>]{standalone} part. The v1.0 of the standalone package allows to automatically compile sub-files as standalone documents and include them as images to the main document.
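A minimal sketch of the workflow described in the answer (the file name poem.tex is my own choice): compile the snippet as its own document with the standalone class, then include the cropped PDF on the next slide.

```latex
% poem.tex -- compile separately, e.g. pdflatex poem.tex
\documentclass[class=article]{standalone}
\begin{document}
R is my favoritest.
\end{document}
```

and then, in the beamer presentation, on the slide after the verbatim source:

```latex
\begin{frame}{What \LaTeX{} produces}
  \centering
  \includegraphics[width=0.9\textwidth]{poem.pdf}
\end{frame}
```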
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138501286506653, "perplexity": 1657.7901324502502}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678703964/warc/CC-MAIN-20140313024503-00078-ip-10-183-142-35.ec2.internal.warc.gz"}
https://hal-riip.archives-ouvertes.fr/INRIA/hal-02100500v3
The relative accuracy of $(x+y)*(x-y)$ ARIC - Arithmetic and Computing, Inria Grenoble - Rhône-Alpes, LIP - Laboratoire de l'Informatique du Parallélisme Abstract: We consider the relative accuracy of evaluating $(x+y)(x-y)$ in IEEE floating-point arithmetic, when $x,y$ are two floating-point numbers and rounding is to nearest. This expression can be used, for example, as an efficient cancellation-free alternative to $x^2-y^2$ and (at least in the absence of underflow and overflow) is well known to have low relative error, namely, at most about $3u$ with $u$ denoting the unit roundoff. In this paper we propose to complement this traditional analysis with a finer-grained one, aimed at improving and assessing the quality of that bound. Specifically, we show that if the tie-breaking rule is to away then the bound $3u$ is asymptotically optimal (as the precision tends to $\infty$). In contrast, if the tie-breaking rule is to even, we show that asymptotically optimal bounds are now $2.25u$ for base two and $2u$ for larger bases, such as base ten. In each case, asymptotic optimality is obtained by the explicit construction of a certificate, that is, some floating-point input $(x,y)$ parametrized by $u$ and such that the error of the associated result is equivalent to the error bound as $u$ tends to zero. We conclude with comments on how $(x+y)(x-y)$ compares with $x^2$ in the presence of floating-point arithmetic, in particular showing cases where the computed value of $(x+y)(x-y)$ exceeds that of $x^2$. Document type: Journal articles. Record: https://hal.inria.fr/hal-02100500 Contributor: Claude-Pierre Jeannerod. Submitted on: Monday, May 17, 2021. Citation: Claude-Pierre Jeannerod. The relative accuracy of $(x+y)*(x-y)$. Journal of Computational and Applied Mathematics, Elsevier, 2020, pp. 1-17. ⟨10.1016/j.cam.2019.112613⟩. ⟨hal-02100500v3⟩
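As a quick illustration of the cancellation issue mentioned in the abstract (my own toy example, not taken from the paper), on a typical IEEE-754 binary64 system:

```python
x, y = 100000001.0, 100000000.0   # both exactly representable in binary64
exact = 200000001                 # the true value of x^2 - y^2

print((x + y) * (x - y))   # 200000001.0 -> exact: x+y and x-y are computed exactly here
print(x * x - y * y)       # 200000000.0 -> x*x rounds (ties to even) and the +1 is lost
```

The factored form keeps the subtraction on the original, exactly representable inputs instead of on two large rounded squares, which is the sense in which the abstract calls it cancellation-free.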
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532926678657532, "perplexity": 643.5308605096055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00215.warc.gz"}
http://www.conservapedia.com/index.php?title=Essay:Rebuttal_to_Counterexamples_to_Relativity&diff=prev&oldid=957963
# Difference between revisions of "Essay:Rebuttal to Counterexamples to Relativity" This is intended as an article rebutting the points in the Counterexamples to Relativity article. That article's talk page has proven to be less than satisfactory for this purpose, because it gets archived, and much of its material has degenerated into personal disputes. We believe that the two sides of the issue are better handled in two articles—this one and Counterexamples to Relativity, rather than a talk page. Unlike most essay pages, anyone is welcome to contribute. We ask that you abide by the usual guidelines—do not remove non-vandal, non-parody, non-libelous material without discussing it first on the talk page, or explaining after-the-fact for serious problems. 1: Despite wasting millions of taxpayer dollars searching for gravity waves predicted by the theory, none has ever been found. Sound like global warming? This is because the experimental capability to do so doesn't exist. It has nothing to do with global warming. 2: The eccentricity of the Moon's orbit is increasing contrary to the theory of relativity This could be a counterexample to both GR and Newtonian gravity--in both, the eccentricity is defined in terms of conserved quantities. 3: Subatomic particles have a speed observed to be faster than the speed of light, which contradicts a fundamental assumption of Relativity.[4] The Italian lab that "shocked the scientific world" has announced more precise results, confirming their previous announcement. This is an interesting observation. The world's best scientific minds are looking into it. That relativity is incorrect is not being taken seriously as a possible explanation. 4: The Pioneer anomaly. The "Pioneer anomaly" is the deviation in the motion of the Pioneer 10 and Pioneer 11 spacecraft from their predicted motion, at the distance of Saturn and beyond. It should be noted that the anomaly is about 1000 times greater than the difference between the classical Newtonian prediction and the prediction of relativity, so this is not a problem with relativity per se; it is more general than that. Calculating the force caused by heat (that is, miniscule amounts of infrared radiation) from the radioactive power source was one of the first effects that was examined. The anomaly arose when this and other known effects could not fully explain the deviation. The problem is believed to have been solved by taking into account the reflection of the radiation from the power source off of the back of the antenna dish[1]. The solution is sometimes described as an application of "Phong shading", a technique of computer graphics that is now considered imprecise. But Phong shading itself is not what is important. The "ray tracing" computer graphics technique that underlies Phong shading was what inspired the scientists to take reflection into account. 5: Anomalies in the locations of spacecraft that have flown by Earth ("flybys"). This may be another case of the Pioneer anomaly, or it may be something else. However, it is very unlikely that it shows that relativity is wrong and Newtonian mechanics is correct. 6: Spiral galaxies confound Relativity, and unseen "dark matter" has been invented to try to retrofit observations to the theory. Correct me if I'm wrong, but wasn't it due to the acceleration of various parts of galaxies that accelerated funny that led to dark matter (based on simple Newtonian dynamics)? 
7: The acceleration in the expansion of the universe confounds Relativity, and unseen "dark energy" has been invented to try to retrofit observations to the theory. Uh-oh....the dark energy/cosmological constant argument....That term was added by Einstein after he discovered that his field equations ($G_{\mu\nu} = \frac{8\pi K}{c^4} T_{\mu\nu}$) predicted that the universe was expanding, contradicting his firm philosophical belief in a static universe. So he inserted a term $\Lambda g_{\mu\nu}$ into the LHS so that it would predict a static universe. A few years later, Hubble showed the universe to be expanding, and Einstein called the cosmological constant the worst mistake of his career. So, it sort of had a bad reputation, and people didn't want to seriously consider it, until recent observations showing the universe's expansion to be accelerating forced them to do so. It could have had a very different history. Einstein could have had that term in the EFE's from the start, and pointed out that it would determine if the universe's expansion was accelerating (or not expanding at all!) and it would take further observation to determine its value. 8: Increasingly precise measurements of the advance of the perihelion of Mercury show a shift greater than predicted by Relativity, well beyond the margin of error. A footnote goes on to say that "In a complicated or contrived series of calculations that most physics majors cannot duplicate even after learning them, the theory of general relativity's fundamental formula, $G_{\mu\nu} = \frac{8\pi K}{c^4} T_{\mu\nu}$, was conformed to match Mercury's then-observed precession of 5600.0 arc-seconds per century. Subsequently, however, more sophisticated technology has measured a different value of this precession (5599.7 arc-seconds per century, with a margin of error of only 0.01) ..." Considering only the anomalous precession, that is, the precession that remains after all known other factors (other planets and asteroids, solar oblateness) have been accounted for, general relativity predicts 42.98 ± 0.04 arcseconds per century. Some observed values are: 43.11 ± 0.21 (Shapiro et al., 1976) 42.92 ± 0.20 (Anderson et al., 1987) 42.94 ± 0.20 (Anderson et al., 1991) 43.13 ± 0.14 (Anderson et al., 1992) [Source: Pijper 2008] These error bars, and that of the relativity formula, all overlap. The formula for mechanics under general relativity is complicated, but it is not contrived or conformed. "Conformed" suggests that it was somehow adjusted or "tweaked" to match the 42.98 figure. The formula is $G_{\mu\nu} = \frac{8\pi K}{c^4} T_{\mu\nu}$. To begin to explain the formula, Newton's law of gravity, combining F = ma and $F = \frac{KMm}{r^2}$, is $a = \frac{KM}{r^2}$. In Einstein's equation, $T_{\mu\nu}$ is the "stress-energy tensor", and gives the density of the Sun, taking the place of $M$. $G_{\mu\nu}$ is the "Einstein curvature tensor", and says how spacetime curves to create an apparent gravitational acceleration. There is nothing to tweak to get a value of 42.98 arcseconds. 8 is 8. $\pi$ is $\pi$. K is Newton's constant of gravitation in both formulas, and $c$ is the speed of light. 9: The discontinuity in momentum as velocity approaches "c" for infinitesimal mass, compared to the momentum of light. The formulas for velocity, momentum, and mass can in fact be written in such a way that they appear to have discontinuities, just as the tangent function has discontinuities while the underlying sine and cosine functions do not. But they can also be written in a form that does not show discontinuities. All particles, with or without mass, can have any value of momentum.
The formula for the velocity of a particle, in terms of its mass and momentum, is

$$v = \frac{pc^2}{\sqrt{p^2c^2 + m^2c^4}}$$

For a particle with mass, this means that momentum of zero gives a speed of zero, and, as the momentum approaches infinity, the speed approaches c. For a massless particle, the speed is always c.

10: The logical problem of a force which is applied at a right angle to the velocity of a relativistic mass - does this act on the rest mass or the relativistic mass?
The simple answer is, unequivocally, that it acts on the 'relativistic' mass. The question seems to relate to a simple misunderstanding of Special Relativity. Einstein's theories lead to the conclusion that observers in different inertial frames of reference (i.e. observers with differing, but constant, velocities relative to the thing being observed) will observe different inertial masses in the body being observed. However, there is no variance in the body's mass with regard to the direction of the force. Thus to a given observer, a force in any direction will operate on the same mass. However, to a different observer, this mass may be different, although still constant with regard to the direction of the force.

11: The observed lack of curvature in overall space.
What? It has been observed, and a spatially flat overall geometry is entirely consistent with general relativity.

12: The universe shortly after its creation, when quantum effects dominated and contradicted Relativity.
We're still working on a quantum theory of gravity; this isn't so much a counterexample as saying that (classical) GR isn't valid in that domain.

13: The action-at-a-distance of quantum entanglement.
Special Relativity only forbids the transmission of matter, energy or information at a speed faster than light. There are plenty of other things that can move faster than light. Consider a laser on Earth which is rotating on a pivot, whose light shines onto the hull of a satellite 200,000 km away (2×10⁸ metres). If the laser rotates at a sedentary one revolution every four seconds, the speed of the laser beam's tip crossing the satellite's hull is about 3.14×10⁸ metres per second - faster than the speed of light. However, this is not a transfer of information. Any information is travelling from Earth to the satellite, obeying the universal speed limit. Similarly, the only information that can be transmitted by the quantum entanglement of two particles is from the originator of the particles to the two observers, not from one observer to another. Faster-than-light transmission of information using quantum entanglement has never been observed, nor has anyone even conceived of how such a mechanism might work.[2]

14: The action-at-a-distance by Jesus, described in John 4:46-54, Matthew 15:28, and Matthew 27:51.
As an argument against relativity, there are two reasons that this is invalid (beyond simply questioning the evidential validity of the Bible):
a) These passages clearly refer to a miracle. A miracle, being an act of God, is not subject to the laws of physics.
b) It is highly debatable as to whether the verses do describe action-at-a-distance in the sense of an action whose influence travels instantaneously (and therefore faster than the speed of light). This itself may be argued from two viewpoints:
i) When reading these passages, as with consideration of many apparent relativistic anomalies, the true picture of causality must be considered. In each case there are two apparent events. Event A - Jesus does something (says 'thy son liveth', says 'be it unto thee even as thou wilt', or Christ's spirit leaving His body).
Event B - the apparent result (the son lives, her daughter is made whole, or the earth quakes). However, it is not the case in any of these examples that A causes B. Both A and B are caused by a third event. In the first two cases it is Christ's thought that causes the miracle and that causes His lips to announce the miracle. This thought would have occurred fractions of a second before either event, and is the non-instantaneous cause of both. In the last case it is Christ's death that is the precursor and cause of both events. ii) It must be considered that at the time when the Gospels were written, neither their authors nor their intended readers were aware of any concept of the speed of light and were unable to measure the billionths of a second difference between the events being considered here. Thus just as in modern parlance the phrase 'at the same moment' has a tolerance of milliseconds (unless specifically couched to mean otherwise) so do the various terms used by the Evangelists. They would never have considered it an important issue, and would therefore not have worried about the degree of precision. 15: The failure to discover gravitons, despite wasting hundreds of millions in taxpayer money in searching. Gravitons are a prediction of Quantum Theory, not of relativity, although the concept is an extension of the relativistic idea that forces take a finite time to be transmitted over a distance. However, as with the failure to detect gravity waves (see 1 above), the lack of detection is conformant with the expectation that they would be unlikely to be detected with current technology. 16: Newly observed data reveal that the fine-structure constant, α (alpha), actually varies throughout the universe, demonstrating that all inertial frames of reference do not experience identical laws of physics as claimed by Relativity Whilst this observation is unconfirmed, if true it would still not invalidate relativity. Many things may vary with position in space, and relativity does not deny this. There is no suggestion that the fine-structure constant is different at the same point in space for observers in different non-inertial frame, as the 'counterexample' implies. 17: 18: The change in mass over time of standard kilograms preserved under ideal conditions. Clearly there are a few dots that need to be joined here before this can become a coherent argument for or against anything. The best interpretation that can be put on it is that the principle of conservation of mass is being violated. However relativity, with its concept of mass/energy equivalence, holds to a more general principle of conservation of energy. Thus relativity might easily explain the observation, as keeping the standard masses in a perfect energy isolated environment is a far harder task than keeping them in a matter isolated environment (which is itself not perfectly achievable). However, until specific explanations for the variations in mass of the various standard bodies are offered, there is no foundation to any speculation as to what laws of physics are involved, let alone whether they are being violated. 19: 20: 21: 22: 23: 24: 25: 26: Relativity requires that anything traveling at the speed of light must have mass zero, so it must have momentum zero. But the laws of electrodynamics require that light have nonzero momentum. This seems to be another basic misunderstanding of relativity, from someone who gave up halfway through the textbook. 
Newtonian momentum (p = mv) does certainly indicate that a body with zero mass (m) must have zero momentum whatever its velocity (v). However, the relativistic equation for momentum is:

$$p = \gamma m_0 v$$

where $m_0$ is the rest mass of the object and $\gamma$ is the Lorentz factor, given by

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

where c is the speed of light. For the case of a photon, where rest mass is zero and v is equal to c, this gives p as zero divided by zero - an undetermined value. However, with the substitution of the famous $E = mc^2$, where E is the energy of the body, the momentum equation can be rearranged to:

$$p = \frac{E\,v}{c^2}$$

With a photon of zero rest mass, this gives:

$$p = \frac{E}{c}$$

Finally, substituting Planck's Equation for the energy of a photon, $E = hf$, where h is Planck's Constant and f is the frequency of the photon, we get the familiar (and experimentally demonstrated) value for a photon's momentum of:

$$p = \frac{hf}{c} = \frac{h}{\lambda}$$

where $\lambda$ is the photon's wavelength.[3]

27: Relativity requires different values for the inertia of a moving object: in its direction of motion, and perpendicular to that direction. This contradicts the logical principle that the laws of physics are the same in all directions.
The rules for calculating inertia and other questions of mechanics are well known. The inertia, that is, the way that a force affects an object's momentum, is well known. Hundreds of physics textbooks discuss this in great detail, in terms of the Lorentz transform and the concepts of the force and momentum 4-vectors. The "inertia" comes from what is now called the mass, which used to be called the "rest mass". Archaic treatments formulated this in terms of the "relativistic mass", which was different. The mass is a scalar, and has no direction. The formulas for calculating the motion in terms of forces, in the direction of motion or transverse to it, are well known.

28: 29: 30:
The physics and mathematics underlying the "twin paradox" are well known. That one of the twins will have had to undergo different accelerations from the other before returning to the same point is what enables them to perceive different passage of time. This does not contradict relativity, and Einstein never said that it does. His explanation in terms of different acceleration is correct. The comment about extending the length of the trip so that the acceleration would be de minimis is wrong. It seems to suggest that the acceleration could be reduced until it is negligible. It can be reduced by lengthening the trip, but it is not negligible. The Lorentz transform, and the equations of motion, are mathematically exact. The integral of a very small function over a long period is still significant. If the twins followed different paths in spacetime, which they must in order to measure different elapsed proper time, they must have undergone different accelerations, however small those differences may have been. Of course, if they never come back to the same point, they could both undergo zero acceleration.

32: 33: Based on Relativity, Einstein claimed in 1909 that the aether does not exist, but in order to make subatomic physics work right, theorists had to introduce the aether-like concept of the Higgs field, which fills all of space and breaks symmetries.
Quantum field theory abounds with fields. The Higgs particle has a Higgs field. It has nothing to do with the "luminiferous aether".

34: 35: Minkowski space is predicated on the idea of four-dimensional vectors of which one component is time. However, one of the properties of a vector space is that every vector have an inverse. Time cannot be a vector because it has no inverse.
Time isn't a vector. It is a component of the vector space known as "spacetime". Vectors have negatives; the word "inverse" is not typically used here. While there are thermodynamic and other reasons for not allowing time to go backwards in the real world, the mathematics of spacetime allow vectors with any components, even negative ones. 36: 37: 38: Experiments in electromagnetic induction contradict Relativity: "Einstein's Relativity ... can not explain the experiment in graph 2, in which moving magnetic field has not produced electric field." The first cited reference, from which the quote was taken, is a totally crackpot web page, from a web site that seems to specialize in hosting crackpot papers. The writing is essentially illiterate and incoherent, as in this sentence: "According to Faraday's Law it can be explained as that, duo to the magnetic flux in conductor line changing, firstly induced electromotive force dU coming from the line-winded conductor to bring out voltage, then based on differential form of Ohm's Law, the physical natural would be regarded as 'voltage before electric current' " 39: Relativity breaks down if a solenoid is traveling at or near the speed of light. The equations of electrodynamics (Maxwell's equations), and their connection with relativity, are well known. Hundreds of electrodynamics textbooks cover this subject. The equations are correct at all realizable speeds, even relativistic (near the speed of light) speeds. (Maxwell's equations are said to be the only equations from classical physics that did not need to be modified for relativity.) The equations correctly describe the behavior of magnets (this is presumably what was meant by "solenoids"), charges, and electric and magnetic fields, even at speeds near the speed of light. Of course the equations don't work at the speed of light. The cited article never discussed speeds near the speed of light, only at the speed of light. The questioner was rightly taken to task for his physically unrealizable assumption.
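As a numerical footnote to item 8 above: the 42.98 arcsecond figure is not something that gets tuned; it follows from the standard weak-field perihelion-advance result, $\delta\phi = 24\pi^3 a^2 / [T^2 c^2 (1 - e^2)]$ radians per orbit. A minimal sketch with rounded orbital constants (my own check, not part of the original essay):

```python
# Perihelion advance of Mercury from the standard GR weak-field formula.
import math

a = 5.79e10            # semi-major axis of Mercury's orbit, m
T = 87.97 * 86400.0    # orbital period, s
e = 0.2056             # orbital eccentricity
c = 2.998e8            # speed of light, m/s

dphi = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))   # radians per orbit
arcsec_per_orbit = math.degrees(dphi) * 3600
orbits_per_century = 100 * 365.25 / 87.97
print(arcsec_per_orbit * orbits_per_century)   # ~43 arcseconds per century
```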
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099239110946655, "perplexity": 639.8310876566143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870497.66/warc/CC-MAIN-20180527225404-20180528005404-00281.warc.gz"}
http://www.purplemath.com/learning/viewtopic.php?t=632
## Find an area of the triangle with the given vertices? Complex numbers, rational functions, logarithms, sequences and series, matrix operations, etc. veevee305 Posts: 1 Joined: Wed Jun 17, 2009 8:18 pm Contact: ### Find an area of the triangle with the given vertices? I missed this part of class and so now cannot understand it at all. We're working with matrices right now, so it must have to do with those. Here are the directions: Find the area of the triangle with the given vertices. The first problem: A(0,1), B(2,6), C(5,5) If someone could step me through this so I could do the other problems as well, it would be greatly appreciated. Thank you! jaybird0827 Posts: 24 Joined: Tue May 26, 2009 6:31 pm Location: NC Contact: ### Re: Find an area of the triangle with the given vertices? veevee305 wrote:I missed this part of class and so now cannot understand it at all. We're working with matrices right now, so it must have to do with those. Here are the directions: Find the area of the triangle with the given vertices. The first problem: A(0,1), B(2,6), C(5,5) If someone could step me through this so I could do the other problems as well, it would be greatly appreciated. Thank you! You don't need matrices to do this. Steps: 1.) Draw a coordinate system. 2.) Plot the three points A, B, C according to the coordinate pairs given each point. 3.) Use the distance formula to find the lengths of the three sides, AB, BC, AC. 4.) If this is a right triangle, use the ordinary formula for the area of a triangle $A\,=\,\frac{1}{2}bh$ where $b$ and $h$ are the legs of the triangle. If this is not a right triangle, then use Hero's formula to compute the area. That formula should be in your textbook. Now it's your turn. Please follow the preceding steps. If you get stuck, please post a reply showing how far you were able to get with the above. stapel_eliz Posts: 1738 Joined: Mon Dec 08, 2008 4:22 pm Contact: If you have to use matrix algebra, then note that this is actually done with the determinant.
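To make the determinant remark concrete (one common form; worth checking against the textbook's exact statement): take the two edge vectors out of A and compute half the absolute value of their 2×2 determinant. For A(0,1), B(2,6), C(5,5),

$\text{Area} \;=\; \frac{1}{2}\left|\det\begin{pmatrix} x_B - x_A & y_B - y_A \\ x_C - x_A & y_C - y_A \end{pmatrix}\right| \;=\; \frac{1}{2}\left|\det\begin{pmatrix} 2 & 5 \\ 5 & 4 \end{pmatrix}\right| \;=\; \frac{1}{2}\,|8 - 25| \;=\; \frac{17}{2}$

which should agree with the distance-formula route suggested above.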
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868739008903503, "perplexity": 349.6574081017522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736672923.0/warc/CC-MAIN-20151001215752-00042-ip-10-137-6-227.ec2.internal.warc.gz"}
http://djconnel.blogspot.com/2013_02_01_archive.html
## Saturday, February 23, 2013

### Chanteloup profile revised, and the unreliability of Strava profile data

I previously posted a route profile for the Chanteloup climb near Paris, France. This is a climb with an amazingly rich cycling history, going back to Velocio, who used it to demonstrate the superiority of multi-gear bikes to the fixed-gear bikes which dominated professional racing in the early 20th century. Later it became the site of the "Poly de Chanteloup" event, which was contested by randonneurs and professional racers. The randonneur event unfortunately seems to have died, but the professional race continued as the Trophée des Grimpeurs, a traditional last race of the season, until sponsorship was lost in 2010 and it also expired.

Jan Heine, in his highly recommended book "René Herse", has the following quote about the gearing of the winning tandem in the 1949 randonneur contest. I posted this last time but I repeat it here because I find it remarkable:

A single chainring was sufficient for the 14% of the climb of Chanteloup, which had to be climbed eight times. In 1950 a report listed the gearing of their tandem: a single chainring with 46 teeth and a 5-speed freewheel with 13-15-18-21-22 teeth. They never used the 22-tooth cog, and they made their attack with the 46-18.

How could they ride a tandem up the climb whose profile I got from Strava? It boggled the mind. But it soon became obvious to me something was amiss. First I checked StreetView, and the hill simply didn't look as steep as advertised. That's not uncommon, but the places where the grade changed dramatically in the profile weren't reflected in the StreetView. The other factor was the profile failed to match the crude one published in Heine's book on page 175. That profile, "le profil du parcours", was obviously taken from some old race guide. The nail in the coffin was when someone on the Weight Weenies forum who'd ridden the climb said it "wasn't more than 12%".

As is often the case, I'd succumbed to laziness, taking the first available data. The problem is when one defines a segment on Strava, the data from the activity used to define the segment is taken as the reference data. Unfortunately ride data varies substantially in quality. Data from activities recorded without altimetry is interpolated by Strava onto reference altitude data. This can be quite good, except position errors, particularly from older iPhones, translate into altitude errors via slopes transverse to the road, and these transverse slopes are often considerable (grades of 100% are not uncommon). A 10 meter position error could yield a 10 meter altitude error, and an error of 10 meters in the difference between altitudes of two nearby points is obviously going to have a profound effect on extracted road grade.

Worse, perhaps, is GPS altitude. GPS-only altitude, reported for example by the Garmin Edge 200, varies substantially in quality, and is unreliable. Barometric altimetry is better. There the unit measures the ambient pressure and temperature, converts the combination into an altitude, and reports that. This is better for short-term differences in altitude. It becomes even better if the unit uses GPS altimetry as calibration. The Garmin Edge 500 is fairly good with altitude once it has equilibrated with the outdoor temperature. The Edge 800 is even better. So if I can find data from either of these units, I use that in preference to other sources.
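To put a rough number on the position-error effect described above (my own illustrative figures, not measurements):

```python
# An apparent altitude error is roughly (horizontal position error) x (slope
# transverse to the road); dividing by the spacing between profile points turns
# it into a spurious grade. Numbers below are purely illustrative.
position_error = 10.0    # metres of horizontal GPS error
cross_slope = 1.0        # 100% grade transverse to the road, as mentioned above
point_spacing = 20.0     # metres between consecutive profile points

altitude_error = position_error * cross_slope
spurious_grade = altitude_error / point_spacing
print(f"~{altitude_error:.0f} m altitude error -> ~{spurious_grade:.0%} of bogus grade between points")
```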
(Aside: the best of all is perhaps iBike, since that measures grade directly, which I can combine with the reported altitude to generate data which has good grade at even small length scales, but iBike data is rare and isn't found on Strava.)

The issue with looking at Strava segments is there is no direct indicator of where the data used to generate the profile came from. Activities are tagged, but when an activity is used to define a segment, the segment is not. So better than using the segment data directly is to take data from the activities in the leaderboard. I prefer recent activities near the top of leaderboards, especially if the leaderboard is deep. Then I check to make sure the activity was recorded with a trusted unit (Edge 500, 501, 800, or 801, for example) and that the position data track the road or path nicely. That's the best approach, the approach I didn't take in my previous blog post.

revised profile, using data from Strava KOM

So here's a revised profile, showing as well the profile from Heine and the data I used last time. I vertically displaced the Edge 500 data to match the peak altitude of the Heine profile. On the Heine profile the first and last point were explicitly listed, but the center point was not, so I eyeballed it to match the graphic in the book. The result is the Edge 500 data falls in close agreement with the data from Heine, while the data from the Strava segment shows the climb to be much steeper, gaining more vertical. So the conclusion is the climb sustains around 4.3% for a while, then after a brief flattening sustains 10%, then finishes with a very brief 16% or so, gaining only 8 meters. 10% is a steep climb, for sure, especially on those gears, but this is a big difference from the much higher numbers concluded from the Strava segment data.

## Tuesday, February 19, 2013

### gearing on the 1949 Poly Chanteloup by the Herse tandem

I've been reading Jan Heine's truly excellent book, "René Herse". Here's a quote, referring to the tandem ridden by the builder's daughter Lili and Rene Prestat in winning the 1949 Poly Chanteloup, a circuit race which included a popular climb outside of Paris:

A single chainring was sufficient for the 14% of the climb of Chanteloup, which had to be climbed eight times. In 1950 a report listed the gearing of their tandem: a single chainring with 46 teeth and a 5-speed freewheel with 13-15-18-21-22 teeth. They never used the 22-tooth cog, and they made their attack with the 46-18.

I figured 14%? Sure, typical French hyperbole. It's probably 14% on the inside of a switchback. And indeed, the book has a route profile which shows the climb to average only 7%. Riding a 46-18 up a 7% grade is still a considerable gear. I PR'ed Old La Honda Road, which averages 7.3%, riding a 36-18, and the tandem gear was 28% bigger. So I checked Strava. Here's the Chanteloup climb as viewed with VeloViewer, which produces excellent hill profiles from Strava data:

14%? It's more like 15% sustained, with a nice little dose of 16% to loosen up the legs. Even the portion leading up to that is steep, exceeding 8%. I simply cannot imagine climbing that on a tandem, where riders typically spin due to the synchronization challenges, with a 46-21 low gear. The profile I prepared directly from the Strava data is even meaner-looking:

At least the climb is short. The portion in the segment (it continues to climb a bit more, gradually, following the segment) gains 120 meters, which is only 30% of Old La Honda's altitude gain.
But the riders climbed it eight times in the race.

A word about these races: it is curious seeing a race where the winning bike has a front rack, full-length fenders, and even mud flaps. But those were required by the rules, as they were randonneuring events. These were in the spirit of the Low-Key Hillclimbs, where results were counted, winners were announced, but participation was at least as important as results. This was particularly important because racing at the time lacked the deep age-and-accomplishment based category system available in U.S. racing.

## Tuesday, February 5, 2013

### second-order differentiation of speed with respect to wind speed using the chain rule

I was reminded of some issues in differential calculus when working on the previous blog post. I had an equation for power as a function of speed and wind speed: P(s, sw). Since I assume that power is constant, it is an implicit equation: it defines contours in (s, sw) space of constant power. In the case of power, solving s as a function of sw may be a considerable challenge. But The Chain Rule comes to the rescue. With the Chain Rule I can write:

dP = (∂P / ∂s) ds + (∂P / ∂sw) dsw .

But I'm assuming constant P, so dP = 0. I can then write:

(∂P / ∂s) ds = −(∂P / ∂sw) dsw .

And with re-arranging terms I get what I want:

ds / dsw = −(∂P / ∂sw) / (∂P / ∂s) .

This was all fairly simple (although that minus sign confused me when I first saw it as an undergraduate). The tricky bit for me was the second derivative. But there's no reason for it to be complicated, either. If I write the first derivative as a function r ≡ ds / dsw :

d²s / dsw² = dr / dsw .

So I apply the chain rule as before with the derivative function r:

dr = (∂r / ∂sw) dsw + (∂r / ∂s) ds .

But what I care about is dr / dsw, so I divide all terms by dsw:

d²s / dsw² = dr / dsw = (∂r / ∂sw) + (∂r / ∂s) r .

Note the substitution ds / dsw = r. This equation is all in terms of r ≡ ds / dsw, and so I can evaluate it, since I already evaluated r. So I solve the first and second derivative of speed with respect to wind speed without ever actually evaluating a closed-form solution. It took me several hours before I had clarity on this.

Note the first term is a partial derivative with respect to the wind speed. The second term is a partial derivative with respect to speed multiplied by a full derivative of speed with respect to wind speed. The first term is, at constant rider speed, the effect of wind speed on the rate of change of rider speed with respect to wind speed (this takes a bit of thought to wrap your head around). The second term is the effect of rider speed on that same rate of change, multiplied by the effect of wind speed on rider speed. So it's adding two critical components, each ignoring the other, to come up with the net effect. What I posted yesterday included the first term but omitted the second term. That's more complicated. I'll fix yesterday's post when I get it right.

## Monday, February 4, 2013

### tail/headwind effect on speed: analytic second-order evaluation

I already showed my "simple" model for how an arbitrary wind affects speed. This was based in part on a numerical fit to an implicit power-speed calculation, where I found that, to decent approximation, the logarithm of speed as a function of tailwind/headwind was well fit by a parabola.
The formula I used, combining the effect of a tailwind/headwind with the effect of a crosswind, was:

s' = exp[ −(sw' / 3)² ] exp[ 2 swx' / 3 ],

where s' is the ratio of flat-road speed with the wind to flat-road speed without the wind, sw' is the ratio of wind speed to flat-road speed without wind, and swx' is the component of that in the direction of rider travel. I already showed an analytic derivation of the cross-wind term, which is proportional to the square of sw'. I also did a first-order dependence of s' on swx'. However, to justify this full model other than numerically requires a second-order dependence of s' on swx'.

So to do this I return to the power-speed model, where I assume two coefficients for power: fw is a coefficient for wind resistance, and fm is a coefficient for gravity (rolling resistance and climbing). I exclude drivetrain losses, assuming they are constant at constant power. Since I eventually constrain power to be constant, drivetrain losses are also constant, and so I am okay simply omitting them. The power-speed equation is, in terms of these variables:

P' = s' [ fw (s' − sw')² + fm ] ,

where I assume a normalized power P' = 1 when s' = 1 and sw' = 0, yielding the constraint fw + fm = 1. For the first-order analysis I differentiated this once:

dP' = d { s' [ fw (s' − sw')² + fm ] } = 0 .

The differential power is zero because I assume constant power. This yields:

dP' = ds' [ fw (s' − sw')² + fm ] + 2 fw s' (s' − sw') (ds' − dsw') .

I am interested in ds' / dsw', so:

ds' [ 2 fw s' (s' − sw') + fw (s' − sw')² + fm ] = dsw' 2 fw s' (s' − sw') .

I can then trivially calculate the derivative:

ds' / dsw' = 2 fw s' (s' − sw') / [ 2 fw s' (s' − sw') + fw (s' − sw')² + fm ]

So if I am starting on flat roads with minimal wind, then s' = 1 and sw' = 0 and I get:

ds' / dsw' = 2 fw / (3 fw + fm) .

If I define f to be the fraction of power originally going into wind resistance, this becomes:

ds' / dsw' = 2f / (2f + 1)

But now I want to make sure I get this to second order, as well. This is tricky (see next post) because I need to apply the chain rule properly, and I had it incorrect before I finally revised it. The result, which I verified by comparison with numerical calculation, is:

d²s' / dsw'² = 2f (4f − 1) / (2f + 1)³ .

Here again I use f as the fraction of power, in zero wind, from wind resistance. I'll assume an exponential dependence, which was validated in the numerical fits. Exponential fits make sense, since at constant power the wind can't reverse the rider's direction of travel; speed can only approach zero, and an exponential dependence has that behavior. With an exponential dependence, matching the first derivative already produces a non-zero second derivative, so the second-order coefficient in the exponent needs to be evaluated to account only for the difference. When I do this I get:

v0 = v00 × exp[ 2f sw / (2f + 1) v00 ] × exp[ −[ f (4f² − 2f + 1) / (2f + 1)³ ] (sw / v00)² ]

This matches what I have been using assuming f = 1, or in other words that wind resistance dominates. This isn't a great approximation, but it's a good compromise for consolidation with the cross-wind term. So this gets rid of the ugly numerical fits in the wind analysis. There is a first-order dependence when there's a tail-wind or head-wind, but the analysis needs to be done to second order on a closed circuit with constant wind because the first-order component cancels on the upwind and downwind portions, so second-order components become important.
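Echoing the "verified by comparison with numerical calculation" remark, here is a finite-difference check of both derivative formulas (a sketch; the root-finder and step size are my own choices):

```python
# Solve the constant-power relation s'(fw*(s'-sw')^2 + fm) = 1 for s' and
# compare numerical derivatives at sw' = 0 with 2f/(2f+1) and 2f(4f-1)/(2f+1)^3.
from scipy.optimize import brentq

def speed(swp, f):
    fw, fm = f, 1.0 - f
    return brentq(lambda s: s * (fw * (s - swp)**2 + fm) - 1.0, 0.1, 10.0)

f = 0.8                      # fraction of power going into wind resistance
h = 1e-3                     # finite-difference step in sw'
s0, sp, sm = speed(0.0, f), speed(h, f), speed(-h, f)

print((sp - sm) / (2 * h), 2 * f / (2 * f + 1))                         # first derivative
print((sp - 2 * s0 + sm) / h**2, 2 * f * (4 * f - 1) / (2 * f + 1)**3)  # second derivative
```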
## Sunday, February 3, 2013 ### analytic approximation to random direction wind Last time I gave up trying to solve this integral. I wanted to watch Cyclocross Worlds so got lazy and did a numerical solution and fit. But I realized while out running after the racing that I could have done much better. So I'm back for more. (1 / 2π) ∫ dφ exp[ (sw' / 3)2 ] exp[ −2 swx' / 3 ], where the integral is over the full circle and φ is the angle of the wind relative to the rider (0 = pure tail wind). The key here is to recognize that this can be well-approximated by a Gaussian for sw' to at least 1. This isn't the solution of the integral, but it's a good approximation, so once I recognize the analytic form of the solution I can get away just matching derivatives with respect to sw' = the ratio of the wind speed to the zero-wind rider speed. So my solution will be the following, where I must solve for K: exp[ (sw' / K)2 ]. I recognize that for every value of positive swx', there is a corresponding negative value, and so I can combine them in the integral and then integrate over the half-circle for which swx' are positive (from −π/2 to +π/2): (1 / π) ∫ dφ exp[ (sw' / 3)2 ] cosh[ 2 swx' / 3 ]. I can then do a low-order expansion on these exponentials to generate a linear equation: exp(x) ≈ 1 + x, and cos(x) ≈ 1 + x2/2. The integral thus becomes the following approximation: (1 / π) ∫ dφ [1 + (sw' / 3)2] [ 1 + 2 (swx' / 3)2 ]. Now I plug in the formula for the tail wind component swx': (1 / π) ∫ dφ [1 + sw' / 3)2] [ 1 + 2 (sw' / 3)2 sin2φ ]. I can omit terms proportional to sw'4: (1 / π) ∫ dφ [1 + (1 + 2 sin2φ) (sw' / 3)2]. Recognizing the average value of sin2φ = 1/2, the integral is trivial: 1 + 2 (sw' / 3)2 . The low-order expansion of the Gaussian approximation is: 1 + (sw' / K)2. So I conclude K = sqrt(9/2). This is essentially what I got numerically. So written more elegantly, the approximation to the average speed of a rider slowed down by a constant wind coming from wind distributed evenly from all possible directions during a closed course is: <v0> = v00 exp[− 2 (sw / 3 v00)2], where <v0> is the average flat-road speed and v00 is the zero-wind flat-road speed. The interesting thing about this result is the average speed reduction, including the headwind and tailwind sections, is twice the speed reduction in the cross-wind sections, for relatively small speed reductions. So if the cross-winds slow me down by 1%, over the whole course I'll be slowed around 2%, considering the effects of the head-winds, tail-winds, and partial cross-winds. It's interesting to compare this to the result for an out-and-back with direct head and tail wind. There the average time per unit distance is scaled by cosh[ 2 sw' / 3 ] exp[ (sw' / 3)2 ] ≈ exp[3 (sw' / 3)2]. So the result in wind equally distributed over all angles equally is 2/3 as much as that from a pure headwind plus a pure tailwind. ## Saturday, February 2, 2013 ### attempt at calculating effect of random-direction wind Last time, I discussed a simple approximation which comes fairly close to predicting the speed at constant power in an arbitrary wind of reasonable magnitude. The formula can be written as a normalized speed s' (normalized to no wind) and a normalized wind sw' (normalized to the same scale) as follows: s' = exp[ −(sw' / 3)2 ] exp[ 2 swx'/ 3 ], where swx' is the normalized speed of the wind in the direction of the rider (positive for tailwind, negative for headwind). 
For a typical closed circuit, the rider will head in random directions relative to the wind. If the wind is constant than the average distance-weighted direction is zero: the rider starts where he begins. Last time I calculated a result assuming a square course aligned with the wind. That's an over-simplification, even if it's probably representative. But I'd prefer to consider the more generally applicable case of all directions equally likely. So I assume the rider goes in a big circle. I can then integrate 1/s' around the circle. 1/s' is the time taken per unit distance for a given point on the course. The equation as follows: (1 / 2π) ∫ dφ exp[ (sw' / 3)2 ] exp[ −2 swx' / 3 ], where the integral goes from 0 to 2π. Note that swx' is a function of φ so I need to convert one to the other to evaluate the integral. I want to avoid the exponential of a cosine (transcendental of transcendental) so I choose to get rid of φ: d swx' = sw' d cos φ = −sw' sin φ dφ. I convert the differential dφ: dφ = dz / sin φ, where I define a temporary variable: z ≣ swx' / sw' = cos φ. I can get rid of the sin φ using: sin φ = sqrt(1 - z2) The integral becomes: (1 / π) exp[ (sw' / 3)2 ] ∫ dz exp[ 2 sw' z / 3 ] / sqrt(1 - z2) where z goes from −1 to +1, and where I made a few trivial simplifications. Here I'm stuck. It looks like it shouldn't be hard, but I simply don't know what to do next. It obviously converges despite going infinite at the endpoints of integration: for z = 1 - δ where δ is small and positive, 1 / sqrt(1 − z2) = 1 / sqrt(1 − (1 − δ)2) ≈ 1 / sqrt(1 − (1 − 2δ)) = 1 / sqrt(2δ), and the integral of that with respect to δ is sqrt(2δ), which goes to zero for δ from 0 to a small positive limit. So that's not a problem. But I was hoping for a simple analytic solution. Back in the day, that would be the end of it unless I could come up with some clever approximation. But instead I quickly succumb to the temptation of numerical evaluation and fit so I can watch the cyclocross world championship web feed. The result is in the following plot: I did the fit using only the function from −0.5 to +0.5, since winds outside this range at ground level would be rare. Despite this, the function extrapolates nicely out to my plotted range of −2 to +2. Converting from time to average speed, and going from normalized speeds back to actual speeds for rider and wind, the fit is: <v0> = v00 exp[ − (sw / 2.12 v00)2 ] Here's the same plot with log-log axes, showing the time delay rather than time: The result is a wind equal to half the zero-wind rider speed delays the rider around the circular course by approximately 5%. ## Friday, February 1, 2013 ### adding wind to heuristic bike-speed model The motivation for the preceding analysis was to add wind effects to my heuristic speed model. The philosophy of the heuristic speed model is to not rely on a constant power approximation, but try to model cyclist behavior directly. So how do riders behave when faced with a wind? When I first started riding with a heart rate monitor, an early lesson was my heart rate dropped when I was riding into the wind. This was obviously psychological: the wind was defeating me and I lost the motivation to pedal hard. Then I moved from the San Francisco Bay area to Austin where I lived for three years and winds were a predictable part of every ride, while extended hills were essentially gone. Instead of challenging myself on long hills, I learned to use treat the headwinds as a challenge rather than bad luck. 
As a result, I began increasing my heartrate, rather than decreasing it, when I encountered headwinds. With groups it's different: the group may be motivated to hammer into the wind, or it may be that nobody wants to pull and the speed drops. Crosswinds are another matter: gusting crosswinds tend to reduce power because the rider needs to focus more on controlling the bike. It's hard to get into a rhythm with heavy cross-winds. Groups lose their coherence because riders get ridden into the gutter. So lacking insight into behavior in winds, and with data on ground-level wind speed so much less reliable than climbing data, I'll just assume people ride constant power. Using a simple power-speed model with "reasonable" coefficients I derived for a headwind, on flat ground: v0 = v00 exp[0.62 sw / v0 − (0.3 sw / v0)2],   tail/head-wind where v00 refers to the level-ground speed in zero wind, and v0 is the level-ground speed at the given wind speed (measured in the same direction as the rider) sw. For the cross-wind, the fit yielded: v0 = v00 exp[−(0.37 sw / v0)2].   cross-wind I can simplify this and combine them to arbitraty wind direction with the following, using sw to represent the total wind speed and swx to represent the wind in direction of travel: v0 = v00 exp[2 swx /3 v0 − (sw / 3v0)2].   arbitrary wind Note the square term applies to the absolute wind speed indpendent of direction, while the other term applies to the component of the wind speed in the rider direction. There is no explicit cross-wind term. The nice round numbers are justified since wind is never really constant anyway, either across the rider's body or over time, so higher prevision would be wasted. The v0 in this equation is then a revised value to use for my heuristic speed model. To test this, I constructed data for a rider traveling a 5 km radius circle at 8 mps nominal speed on the flat with a 3 mps wind from the north. The speed model with the wind adjustment predicted a speed varying from 5.15 mps into the wind (64%) to 9.11 mps with the tailwind (114%). Power varied from 112 watts into the wind to 126 watts with the tailwind. So that's a +/- 6% variation about the mean power for a variation in speed from -36% to +14%. So the rider maintained much closer to constant power than he would have with a constant wind-independent speed. It's interesting to ask what this model predicts for the effect of constant wind for a closed circuit. As a test case consider a square course aligned with the constant wind direction. First, the 2nd-order term is constant: time is increased by a factor exp[(sw / 3v00)2]. But then there's the first order term. That aids on the headwind leg, but then reduces time on the tailwind section. The net effect on time is (1 + cosh[ 2 sw / 3 v00 ] ) / 2 (hyperbolic cosine). So the result is for a square course aligned with the wind the wind will increase time by a factor (neglecting fatigue): exp[(sw / 3v00)2] ( 1 + cosh[ 2 sw / 3 v00 ] ) / 2 For a circular course the result will be a bit different, the cosh term replaced by something similar, the exponential term the same.
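A closing numerical note (mine, not from the original posts): the "something similar" that replaces the cosh term on a circular course works out to the modified Bessel function I0, since the average of exp[−2 sw cos(φ) / 3 v00] over all directions is I0(2 sw / 3 v00); that is also the closed form of the integral the February 2 post left unevaluated. A quick comparison of the two course shapes:

```python
# Square-course factor:   exp[(w/3)^2] * (1 + cosh(2w/3)) / 2
# Circular-course factor: exp[(w/3)^2] * I0(2w/3)
# with w = sw / v00.  They agree to second order in w, consistent with the
# "roughly twice the crosswind-only penalty" conclusion above.
import numpy as np
from scipy.special import i0

for w in (0.1, 0.25, 0.5):
    common = np.exp((w / 3) ** 2)
    square = common * (1 + np.cosh(2 * w / 3)) / 2
    circle = common * i0(2 * w / 3)
    print(f"sw/v00 = {w:4.2f}   square: {square:.4f}   circle: {circle:.4f}")
```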
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8620351552963257, "perplexity": 2326.646481109403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268533.31/warc/CC-MAIN-20140728011748-00041-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.educator.com/mathematics/multivariable-calculus/hovasapian/stokes'-theorem-part-1.php
Raffi Hovasapian — Multivariable Calculus: Stokes' Theorem, Part 1
[Site banner and the full course lecture navigation (section and lecture listing with durations and slide timestamps) omitted.]
Question 13:12 Part 2: Solution 13:56 • ## Related Books 1 answerLast reply by: Professor HovasapianSun Jan 20, 2013 11:01 PMPost by mateusz marciniak on January 20, 2013my cross product came out different. mine came out to be cos pheta + sin pheta + 0, and for the curl(F) i got 1, -1, 1. it could be my algebra mistake. would that still be considered a constant vector but the second term would be flipped because of the negative one? ### Stokes' Theorem, Part 1 Let F(x,y,z) be a vector field such that F = (z,y, − x) and P(t,u) be a parametrization such that P = (u,t,u + t). Find curl(F)oP • To find the composition curl(F)oP we first compute curl(F) = ∇×F. • Then ∇×F = 0i + 2j + 0k or (0,2,0). So curl(F)oP = (0,2,0)o(u,t,u + t) = (0,2,0). Note that curl(F) is a constant vector. Let F(x,y,z) be a vector field such that F = (x2 + y − z,x + y2 − z,x + y − z2) and P(t,u) be a parametrization such that P = (u,t,u − t). Find curl(F)oP • To find the composition curl(F)oP we first compute curl(F) = ∇×F. • Then ∇×F = 2i − 2j + 0k or (2, − 2,0). So curl(F)oP = (2, − 2,0)o(u,t,u − t) = (2, − 2,0). Note that curl(F) is a constant vector. Let F(x,y,z) be a vector field such that F = (xey,yez,zex) and P(t,u) be a parametrization such that P = (u,t,1). Find curl(F)oP • To find the composition curl(F)oP we first compute curl(F) = ∇×F. • Then ∇×F = − yezi − zexj − xeyk or ( − yez, − zex, − xey). So curl(F)oP = ( − yez, − zex, − xey)o(u,t,1) = ( − te, − eu, − uet). Let F(x,y,z) be a vector field such that F = ( − x, − y,1) and P(f,q) be a parametrization such that P = (cosq,sinf,q + f). Find [ curl(F)oP ] ×N • To find the scalar product [ curl(F)oP ] ×N we first find curl(F)oP and N. • Now, curl(F) = ∇×F = 0i − 0j + 0k or (0,0,0). • So curl(F)oP = (0,0,0)o(cosq,sinf,q + f) = (0,0,0). • Recall that N = [dP/df] ×[dP/dq]. We have [dP/df] = (0,cosf,1) and [dP/dq] = ( − sinq,0,1) • Then N = (0,cosf,1) ×( − sinq,0,1) = cosfi − sinqj + cosfsinqk or (cosf, − sinq,cosfsinq) Hence [ curl(F)oP ] ×N = (0,0,0) ×(cosf, − sinq,cosfsinq) = 0 + 0 + 0 = 0. Note that if curl(F) = (0,0,0) then [ curl(F)oP ] ×N = 0. Let F(x,y,z) be a vector field such that F = (x,y,x + y) and P(f,q) be a parametrization such that P = (sinf,cosq,q − f). Find [ curl(F)oP ] ×N • To find the scalar product [ curl(F)oP ] ×N we first find curl(F)oP and N. • Now, curl(F) = ∇×F = 1i − 1j + 0k or (1, − 1,0). • So curl(F)oP = (1, − 1,0)o(sinf,cosq,q − f) = (1, − 1,0). • Recall that N = [dP/df] ×[dP/dq]. We have [dP/df] = (cosf,0, − 1) and [dP/dq] = (0, − sinq,1) • Then N = (cosf,0, − 1) ×(0, − sinq,1) = sinqi − cosfj − cosfsinqk or (sinq, − cosf, − cosfsinq) Hence [ curl(F)oP ] ×N = (1, − 1,0) ×(sinq, − cosf, − cosfsinq) = sinq + cosf Find a parametrization P(t,u) for 4x − 3y + 2z = 3 and compute N = [dP/dt] ×[dP/du]. • To parametrize 4x − 3y + 2z = 3 we let x = t, y = u and z = f(x,y). Solving for z gives z = [3/2] − 2x + [3/2]y and so P(t,u) = ( t,u,[3/2] − 2t + [3/2]u ). Now, [dP/dt] = (1,0, − 2) and [dP/du] = ( 0,1,[3/2] ) so N = [dP/dt] ×[dP/du] = 2i − [3/2]j + k or ( 2, − [3/2],1 ) Find a parametrization P(t,u) for z = 3 − x2 − y2 and compute N = [dP/dt] ×[dP/du]. • To parametrize z = 3 − x2 − y2 we let x = t, y = u and z = f(x,y) and so P(t,u) = (t,u,3 − t2 − u2). Now, [dP/dt] = (1,0, − 2t) and [dP/du] = (0,1, − 2u) so N = [dP/dt] ×[dP/du] = 2ti + 2tj + k or (2t,2u,1) Find the circulation of the vector field F(x,y,z) around the curve P(t,u) given that F = (x,y,x + y + z) and P = ( [t/2],[u/2],t + u ) for 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1. 
• We can use Stoke's Theorem to find the circulation of a vector field F around a curve P, that is computing dtdu where S is the surface of the region P faces counterclockwise. • To integrate ∫0101 [ curl(F)oP ] ×N dtdu we first compute [ curl(F)oP ] ×N. • First, curl(F) = ∇×F = 1i − 1j + 0k or (1, − 1,0), so curl(F)oP = (1, − 1,0)o( [t/2],[u/2],t + u ) = (1, − 1,0). • Second, N = ( [1/2],0,1 ) ×( 0,[1/2],1 ) = − [1/2]i − [1/2]j + [1/4]k or ( − [1/2], − [1/2],[1/4] ) Hence [ curl(F)oP ] ×N = (1, − 1,0) ×( − [1/2], − [1/2],[1/4] ) = 0 and ∫0101 [ curl(F)oP ] ×N dtdu = 0 Find the circulation of the vector field F(x,y,z) around the curve P(t,u) given that F = (z + y,z − x,x + y) and P = ( u2,t2,1 ) for 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1. • We can use Stoke's Theorem to find the circulation of a vector field F around a curve P, that is computing dtdu where S is the surface of the region P faces counterclockwise. • To integrate ∫0101 [ curl(F)oP ] ×N dtdu we first compute [ curl(F)oP ] ×N. • First, curl(F) = ∇×F = 0i − 1j + 0k or (0, − 1,0), so curl(F)oP = (0, − 1,0)o( u2,t2,1 ) = (0, − 1,0). • Second, N = ( 0,2t,0 ) ×( 2u,0,0 ) = 0i − 0j − 4tuk or ( 0,0, − 4tu ) • Hence [ curl(F)oP ] ×N = (0, − 1,0) ×( 0,0, − 4tu ) = 0. We now integrate ∫0101 [ curl(F)oP ] ×N dtdu = ∫0101 0 dtdu = 0 Find the circulation of the vector field F(x,y,z) around the curve P(t,u) given that F = ( − x, − y,xyz) and P = ( [1/t],[1/u],t − u ) for 0.5 ≤ t ≤ 1 and 0.5 ≤ u ≤ 1. • We can use Stoke's Theorem to find the circulation of a vector field F around a curve P, that is computing dtdu where S is the surface of the region P faces counterclockwise. • To integrate ∫0.510.51 [ curl(F)oP ] ×N dtdu we first compute [ curl(F)oP ] ×N. • First, curl(F) = ∇×F = xzi − yzj + 0k or (xz, − yz,0), so curl(F)oP = (xz, − yz,0)o( [1/t],[1/u],t − u ) = ( 1 − [u/t],1 − [t/u],0 ). • Second, N = ( − [1/(t2)],0,1 ) ×( 0, − [1/(u2)], − 1 ) = − [1/(u2)]i − [1/(t2)]j + [1/(t2u2)]k or ( − [1/(u2)], − [1/(t2)],[1/(t2u2)] ) • Hence [ curl(F)oP ] ×N = ( 1 − [u/t],1 − [t/u],0 ) ×( − [1/(u2)], − [1/(t2)],[1/(t2u2)] ) = − [1/(t2)] + [2/tu] − [1/(u2)]. • We now integrate ∫0.510.51 [ curl(F)oP ] ×N dtdu = ∫0.510.51 ( − [1/(t2)] + [2/tu] − [1/(u2)] ) dtdu Thus ∫0.510.51 ( − [1/(t2)] + [2/tu] − [1/(u2)] ) dtdu = ∫0.51 ( − 1 − [1/(2u2)] − [2/u]ln( [1/2] ) ) du = − 1 − 2[ ln(0.5) ]2 *These practice questions are only helpful when you work on them offline on a piece of paper and then use the solution steps function to check your answer. ### Stokes' Theorem, Part 1 Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. • Intro 0:00 • Stokes' Theorem 0:25 • Recall Circulation-Curl Version of Green's Theorem • Constructing a Surface in 3-Space • Stokes' Theorem • Note on Curve and Vector Field in 3-Space • Example 1: Find the Circulation of F around the Curve 12:40 • Part 1: Question • Part 2: Drawing the Figure • Part 3: Solution ### Transcription: Stokes' Theorem, Part 1 Hello and welcome back to educator.com and multivariable calculus.0000 In the last lesson, we discussed the divergence theorem in 3-space. We generalized the flux divergence theorem, a version of Green's Theorem into 3-space.0004 Now we are going to discuss Stoke's Theorem, which in some sense is a generalization, is a... we are going to kick up a circulation curl form of Green's Theorem into 3-space.0012 So, let us go ahead and get started. 
Okay. So, let us start off by recalling the circulation curl form of Green's theorem.0023 So, recall, circulation curl version of Green's theorem. It looked like this.0031 The integral over that of f(c) · c' dt is equal to a, this time we have the curl dy dx, so what this basically says is that the circulation of a given vector field around a closed curve, so again we are talking about a region, that way we are traversing it in a counter-clockwise direction, keeping the region contained to our left.0052 The circulation of a vector field around this curve, as we move it around it from beginning to end, it is equal to the integral of the curl of that vector field over the area enclosed by that curve. That is it.0087 The flux concerned that version, that part of the vector field.0102 The circulation concerns this part of the vector field. To what extent is the vector field actually rotating vs. diverging.0107 That is all this is. Okay. So let us go ahead and draw a picture here. So we have this nice 3-dimensional picture.0119 So, now, what I am going to do. So, this is the x axis, this is the y axis and this is the z axis, so I am just going to draw a little region here, slightly triangular, looks kind of like a pizza.0126 This is our region and we are traversing it in a counter-clockwise direction, so this theorem actually applies to this region.0137 Now, what happens if I actually cut this out, cut out this region, lift it off the xy plane, so that now it is floating in 3-space, in xyz space.0147 So, I take that little piece and if I just deform it and turn it into a surface, here is what it looks like.0159 So, let us now lift this region out of the xy plane, deform it to construct a surface in 3-space.0167 What you get is something that looks like this. So, let me go ahead and draw this and that, and that, make this one a little bit longer, so again, I am not going to label the axes.0205 So what you have is something that looks like that... This goes that way... that way... that way...0218 So what I have done is I have taken this triangular region, which was this triangular region right here, and I have literally lifted it up, I have just peeled it off the page, and I have sort of deformed it, and I have actually turned it into some sort of a surface.0233 Well, on that surface there is a point. This can be parameterized. This is a parameterized surface. Our parameterization is going to be p(t,u).0250 This point on that surface is going to be some... some point, so that is some p. Well, there is a vector field on that surface, right?0262 There is vector fields here, here, here, right? Because a vector field, that is defined in a 3-dimensional region, is... they're are the points on the surface, we can actually apply it to that vector field.0270 So, we have f(p). In other words, the composition. Once we have a point, we can go ahead and take that point and put it into f, so there is some vector emanating from that point on that surface.0283 Well, we can also form the normal vector. That is all that is going on here. Notice, we still have a boundary, and we can still traverse this boundary, this time in 3-space. We are no longer in 2-space.0302 We can still traverse this boundary in the counter clockwise direction, except now, this boundary is not containing a region in 2-space. It is actually the boundary of a surface. That is all that is happening here.0317 So, this surface, is still a 2-dimensional object. The surface is still a 2-dimensional object, because that is what a surface is. 
It is still a 2-dimensional object. The surface has a boundary... and this surface has a boundary.0337 Okay. Now, if we traverse... let me draw the region again actually, just so we have it on this page as well. So, we had some sort of a triangular region, so let us say it's that, it's like that, yea, something like that.0368 If we traverse this boundary again, this boundary again, and this is our surface... If we traverse the boundary in a counter-clockwise direction, keeping the region to our left... clockwise direction... then, Stoke's Theorem says the following.0392 Stoke's Theorem says the integral over the boundary of f(c(t)) · c' dt is equal to... okay, here is where you have to be careful... the curl of f(p) · n dt du.0423 Okay, here is what this says. Remember when we did this surface integrals just in the last couple of lessons, we formed -- let me do this in red -- this surface, the integral of a given vector field over a surface, it was equal to this integral except instead of the curl we just did f(p) · n dt du.0466 Now, what we are doing. Instead of using our vector field, f, we are actually going to take the curl of that vector field.0495 Well, the curl of a vector field in 3-dimensions is another vector field, because you remember the curl of a vector field is a vector.0502 So at any given point, you are going to get another vector. So if I have one vector field, well at all the points on that surface where there is some vector, I can form the curl. So I get the curl vector field.0510 Instead of f, we are just using the curl of f. So, instead of forming f(p), we are forming the curl of f(p). That is it.0522 This is a derived vector field from the original f. That is what the curl is. It is a type of derivative. SO, we are deriving a new vector field from f.0533 What Stoke's Theorem says, it is actually the exact same thing as Green's Theorem, except now instead of a 2-dimensional flat region like this, all I have done is I have a surface in 3-space.0545 That surface still has a boundary. If I calculate the circulation of a vector field around the boundary of a surface in 3-space, it happens to equal the integral of the curl of the vector field over the surface itself.0556 This is a surface integral, this is a line integral. That is all this is. I have just generalized the circulation curl form of Green's Theorem. Now, into 3-dimensional space. I hope that is okay.0575 Now, a couple of things to note. This time, this c(t) in Green's Theorem, we were in R2, we were in 2-dimensional space, so our parameterization for a curve had 2 coordinate functions.0592 But, now, notice, this surface is in 3-space. Well, this boundary is still a curve and we can parameterize it with a curve except now it has 3 coordinate functions because now it is a curve in 3-space.0606 Note. So, now, c(t) is a curve in 3-space, not 2-space. It is a curve in 3-space. So it has 3 coordinate functions.0620 You will see what we mean in a minute when we actually start doing some examples. Okay.0643 f is, or I should say and, okay, and f is now a vector field in 3-space, not 2-space, so f has 3 coordinate functions. So, it has 3 coordinate functions.0648 Interestingly enough, Stoke's Theorem is the most natural generalization yet it is the strangest and most difficult to wrap your mind around, I think, because you are not accustomed to seeing a surface integral with a curl in it. That is what makes Stoke's Theorem different.0681 If you do not completely understand it, do not worry about it. 
Just deal symbolically. You deal symbolically until you actually -- you know -- get your sea legs with this stuff, then it is not a problem.0698 Just, find p, find the curl of f, form the composite function, form the dot product, put it in the integrand, find your domain of a parameterization for the integration, and just solve the integral until you get a feel for it.0707 Normally -- you know -- historically, Stoke's Theorem is the one that gives kids the biggest problem, because it seems to be the most unnatural.0724 Yet strangely enough, it is actually the most natural generalization, all you have done is taken a 2-dimensional region, imagine taking a piece of paper, which has the paper itself, the boundary is the edge, and just bending it like this.0732 When you have bent it, now you have created a surface. Well it is still a surface. It is still a flat 2-dimensional object. There is still a boundary around it, and now there is a surface, and there is a relationship between the integral of something around the boundary, and the integral over the surface.0742 Okay. Now, let us go ahead and just solve some problems. I think that will be the best way to handle this. So example 1.0760 Let p(t,Θ) = tcos(Θ), tsin(Θ), and Θ for t > or = 0, < or = 1, and Θ > or = 0, and < or = pi/2.0779 We will let f of xyz, so you notice... okay... equal to z, x, and y. Okay.0804 our task is to find the circulation of f around the curve. Let us draw this out and see what it looks like.0822 Okay. This parameterization, t goes from 0 to 1, Θ goes from 0 to pi/2. This is the parameterization of a spiral surface.0843 In other words, imagine a parking ramp. It is a spiral surface if I take a... if I am looking from the top, and I take some track like this, and if I split this and I lift it up, what I end up is having a track that goes up like that. A spiral.0856 So, what it actually looks like is it is this surface right here, actually spiraling up. Θ goes from 0 to pi/2.0879 This is y, this is x, so this is just that part of it. Only half of it.0898 t goes from 0 to 1. So, this is a length -- just, 1, 1, that is all it is.0905 Now, here is what is interesting. The boundary actually has 4 parts. There is this curve, c1, there is this curve, c2, there is this curve, which is c3, and there is this curve, which is c4.0915 Therefore, if I actually wanted to solve this. If I wanted to find the circulation of f around this curve, I have to parameterize 4 curves, c1, c2, c3, and c4.0940 I can do that, it is not a problem. I mean I can parameterize these things if I need to. But I do not need to because I have Stoke's Theorem at my disposal.0952 So, I am going to go ahead and solve these surface integrals. The surface is actually easily parameterized. In fact, they gave us the parameterization.0958 You definitely want to use Stoke's Theorem in this case.0967 Circulation... is this.0971 So, c = c1, c2, c3, c4, so it is the integral around c of f(c(t)) · c' dt. That is the circulation integral.0982 c is composed of these four curves. If I go all the way around again, I am traversing it this way. Keeping the region to my left. Well, I do not want to solve this via circulation.1004 So, if we solve the integral, if we solve the line integral directly, we will have to parameterize 4 curves. Parameterize 4 curves... and solve 4 integrals, which we definitely do not want to do.1016 Okay. 
But we have Stoke's Theorem, which says that f(c) · c' dt = so this is the curve, this is the surface, the curl of f(p) · n dt du.1052 Now, let us go ahead and run it through. Let us figure out what each of these is, put it in... okay.1085 So, let us go ahead and find n first. Let me do this in blue. So n, so n, let us go ahead and find dp dt.1095 When I take the partial of the parameterization with respect to t, I get cos(Θ)sin(Θ), and I get 0.1108 When I take the partial of the parameterization with respect to Θ, I get -tsin(Θ), I get tcos(Θ), and 1.1118 Well, n is equal to dp dt cross dp du -- oh, sorry, I already have... I do not need t and u, I need dp dt dp dΘ... There we go.1133 So, it is going to be dp dt cross dp dΘ, and when I go ahead and run that particular one, I end up with the following. I end up with sin(Θ) -cos(Θ), and t. That is n.1151 Now, when I take the curl, so I have taken care of n, now I need curl of f, well, the curl of f, I am going to actually do this one explicitly... i, j, k, d dx, d dy, d dz, of z, x, and y.1174 When I expand along the top row, I get 1, 1, and 1. So, that is the curl of f. Well, I need the curl of f(p). I need that. The curl of f is just... is (1,1,1). It is a constant vector.1202 There is no composition here, so I can just leave it as 1, 1, and 1. Well, the curl of f(p) dotted with the vector n is equal to sin(Θ) - cos(Θ) + t.1235 So, the integral that we are looking for, our circulation is t goes from 0 to 1, Θ goes from 0 to pi/2, our curl of f of p · n is sin(Θ) - cos(Θ) + t, and we did Θ first, so it is dΘ dt, and our final answer is pi/4.1265 There you go. So a circulation of this vector field happens to equal pi/4.1295 It is positive. What this means is that this vector field is actually rotating on that surface. That is what we have done. That is what Stoke's Theorem does.1300 Okay. Thank you for joining us here at Educator.com, for our first part of the discussion of Stoke's Theorem.1312 We will see you next time for the second part of the discussion of Stoke's Theorem. Bye-bye.1317
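As a quick cross-check of Example 1, here is a minimal SymPy sketch (an illustrative aside, using the same data as the lecture: F = (z, x, y) and P(t, Θ) = (t cos Θ, t sin Θ, Θ) with 0 ≤ t ≤ 1 and 0 ≤ Θ ≤ π/2). It recomputes the surface integral of curl F over the spiral surface and reproduces the value π/4.

```python
import sympy as sp

t, th = sp.symbols('t theta', real=True)
x, y, z = sp.symbols('x y z', real=True)

P = sp.Matrix([t*sp.cos(th), t*sp.sin(th), th])   # parameterization P(t, theta)
F = sp.Matrix([z, x, y])                          # vector field F(x, y, z)

# curl F = (dF3/dy - dF2/dz, dF1/dz - dF3/dx, dF2/dx - dF1/dy)
curlF = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])

# Normal vector N = dP/dt x dP/dtheta
N = P.diff(t).cross(P.diff(th))

# curl F composed with P, dotted with N (curl F is constant here, so the substitution is trivial)
integrand = (curlF.subs({x: P[0], y: P[1], z: P[2]}).T * N)[0]

circulation = sp.integrate(integrand, (th, 0, sp.pi/2), (t, 0, 1))
print(sp.simplify(circulation))   # pi/4, matching the lecture
```

The integrand comes out as sin(Θ) - cos(Θ) + t, exactly the dot product worked out in the transcript, and the double integral evaluates to π/4.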
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621223211288452, "perplexity": 2781.843486903304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00442.warc.gz"}
http://mathhelpforum.com/trigonometry/171308-trig-identities-print.html
# Trig. Identities • Feb 14th 2011, 09:06 PM jonnygill Trig. Identities Hello! How do I solve the following equation... $2\cos^2\Theta=2+\sin^2\Theta$ First time using latex. • Feb 14th 2011, 09:08 PM dwsmith Quote: Originally Posted by jonnygill Hello! How do I solve the following equation... $2\cos^2\Theta=2+\sin^2\Theta$ First time using latex. Wasn't paying attention with my help. • Feb 14th 2011, 09:18 PM pickslides Quote: Originally Posted by jonnygill How do I solve the following equation... $2\cos^2\Theta=2+\sin^2\Theta$ By inspection $\displaystyle\Theta =0$ is a solution, or make $\displaystyle \cos^2\Theta = 1-\sin^2\Theta$ to solve it in terms of $\displaystyle\sin\Theta$
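Following pickslides' second suggestion, here is a minimal SymPy sketch of the same computation (an illustrative aside; the by-hand reduction is in the comment): substituting $\cos^2\Theta = 1-\sin^2\Theta$ gives $3\sin^2\Theta = 0$, so $\Theta = k\pi$.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
eq = sp.Eq(2*sp.cos(theta)**2, 2 + sp.sin(theta)**2)

# By hand: 2(1 - sin^2 theta) = 2 + sin^2 theta  =>  3 sin^2 theta = 0  =>  sin theta = 0
solutions = sp.solveset(eq, theta, domain=sp.S.Reals)
print(solutions)   # all integer multiples of pi (SymPy prints this as a union of image sets)
```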
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9167777299880981, "perplexity": 3616.7531689251186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806620.63/warc/CC-MAIN-20171122175744-20171122195744-00766.warc.gz"}
http://clay6.com/qa/50414/the-equation-of-trajectory-of-a-projectile-is-given-by-y-x-large-frac-find-
The equation of trajectory of a projectile is given by $y=x -\large\frac{x^2}{100}$. Find the velocity of projection. $\begin{array}{1 1}(A)\;\frac{1}{100} \\(B)\;\frac{1}{200} \\(C)\;\frac{1}{300}\\(D)\;2 \end{array}$ Comparing the given equation with the standard equation of the trajectory, $y=x \tan \theta_0 -\large\frac{gx^2}{2v_0^2 \cos ^2 \theta _0}$ we find that $\tan \theta_0 =1$ and, equating the coefficient of $x^2$, $\large\frac{g}{2v_0^2 \cos ^2 \theta _0}= \frac{1}{100}$. Hence A is the correct answer. answered Jul 15, 2014
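As a worked follow-up (an illustrative SymPy sketch, not part of the original answer): option (A) is the value of the $x^2$ coefficient found above; if the launch speed itself is required, it follows from that coefficient once a value of $g$ is assumed (here $g = 9.8\ \mathrm{m/s^2}$ is an assumed value), giving $v_0 = \sqrt{100g} \approx 31.3\ \mathrm{m/s}$ at $\theta_0 = \pi/4$.

```python
import sympy as sp

v0 = sp.symbols('v0', positive=True)
g = sp.Rational(98, 10)      # assumed value g = 9.8 m/s^2

theta0 = sp.pi / 4           # from tan(theta0) = 1
# Match the x^2 coefficient: g / (2 v0^2 cos^2 theta0) = 1/100
coeff_eq = sp.Eq(g / (2 * v0**2 * sp.cos(theta0)**2), sp.Rational(1, 100))

speed = sp.solve(coeff_eq, v0)
print(speed, [sp.N(s) for s in speed])   # [14*sqrt(5)] ~ 31.3 m/s
```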
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748864769935608, "perplexity": 502.4346651858549}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865702.43/warc/CC-MAIN-20180523180641-20180523200641-00272.warc.gz"}
http://mathhelpforum.com/advanced-statistics/84264-pmf-print.html
# PMF Printable View • April 18th 2009, 03:37 AM panda* PMF 1. Let X be a random variable in N* = {1,2,....} such that $\forall k \in N^{*} P (X=k) = a3^{-k}$ (i) Determine a such that P is the probability mass function. (ii) Is X more likely to take even or odd values? (iii) If they exist, calculate E(x) and Var(x). 2. Ten phones are linked by only one line to the network. Each phone needs to log on on average 12mm an hour to the network. Calls made by different phones are independent of each other. They can't call simultaneously. (i) What is the probability that k phones (k = 0,1, ... , 10) simultaneously need the line? What is the most likely number of phones requiring the line at one time? (ii) We add lines to the network, allowing simultaneous calls. What is the minimum number c of lines to have so that on average there is no call overloading more than 7 times out of 1000? Thank you! • April 18th 2009, 06:59 AM matheagle I don't know what the star mean, usually it's multiplication, seems to mean such that here. But this looks like a geometric random variable. • April 18th 2009, 08:41 AM Moo Quote: Originally Posted by matheagle I don't know what the star mean, usually it's multiplication, seems to mean such that here. But this looks like a geometric random variable. Quote: N* = {1,2,....} I don't understand what you said >< Anyway, Quote: 1. Let X be a random variable in N* = {1,2,....} such that $\forall k \in N^{*} P (X=k) = a3^{-k}$ (i) Determine a such that P is the probability mass function. The sum of all possible values of the pmf has to be 1. So basically, find a such that $\sum_{k=1}^\infty a3^{-k}=a \sum_{k=1}^\infty 3^{-k}=1$ (this is merely a geometric series (Wink)) Quote: (ii) Is X more likely to take even or odd values? So let's find out the probability that X has even values. This is P(there exists k in N* such that X=2k)=P(X=2*1 OR X=2*2 OR X=2*3 OR ...) This can be written as $\mathbb{P}\left(\bigcup_{k=1}^\infty \{X=2k\}\right)$ And since the events $\{X=2k\}$ are disjoint for different k, this probability is : $\sum_{k=1}^\infty \mathbb{P}(X=2k)=\sum_{k=1}^\infty a3^{-2k}=a \sum_{k=1}^\infty 9^{-k}$ If you get that this probability is 1/2, then it means that the probability that X take odd values is 1/2 too (do you understand why ?) And this would mean that X has equal probability to get even or odd values. Quote: (iii) If they exist, calculate E(x) and Var(x). Do you think they exist ? Prove or disprove that $a \sum_{k=1}^\infty k3^{-k}$ converges
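For question 1, a quick numeric companion to Moo's outline (an illustrative sketch using exact fractions; it relies on the closed forms $\sum_{k\ge1} x^k = \frac{x}{1-x}$ and $\sum_{k\ge1} k x^k = \frac{x}{(1-x)^2}$ for $|x|<1$):

```python
from fractions import Fraction

# P(X = k) = a * 3**(-k) for k = 1, 2, ...
# Normalization: a * sum_{k>=1} 3^(-k) = a * (1/3)/(1 - 1/3) = a/2 = 1, so a = 2
a = Fraction(2)

# P(X even) = a * sum_{k>=1} 9^(-k) = a * (1/9)/(1 - 1/9)
p_even = a * Fraction(1, 9) / (1 - Fraction(1, 9))
p_odd = 1 - p_even
print(p_even, p_odd)   # 1/4 3/4 -> odd values are more likely

# E[X] = a * sum_{k>=1} k 3^(-k) = a * (1/3)/(1 - 1/3)^2, which converges
e_x = a * Fraction(1, 3) / (1 - Fraction(1, 3))**2
print(e_x)             # 3/2
```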
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147779941558838, "perplexity": 870.6104850985447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913039.71/warc/CC-MAIN-20151001221833-00195-ip-10-137-6-227.ec2.internal.warc.gz"}
https://srikanthperinkulam.com/tap/random-variables-properties/
# Random variables and their properties

Properties of PMF:
1. $p(x) \geqslant 0 \;\; \forall x$
2. $\sum_{x \in D} p(x) = 1$

Expected value ($E[X] = \mu$):
1. $E[X] = \sum x_i p(x_i)$
2. $E[f(x)] = \sum f(x_i)p(x_i)$
3. $E[aX+b] = aE[X] + b$

Variance Var[X]:
1. $Var[X] = E[(X-\mu)^2 ] = E[(X-E[X])^2] = \sigma ^2$
2. $Var[X] = E[X^2] - E^2[X] = E[X^2] - \mu^2$
3. $Var[X] = Var[E[X/Y]] + E[Var[X/Y]]$
4. $Var[aX+b] = a^2Var[X]$
5. $Var[aX+bY+c] = a^2Var[X] + b^2Var[Y] + 2abCov[X,Y]$
6. If X and Y are independent random variables: $Var[XY] = Var[X]Var[Y] + Var[X]E^{2}[Y] + Var[Y]E^{2}[X]$ - TBV

If X and Y are independent r.v's:
1. $f_{X,Y}(x,y) = f_X(x) f_Y(y)$
2. $f_{X/Y}(x/y) = f_X(x)$
3. $E[g(X) h(Y)] = E[g(X)]E[h(Y)]$ for any suitably integrable functions g and h.
4. $M_{X+Y}(t) = M_X(t)M_Y(t)$
5. $Cov[X,Y] = 0$
6. $Var(X+Y) = Var(X) + Var(Y) = Var(X-Y)$

Joint distributions:
• Let X and Y be two discrete r.v's with a joint p.m.f $f_{X,Y}(x,y) = P(X = x, Y = y)$. The mass functions $f_{X}(x) = P(X=x) = \sum_{y}f_{X,Y}(x,y)$ and $f_{Y}(y) = P(Y=y) = \sum_{x}f_{X,Y}(x,y)$ are the marginal distributions of X and Y respectively.
• If $f_Y(y) \neq 0$, the conditional p.m.f of X given Y=y is $f_{X/Y}(x/y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}$
• $E[X/Y=y] = \sum_{x}x f_{X/Y}(x/y)$ and more generally $E[g(X)/Y=y] = \sum_{x}g(x)f_{X/Y}(x/y)$
• $Var[X/Y=y] = E[X^2/Y=y] - E^2[X/Y=y]$
• Note that $E[X/Y]$ is a random variable while $E[X/Y=y]$ is a number.
• $E[E[X/Y]] = E[X]$ [applies for any two r.v's X and Y]
• $E[E[g(X)/Y]] = E[g(X)]$

Covariance Cov[X,Y] and correlation coefficient:
1. $Cov[X,Y] = E[XY] - E[X]E[Y]$
2. $Cov[aX+bY+c, dZ+eW+f] = adCov[X,Z] + aeCov[X,W] + bdCov[Y,Z] + beCov[Y,W]$
3. Correlation coefficient: $\rho (X,Y) = \frac{Cov[X,Y]}{\sigma_X\sigma_Y}$
4. $-1 \leq \rho (X,Y) \leq 1$
5. Coefficient of variation: $\frac{\sigma}{\mu}$

Inequalities:
Chebychev's: $Var(X) \geq c^2\, P(\left| X-\mu \right| \geq c)$

Moment functions:
1. $E[X^k]$ = $k^{th}$ moment of X around 0
2. $E[(X-\mu)^k]$ = $k^{th}$ moment of X about the mean $\mu$ = $k^{th}$ central moment of X
3. $E[(\frac{X-\mu}{\sigma})^3]$ = measure of lack of symmetry = skewness
• Positively skewed => $E[(\frac{X-\mu}{\sigma})^3] > 0$ => skewed to the right
• Negatively skewed => $E[(\frac{X-\mu}{\sigma})^3] < 0$ => skewed to the left
• Symmetric distribution => $E[(\frac{X-\mu}{\sigma})^3] = 0$
4. $E[(\frac{X-\mu}{\sigma})^4]$ = measure of peakedness = kurtosis

Moment generating functions:
1. $M(t) = E[e^{Xt}] = M_X(t) = \sum e^{xt}p(x) = \int_{-\infty }^{\infty }e^{xt}f(x)dx$
2. The MGF is a function of t.
3. ${M}'(t) = E[Xe^{Xt}]$
4. $M(0) = 1$
5. ${M}'(0) = E[X]$, ${M}''(0) = E[X^2]$, ...
6. $M^{(n)}(t) = E[X^n e^{Xt}]$
7. If X has an MGF $M_X(t)$ and Y = aX+b, then Y has MGF $M_Y(t) = e^{bt}M_X(at)$

Variance and covariance:
1. $Var(X\pm Y) = Var(X)+Var(Y)\pm 2Cov(X,Y)$
2. $Var(X+Y) = Var(X-Y)$ if X and Y are independent
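A small NumPy sanity check of two of the variance identities above (an illustrative sketch; the constants a = 3, b = -2 and the normal samples are arbitrary choices, and population-style normalization is used throughout so each pair of printed numbers agrees up to floating-point error):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
y = rng.normal(size=200_000)
a, b = 3.0, -2.0

# Var[aX + b] = a^2 Var[X]
print(np.var(a * x + b), a**2 * np.var(x))

# Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]
cov_xy = np.cov(x, y, bias=True)[0, 1]
print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov_xy)
```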
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8455811738967896, "perplexity": 2221.210143274536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00774.warc.gz"}
https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_110B%3A_Physical_Chemistry_II/Text/11%3A_Computational_Quantum_Chemistry/11.0%3A_Overview_of_Quantum_Calculations
# 11.0: Overview of Quantum Calculations

## Multielectron Electronic Wavefunctions

We could symbolically write an approximate two-particle wavefunction as $$\psi (r_1, r_2)$$. This could be, for example, a two-electron wavefunction for helium. To exchange the two particles, we simply substitute the coordinates of particle 1 ($$r_1$$) for the coordinates of particle 2 ($$r_2$$) and vice versa, to get the new wavefunction $$\psi (r_2, r_1)$$. This new wavefunction must have the property that $|\psi (r_2, r_1)|^2 = \psi (r_2, r_1)^*\psi (r_2, r_1) = \psi (r_1, r_2)^* \psi (r_1, r_2) = |\psi (r_1, r_2)|^2 \label{9-38}$ Equation $$\ref{9-38}$$ will be true only if the wavefunctions before and after permutation are related by a factor of $$e^{i\varphi}$$, $\psi (r_2, r_1) = e^{i\varphi} \psi (r_1, r_2)$ so that $\left ( e^{-i\varphi} \psi (r_1, r_2) ^*\right ) \left ( e^{i\varphi} \psi (r_1, r_2) \right ) = \psi (r_1 , r_2 ) ^* \psi (r_1 , r_2) \label{9-40}$

If we exchange or permute two identical particles twice, we are (by definition) back to the original situation. If each permutation changes the wavefunction by $$e^{i \varphi}$$, the double permutation must change the wavefunction by $$e^{i\varphi} e^{i\varphi}$$. Since we then are back to the original state, the effect of the double permutation must equal 1; i.e., $e^{i\varphi} e^{i\varphi} = e^{i 2\varphi} = 1$ which is true only if $$\varphi = 0$$ or an integer multiple of π. The requirement that a double permutation reproduce the original situation limits the acceptable values for $$e^{i\varphi}$$ to either +1 (when $$\varphi = 0$$) or -1 (when $$\varphi = \pi$$). Both possibilities are found in nature, but the behavior of electrons is such that the wavefunction must be antisymmetric with respect to permutation $$(e^{i\varphi} = -1)$$. A wavefunction that is antisymmetric with respect to electron interchange is one whose output changes sign when the electron coordinates are interchanged, as shown below. $\psi (r_2 , r_1) = e^{i\varphi} \psi (r_1, r_2) = - \psi (r_1, r_2)$

Blindly following the first statement of the Pauli Exclusion Principle, that each electron in a multi-electron atom must be described by a different spin-orbital, we try constructing a simple product wavefunction for helium using two different spin-orbitals. Both have the 1s spatial component, but one has spin function $$\alpha$$ and the other has spin function $$\beta$$ so the product wavefunction matches the form of the ground state electron configuration for He, $$1s^2$$. $\psi (\mathbf{r}_1, \mathbf{r}_2 ) = \varphi _{1s\alpha} (\mathbf{r}_1) \varphi _{1s\beta} ( \mathbf{r}_2) \label{8.6.1}$ After permutation of the electrons, this becomes $\psi ( \mathbf{r}_2,\mathbf{r}_1 ) = \varphi _{1s\alpha} ( \mathbf{r}_2) \varphi _{1s\beta} (\mathbf{r}_1) \label{8.6.2}$ which is different from the starting function since $$\varphi _{1s\alpha}$$ and $$\varphi _{1s\beta}$$ are different spin-orbital functions. However, an antisymmetric function must produce the same function multiplied by (-1) after permutation, and that is not the case here. We must try something else.

To avoid getting a totally different function when we permute the electrons, we can make a linear combination of functions. A very simple way of taking a linear combination involves making a new function by simply adding or subtracting functions. The function that is created by subtracting the right-hand side of Equation $$\ref{8.6.2}$$ from the right-hand side of Equation $$\ref{8.6.1}$$ has the desired antisymmetric behavior.
The constant on the right-hand side accounts for the fact that the total wavefunction must be normalized. $\psi (\mathbf{r}_1, \mathbf{r}_2) = \dfrac {1}{\sqrt {2}} [ \varphi _{1s\alpha}(\mathbf{r}_1) \varphi _{1s\beta}( \mathbf{r}_2) - \varphi _{1s\alpha}( \mathbf{r}_2) \varphi _{1s\beta}(\mathbf{r}_1)]$​A linear combination that describes an appropriately antisymmetrized multi-electron wavefunction for any desired orbital configuration is easy to construct for a two-electron system. However, interesting chemical systems usually contain more than two electrons. For these multi-electron systems a relatively simple scheme for constructing an antisymmetric wavefunction from a product of one-electron functions is to write the wavefunction in the form of a determinant. John Slater introduced this idea so the determinant is called a Slater determinant. The Slater determinant for the two-electron wavefunction for the ground state $$H_2$$ system (with the two electrons occupying the $$\sigma_{1s}$$ molecular orbital) $\psi (\mathbf{r}_1, \mathbf{r}_2) = \dfrac {1}{\sqrt {2}} \begin {vmatrix} \sigma_{1s} (1) \alpha (1) & \sigma _{1s} (1) \beta (1) \\ \sigma _{1s} (2) \alpha (2) & \sigma_{1s} (2) \beta (2) \end {vmatrix}$ We can introduce a shorthand notation for the arbitrary spin-orbital $\chi_{i\alpha}(\mathbf{r}) = \varphi_i \alpha$ or $\chi_{i\beta}(\mathbf{r}) = \varphi_i \beta$ as determined by the $$m_s$$ quantum number. A shorthand notation for the determinant in Equation 8.6.4 is then $\psi (\mathbf{r}_1 , \mathbf{r}_2) = 2^{-\frac {1}{2}} Det | \chi_{1s\alpha} (\mathbf{r}_1) \alpha \chi_{1s\beta} ( \mathbf{r}_2) \beta|$ The determinant is written so the electron coordinate changes in going from one row to the next, and the spin orbital changes in going from one column to the next. The advantage of having this recipe is clear if you try to construct an antisymmetric wavefunction that describes the orbital configuration for uranium! Note that the normalization constant is $$(N!)^{-\dfrac {1}{2}}$$ for a system of $$N$$ electrons. The generalized Slater determinant for a multe-electrom atom with N electrons is then $\psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N)=\dfrac{1}{\sqrt{N!}} \left| \begin{matrix} \chi_1(\mathbf{r}_1) \alpha & \chi_1(\mathbf{r}_1) \beta& \cdots & \chi_{N/2}(\mathbf{r}_1) \beta\\ \chi_1(\mathbf{r}_2) \alpha & \chi_2(\mathbf{r}_2)\beta & \cdots & \chi_{N/2}(\mathbf{r}_2)\beta \\ \vdots & \vdots & \ddots & \vdots \\ \chi_1(\mathbf{r}_N) \alpha & \chi_2(\mathbf{r}_N)\beta & \cdots & \chi_{N/2}(\mathbf{r}_N) \beta \end{matrix} \right| \label{slater}$ In a modern ab initio electronic structure calculation on a closed shell molecule, the electronic Hamiltonian is used with a single determinant wavefunction. This wavefunction, $$\Psi$$, is constructed from molecular orbitals, $$\psi$$ that are written as linear combinations of contracted Gaussian basis functions, $$\varphi$$ $\varphi _j = \sum \limits _k c_{jk} \psi _k \label {10.69}$ The contracted Gaussian functions are composed from primitive Gaussian functions to match Slater-type orbitals. The exponential parameters in the STOs are optimized by calculations on small molecules using the nonlinear variational method and then those values are used with other molecules. 
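Before moving on to the energy expression, here is a minimal SymPy sketch (an illustrative aside; `chi_a` and `chi_b` are placeholders for any two distinct spin-orbitals, left as undefined symbolic functions) showing that the two-electron Slater determinant built above really does change sign when the electron coordinates are exchanged.

```python
import sympy as sp

r1, r2 = sp.symbols('r1 r2')
chi_a = sp.Function('chi_a')   # stands in for, e.g., the 1s*alpha spin-orbital
chi_b = sp.Function('chi_b')   # stands in for, e.g., the 1s*beta spin-orbital

def slater2(x1, x2):
    """Normalized 2x2 Slater determinant with electron coordinates x1, x2."""
    M = sp.Matrix([[chi_a(x1), chi_b(x1)],
                   [chi_a(x2), chi_b(x2)]])
    return M.det() / sp.sqrt(2)

psi = slater2(r1, r2)
psi_swapped = slater2(r2, r1)

print(sp.expand(psi))                    # (chi_a(r1)*chi_b(r2) - chi_a(r2)*chi_b(r1))/sqrt(2)
print(sp.simplify(psi + psi_swapped))    # 0, i.e. psi(r2, r1) = -psi(r1, r2)
```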
The problem is to calculate the electronic energy from $E = \dfrac {\int \Psi ^* \hat {H} \Psi d \tau }{\int \Psi ^* \Psi d \tau} \label {10.70}$ or in bra-ket notation $E = \dfrac {\left \langle \Psi |\hat {H} | \Psi \right \rangle}{\left \langle \psi | \psi \right \rangle}$ The the optimum coefficients $$c_{jk}$$ for each molecular orbital in Equation $$\ref{10.69}$$ by using the Self Consistent Field Method and the Linear Variational Method to minimize the energy as was described previously for atoms. The variational principle says an approximate energy is an upper bound to the exact energy, so the lowest energy that we calculate is the most accurate. At some point, the improvements in the energy will be very slight. This limiting energy is the lowest that can be obtained with a single determinant wavefunction (e.g., Equation $$\ref{slater}$$). This limit is called the Hartree-Fock limit, the energy is the Hartree-Fock energy, the molecular orbitals producing this limit are called Hartree-Fock orbitals, and the determinant is the Hartree-Fock wavefunction. Hartree-Fock Calculations You may encounter the terms restricted and unrestricted Hartree-Fock. The above discussion pertains to a restricted HF calculation. In a restricted HF calculation, electrons with $$\alpha$$ spin are restricted or constrained to occupy the same spatial orbitals as electrons with $$\beta$$ spin. This constraint is removed in an unrestricted calculation. For example, the spin orbital for electron 1 could be $$\psi _A (r_1) \alpha (1)$$, and the spin orbital for electron 2 in a molecule could be $$\psi _B (r_2) \beta (2)$$, where both the spatial molecular orbital and the spin function differ for the two electrons. Such spin orbitals are called unrestricted. If both electrons are constrained to have the same spatial orbital, e.g. $$\psi _A (r_1) \alpha (1)$$ and $$\psi _A (r_2) \beta (2)$$, then the spin orbital is said to be restricted. While unrestricted spin orbitals can provide a better description of the electrons, twice as many spatial orbitals are needed, so the demands of the calculation are much higher. Using unrestricted orbitals is particular beneficial when a molecule contains an odd number of electrons because there are more electrons in one spin state than in the other. Example $$\PageIndex{1}$$: Carbon Monoxide It is well known that carbon monoxide is a poison that acts by binding to the iron in hemoglobin and preventing oxygen from binding. As a result, oxygen is not transported by the blood to cells. Which end of carbon monoxide, carbon or oxygen, do you think binds to iron by donating electrons? We all know that oxygen is more electron-rich than carbon (8 vs 6 electrons) and more electronegative. A reasonable answer to this question therefore is oxygen, but experimentally it is carbon that binds to iron. A quantum mechanical calculation done by Winifred M. Huo, published in J. Chem. Phys. 43, 624 (1965), provides an explanation for this counter-intuitive result. The basis set used in the calculation consisted of 10 functions: the ls, 2s, 2px, 2py, and 2pz atomic orbitals of C and O. Ten molecular orbitals (mo’s) were defined as linear combinations of the ten atomic orbitals (Equation $$\ref{10.69}$$. The ground state wavefunction $$\Psi$$ is written as the Slater Determinant of the five lowest energy molecular orbitals $$\psi _k$$. Equation $$\ref{10.70}$$ gives the energy of the ground state, where the denominator accounts for the normalization requirement. 
The coefficients $$C_{kj}$$ in the linear combination are determined by the variational method to minimize the energy. The solution of this problem gives the following equations for the molecular orbitals. Only the largest terms have been retained here. These functions are listed and discussed in order of increasing energy. • $$1s \approx 0.94 1s_o$$. The 1 says this is the first $$\sigma$$ orbital. The $$\sigma$$ says it is symmetric with respect to reflection in the plane of the molecule. The large coefficient, 0.94, means this is essentially the 1s atomic orbital of oxygen. The oxygen 1s orbital should have a lower energy than that of carbon because the positive charge on the oxygen nucleus is greater. • $$2s \approx 0.92 1s_c$$. This orbital is essentially the 1s atomic orbital of carbon. Both the $$1\sigma$$ and $$2 \sigma$$ are “nonbonding” orbitals since they are localized on a particular atom and do not directly determine the charge density between atoms. • $$3s \approx (0.72 2s_o + 0.18 2p_{zo}) + (0.28 2s_c + 0.16 2p_{zc})$$. This orbital is a “bonding” molecular orbital because the electrons are delocalized over C and O in a way that enhances the charge density between the atoms. The 3 means this is the third $$\sigma$$ orbital. This orbital also illustrates the concept of hybridization. One can say the 2s and 2p orbitals on each atom are hybridized and the molecular orbital is formed from these hybrids although the calculation just obtains the linear combination of the four orbitals directly without the à priori introduction of hybridization. In other words, hybridization just falls out of the calculation. The hybridization in this bonding LCAO increases the amplitude of the function in the region of space between the two atoms and decreases it in the region of space outside of the bonding region of the atoms. • $$4s \approx (0.37 2s_c + 0.1 2p_{zc}) + (0.54 2p_{zo} - 0.43 2s_{0})$$. This molecular orbital also can be thought of as being a hybrid formed from atomic orbitals. The hybridization of oxygen atomic orbitals, because of the negative coefficient with 2sO, decreases the electron density between the nuclei and enhances electron density on the side of oxygen facing away from the carbon atom. If we follow how this function varies along the internuclear axis, we see that near carbon the function is positive whereas near oxygen it is negative or possibly small and positive. This change means there must be a node between the two nuclei or at the oxygen nucleus. Because of the node, the electron density between the two nuclei is low so the electrons in this orbital do not serve to shield the two positive nuclei from each other. This orbital therefore is called an “antibonding” molecular orbital and the electrons assigned to it are called antibonding electrons. This orbital is the antibonding partner to the $$3 \sigma$$ orbital. • $$1\pi \approx 0.32 2p_{xc} + 0.44 2p_{xo} \text {and} 2\pi \approx 0.32 2p_{yc} + 0.44 2p_{yo}$$. These two orbitals are degenerate and correspond to bonding orbitals made up from the px and py atomic orbitals from each atom. These orbitals are degenerate because the x and y directions are equivalent in this molecule. $$\pi$$ tells us that these orbitals are antisymmetric with respect to reflection in a plane containing the nuclei. • $$5\sigma \approx 0.38 2_{sC} - 0.38 2_{pC} - 0.29 2p_{zO}$$. This orbital is the sp hybrid of the carbon atomic orbitals. 
The negative coefficient for 2pC puts the largest amplitude on the side of carbon away from oxygen. There is no node between the atoms. We conclude this is a nonbonding orbital with the nonbonding electrons on carbon. This is not a “bonding” orbital because the electron density between the nuclei is lowered by hybridization. It also is not an antibonding orbital because there is no node between the nuclei. When carbon monoxide binds to Fe in hemoglobin, the bond is made between the C and the Fe. This bond involves the donation of the $$5\sigma$$ nonbonding electrons on C to empty d orbitals on Fe. Thus molecular orbital theory allows us to understand why the C end of the molecule is involved in this electron donation when we might naively expect O to be more electron-rich and capable of donating electrons to iron.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8777785301208496, "perplexity": 439.99868335353773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00016.warc.gz"}
http://astarmaths.net/notes/a-level/d2/determination-of-consistency-of-set-of-sentences/
Determination of Consistency of Set of Sentences

Determine if the following set of sentences is consistent or inconsistent: If John committed the murder, then he was in the victim's apartment and did not leave before 11. John was in the victim's apartment. If he left before 11, then the doorman saw him; but it is not the case either that the doorman saw him or that he committed the murder.

Define the statements:
p: John committed the murder.
q: John was in the victim's apartment.
r: John left before 11.
s: The doorman saw him.

The first sentence becomes p ⇒ (q ∧ ¬r) (1)
The second sentence becomes q (2)
The third sentence becomes (r ⇒ s) ∧ ¬(s ∨ p) (3)

We construct a truth table for each of (1) and (3), with columns for p, q, r, s and the compound statements (tables omitted here). Examination of the tables gives that the statements are consistent: (1), (2) and (3) are all true when q is true and p, r and s are all false, and for no other assignment.
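The same conclusion can be reached by brute force. The sketch below (an illustrative Python aside, not the truth-table layout of the note itself) enumerates all sixteen truth assignments for p, q, r, s and keeps the ones satisfying (1), (2) and (3):

```python
from itertools import product

# p: John committed the murder       q: John was in the victim's apartment
# r: John left before 11             s: the doorman saw him
def sentence1(p, q, r, s):
    return (not p) or (q and not r)          # p => (q and not r)

def sentence2(p, q, r, s):
    return q

def sentence3(p, q, r, s):
    return ((not r) or s) and not (s or p)   # (r => s) and not (s or p)

models = [vals for vals in product([True, False], repeat=4)
          if all(f(*vals) for f in (sentence1, sentence2, sentence3))]
print(models)   # [(False, True, False, False)]: consistent, and only that assignment works
```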
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878973126411438, "perplexity": 46.01564143483644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00144.warc.gz"}
http://mathhelpforum.com/differential-geometry/142838-proofs-using-print.html
# Proofs using ε • May 3rd 2010, 11:22 AM renlok Proofs using ε I'm trying to do some revision for my analysis exam and when it comes to proofs where ε is used I dont understand what these proofs mean. Like for example Given $\epsilon > 0, \forall n \in N: \left|{x_n - l}\right| < \epsilon$ which is something to do with the limit of the sequence xn being l why not use $\left|{x_n - l}\right| = 0$? (Headbang) analysis just makes no sense D: • May 3rd 2010, 01:39 PM Plato Quote: Originally Posted by renlok why not use $\left|{x_n - l}\right| = 0$? Because $\left| {\frac{1}{n} - 0} \right| \ne 0,~~\forall n$. • May 3rd 2010, 01:42 PM In fact, all definitions related to limits contain these $\epsilon$ notations. The idea here is that for all $\epsilon>0$, you can still find values of n larger than some N>0 (this should be added to your statement: Not for all n!)and where the difference between $x_n$ and the limit is less than that $\epsilon$. In other words, the sequence is approaching the limit. In fact, you can't just say $\left|{x_n - l}\right| = 0$ because you may never find a value of $x_n$ that is equal to l. For example, if $x_n = \frac{1}{n}$, then we claim that the limit of $x_n$ is zero as n goes to infinity. However, you can't give any number n such that $\frac{1}{n} = 0!$. In fact, for any given $\epsilon$, look at $|\frac{1}{n}-0|=\frac{1}{n}$ which can be made less than $\epsilon$ if you choose sufficiently large n, namely $n>N = \frac{1}{\epsilon}$. This proves that the limit is zero! That is, for any given $\epsilon>0$, you can find a natural N such that if $n>N$, then $|\frac{1}{n}-0|<\epsilon$
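To make the role of $\epsilon$ and $N$ concrete for the example above ($x_n = \frac{1}{n}$ with limit $l = 0$), here is a tiny Python sketch (illustrative only): for each $\epsilon$ it takes $N = \lceil 1/\epsilon \rceil$ and checks that $|1/n - 0| < \epsilon$ for a stretch of $n > N$.

```python
import math

def N_for(eps):
    """A witness N for the limit of 1/n: for every n > N we have |1/n - 0| < eps."""
    return math.ceil(1 / eps)

for eps in (0.5, 0.1, 0.001):
    N = N_for(eps)
    # spot-check the definition for the next thousand values of n beyond N
    assert all(abs(1 / n - 0) < eps for n in range(N + 1, N + 1001))
    print(eps, N)
```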
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886117577552795, "perplexity": 211.10533201281433}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660801.77/warc/CC-MAIN-20160924173740-00064-ip-10-143-35-109.ec2.internal.warc.gz"}
http://www.mathworks.com/matlabcentral/newsreader/view_thread/245842
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi # Thread Subject: USB-6509 in 2008a Subject: USB-6509 in 2008a From: Rob Campbell Date: 4 Mar, 2009 00:01:03 Message: 1 of 1 Hi, I'm using 2008a to control an NI USB-6509 board. As suggested here: http://www.mathworks.com/support/solutions/data/1-68VOCU.html?solution=1-68VOCU I added the following to my \$MATLABROOT\toolbox\daq\daq\private\mwnidaqmx.ini [USB-6509] hasAnalogInput = false hasAnalogOutput = false hasDigitalIO = true digitalPortSize = 8,8,8,8,8,8,8,8,8,8,8,8 digitalPortCap = 3,3,3,3,3,3,3,3,3,3,3,3 I can connect to the device and add all the ports lines (12 ports each with 8 lines). However, things don't work as planned. Right now the main problem is that: If I write to port2/line7 (labeled 1 on the NI port diagram) then this seems to also send the signal to port5/line7 (labeled 2). So it's as though lines 1 and 2 are crossed. This seems to work all the way down so 3 and 4 are also crossed. I don't have a hardware problem since the NI test utility works fine. Also, it looks like higher port numbers don't work at all, although I'm not sure from what point they stop working. Anybody know a fix to this? Thanks! Rob
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8263070583343506, "perplexity": 3455.344389672779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051776/warc/CC-MAIN-20131204131731-00070-ip-10-33-133-15.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/272548/entropy-of-average-of-two-distributions
# Entropy of average of two distributions Let $\mu,\nu$ be two distributions on the same discrete space. Is it true that $$\mathrm{H}\left(\frac{\mu+\nu}{2}\right) \ge \mathbb{E}_{x,y}\left[-\log\left(\frac{\sqrt{\mu(x)\nu(y)}}{2} + \frac{\langle\mu, \nu\rangle}{2}\right)\right]$$ where $x\sim\mu$ and $y\sim\nu$ independently? Here $\mathrm{H}(\rho) = -\sum_{z}\rho(z)\log\rho(z)$ is the Shannon entropy and $\langle\mu,\nu\rangle =\sum_z{\mu(z)\nu(z)}$. Note that if $\langle\mu,\nu\rangle = 0$, then both LHS and RHS are $\frac{\mathrm{H}(\mu)+\mathrm{H}(\nu)}{2}+1$, so the inequality holds true. On the other extreme, if $\mu = \nu$, then by convexity of $z\mapsto-\log z$, we have $$\frac{1}{2}\mathrm{H}(\mu) + \frac{1}{2}\mathrm{H}_2(\mu) \ge \mathrm{RHS}$$ where $\mathrm{H}_2$ is the second-order Rényi entropy, which is never bigger than the Shannon entropy. Since LHS is $\mathrm{H}(\mu)$ in this case, this shows the inequality is again true. I can also verify the inequality when $\mu,\nu$ are flat distributions. I wonder if this holds true for all $\mu, \nu$.

Unfortunately, this is not true in general. Let $\mu=(x,1-x)$ for $x\in[0,1]$ and let $\nu=(1,0)$. Now LHS is certainly at most 1 (assume the logarithms are to the base 2); however, the maximum of the RHS over $x$ is bigger than one, as can be checked numerically (see the sketch below).
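A short NumPy check of the counterexample (an illustrative sketch; logarithms are base 2 as assumed above, and x = 0.1 is just one convenient choice where the gap already shows):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def lhs_rhs(x):
    mu = np.array([x, 1 - x])
    nu = np.array([1.0, 0.0])
    inner = float(mu @ nu)                      # <mu, nu> = x
    lhs = H((mu + nu) / 2)
    # E over X~mu, Y~nu of -log2( sqrt(mu(X) nu(Y))/2 + <mu, nu>/2 )
    rhs = sum(mu[i] * nu[j] * -np.log2(np.sqrt(mu[i] * nu[j]) / 2 + inner / 2)
              for i in range(2) for j in range(2) if mu[i] * nu[j] > 0)
    return lhs, rhs

print(lhs_rhs(0.1))   # roughly (0.993, 1.065): the RHS exceeds the LHS, so the inequality fails
```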
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988531231880188, "perplexity": 96.53629258636454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494424.70/warc/CC-MAIN-20190220024254-20190220050254-00199.warc.gz"}
https://tex.stackexchange.com/questions/636354/using-philex-how-can-i-change-the-spacing-between-subexamples
# Using Philex how can I change the spacing between subexamples?

Please, can someone explain how to patch Philex to customize the spacing between examples, subexamples and subsubexamples, respectively? For example, suppose I want spacing between subexamples but no spacing between subsubexamples, and even more spacing between examples than between subexamples.

MWE:

\documentclass[12pt,oneside]{article}
\usepackage{philex}
\usepackage{etoolbox}
\newlength{\SubItemSkip}
\setlength{\SubItemSkip}{.66\baselineskip}
\makeatletter
\patchcmd{\a}{\itemsep\z@}{\itemsep\SubItemSkip}{}{}
\makeatother
\begin{document}
\lb{tests}{
\lba{inforx}{Possible modification by `in/for \textit{x} time'.}
\lbb{fc}{Possible modification by \textit{frequent}/\textit{constant}.}
\lbb{control}{Possible Implicit Agent Control.}
\lbz{byp}{\textit{By}-phrases and, when available, possessives realize arguments.}}
\lb{examples}{
\lba{inforx-ex}{
\lba{inx-ex}{La destruction de la ville par l'armée en quelques heures nous a stupéfaits.}
\lbz{forx-ex}{Le chef de service a ordonné la surveillance du patient par les internes durant plusieurs jours.}}}
\end{document}

• It would be helpful to show a compilable philex example that shows what you're doing. It seems likely that the solution I gave in the linked answer could be made to work but I know nothing about philex. Mar 8 at 16:09
• In fact, pasting the preamble code in my answer into a philex document seems to do what you want. If that's not what you want then explain what the problem is. Mar 8 at 16:19
• As I said, I'd like to differentiate between examples, subexamples and subsubexamples. I added a MWE. Thank you very much for your help. Mar 8 at 17:16
• I'm sorry but your example doesn't help me understand where you want the extra space. Mar 8 at 17:32
• I don't have a definitive idea, but for example if I want spacing between subexamples but no spacing between subsubexamples, and if I want even more spacing between examples than between subexamples. Mar 8 at 17:37

Here's some code that allows you to set both the \topsep and the \itemsep values for the sub- and subsub-examples. The spacing between examples at the top level is simply a new paragraph, so there's no way to set that spacing; you can add \medskip or \bigskip between examples if you need to. In this sample code, I've set different values for each one; it probably makes sense to have identical \topsep and \itemsep values for any one level, however. Since this code only modifies linguex code, it will work equally well for linguex.
\documentclass{article}
\usepackage{philex}
\usepackage{etoolbox}

\newlength{\lngxtopsepii}
\setlength{\lngxtopsepii}{10pt}% linguex uses .3\Extopsep by default
\newlength{\lngxtopsepiii}
\setlength{\lngxtopsepiii}{5pt}
\newlength{\lngxitemsepii}
\setlength{\lngxitemsepii}{10pt}
\newlength{\lngxitemsepiii}
\setlength{\lngxitemsepiii}{5pt}

\makeatletter
\patchcmd{\a}{\ifnum\theExDepth=2\topsep .3\Extopsep\else\topsep 0pt\fi
    \parsep\z@\itemsep\z@}
  {\ifnum\theExDepth>1\topsep\csname lngxtopsep\roman{ExDepth}\endcsname
    \itemsep\csname lngxitemsep\roman{ExDepth}\endcsname\else\topsep\z@\itemsep\z@\fi
    \parsep\z@}{}{}
\makeatother

\begin{document}
\lbp{clauses}{PP}{Some main words, followed by
\lba{first}{Time flies
\lba{firstnew}{Like an arrow}
\lbz{lastnew}{And much too fast}}
\lbb{second}{But never stops}
\lbz{last}{Which is lucky}
and a concluding comment.}

\medskip

\lbp{clauses}{PP}{Some main words, followed by
\lba{first}{Time flies
\lba{firstnew}{Like an arrow}
\lbz{lastnew}{And much too fast}}
\lbb{second}{But never stops}
\lbz{last}{Which is lucky}
and a concluding comment.}
\end{document}

## Code explanation

The main example command in linguex is the \a macro, which sets up the \list environment that underlies the example. So we use the patching facilities of the etoolbox package to modify the definition of \a. The \patchcmd macro takes the macro to be patched followed by four arguments: the part of the original macro to be replaced, the replacement, and two arguments for "patch successful" and "patch unsuccessful". As long as we know the patch will work then these can safely be left empty. So in the specific example here, I have replaced the linguex code that sets the \topsep and \itemsep of the list with code that sets the values dependent on the level of the list. I've named the lengths used with ii and iii suffixes so that I can construct the value from the list depth value using \roman{ExDepth}.

## Output

• Amazing, you are the best, Alan. Mar 14 at 21:20
• @Vincent I've added some explanation of the code. Mar 14 at 22:19
• Ah ok! So \roman{ExDepth} actually corresponds to the "ii" and "iii" suffixes added at the end of the "\lngxitemsep" and "\lngxtopsep" command names, and this is what lngxitemsep\roman{ExDepth} and lngxtopsep\roman{ExDepth} do, i.e. as arguments of \csname they define the suffixed command names! Which is very flexible since it theoretically allows for deeper levels (within the limits inherent to the package, of course...) Mar 14 at 23:17
• @Vincent Yes, this is correct. Although the kernel supports nesting up to 5 levels, linguex itself only defines labels up to 2 levels, so it's not practically extendable. Mar 15 at 0:56
• And for the space between the label and the sentence, this is: \SubExLeftmargin and \SubSubExleftmargin (thanks to frabjous as he kindly answered here tex.stackexchange.com/a/638271/262813). Mar 24 at 17:09
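To make the \csname indirection in the explanation above concrete, here is a minimal standalone sketch (my own illustration, independent of philex/linguex; the counter and length names simply mimic those in the answer):

\documentclass{article}
% \roman{ExDepth} expands to "ii" or "iii", so the \csname construction below
% selects \lngxtopsepii or \lngxtopsepiii at run time.
\newcounter{ExDepth}
\newlength{\lngxtopsepii}\setlength{\lngxtopsepii}{10pt}
\newlength{\lngxtopsepiii}\setlength{\lngxtopsepiii}{5pt}
\begin{document}
\setcounter{ExDepth}{2}%
At depth \theExDepth\ the selected length is \the\csname lngxtopsep\roman{ExDepth}\endcsname.

\setcounter{ExDepth}{3}%
At depth \theExDepth\ the selected length is \the\csname lngxtopsep\roman{ExDepth}\endcsname.
\end{document}

This prints 10.0pt for depth 2 and 5.0pt for depth 3, which is exactly the dispatch the patched \a macro performs.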
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8923577666282654, "perplexity": 3074.861739088828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00553.warc.gz"}
https://declaredesign.org/r/designlibrary/articles/pretest_posttest.html
Pre-test post-test designs are designs in which researchers estimate the change in outcomes before and after an intervention. These designs are often preferred to post-test-only designs (which simply compare outcomes between control and treatment groups after treatment assignment), because they enable much more efficient estimation and more informed assessment of imbalance. Nevertheless, baseline measurement often comes at a cost: when faced with budget constraints, researchers may be forced to decrease endline sample size in order to facilitate a baseline. Whether it is worth doing so often depends on how well the baseline will predict outcomes at endline. Furthermore, there is much debate about how best to estimate treatment effects in such designs: when are researchers better off using change scores versus conditioning on the baseline? Below we consider the example of a pre-test post-test design applied to a study that seeks to evaluate the effect of a family-planning program on the incidence of teenage pregnancy.

## Design Declaration

• Model: We define a population of size $$N$$, where effects at time $$t = 1$$ (pre-program) and $$t = 2$$ (post-program) are taken from a normal distribution of mean 0 and standard deviation smaller than 1. We assume pre- and post-test outcomes to be highly and positively correlated ($$\rho = 0.5$$). We also expect subjects to leave the study at a rate of 10%, meaning we do not observe post-treatment outcomes for a tenth of the sample.

• Inquiry: We wish to know the average effect of family-planning programs $$Z$$ on rates of teenage pregnancy. Formally: $$E[Y(Z = 1) - Y(Z = 0) \mid t = 2]$$, where $$Z = 1$$ denotes assignment to the program.

• Data strategy: We observe the incidence of teenage pregnancy ($$Y_i$$) for individual $$i$$ for a sample of 100 individuals at time $$t = 1$$ (just prior to treatment) and at time $$t = 2$$ (a year after treatment). We randomly assign 50 out of 100 women between the ages of 15 and 19 to receive treatment. We define three estimators. First, we estimate effects on the "change score": the dependent variable is defined as the difference between observed post- and pre-treatment outcomes. The second estimator treats only the post-treatment outcome as the dependent variable, but conditions on the pre-treatment outcome on the right-hand side of the regression. Finally, we also look at effects when we only use post-test outcome measures, so as to evaluate the gain from using a baseline.
N <- 100
ate <- 0.25
sd_1 <- 1
sd_2 <- 1
rho <- 0.5
attrition_rate <- 0.1

population <- declare_population(
  N = N,
  u_t1 = rnorm(N) * sd_1,
  u_t2 = rnorm(N, rho * scale(u_t1), sqrt(1 - rho^2)) * sd_2,
  Y_t1 = u_t1
)
potential_outcomes <- declare_potential_outcomes(Y_t2 ~ u_t2 + ate * Z)
estimand <- declare_estimand(ATE = mean(Y_t2_Z_1 - Y_t2_Z_0))
assignment <- declare_assignment()
report <- declare_assignment(prob = 1 - attrition_rate, assignment_variable = R)
reveal_t2 <- declare_reveal(Y_t2)
manipulation <- declare_step(difference = (Y_t2 - Y_t1), handler = fabricate)
pretest_lhs <- declare_estimator(difference ~ Z, model = lm_robust, estimand = estimand,
                                 subset = R == 1, label = "Change score")
pretest_rhs <- declare_estimator(Y_t2 ~ Z + Y_t1, model = lm_robust, estimand = estimand,
                                 subset = R == 1, label = "Condition on pretest")
posttest_only <- declare_estimator(Y_t2 ~ Z, model = lm_robust, estimand = estimand,
                                   label = "Posttest only")

pretest_posttest_design <- population + potential_outcomes + estimand + assignment +
  reveal_t2 + report + manipulation + pretest_lhs + pretest_rhs + posttest_only

## Takeaways

diagnosis <- diagnose_design(pretest_posttest_design, sims = 25)

## Warning: We recommend you choose a higher number of simulations than 25 for the
## top level of simulation.

| Estimator Label | N Sims | Bias | RMSE | Power | Coverage | Mean Estimate | SD Estimate | Mean Se | Type S Rate | Mean Estimand |
|---|---|---|---|---|---|---|---|---|---|---|
| Change score | 25 | 0.01 (0.03) | 0.14 (0.02) | 0.08 (0.05) | 1.00 (0.00) | 0.26 (0.03) | 0.15 (0.02) | 0.21 (0.00) | 0.00 (NA) | 0.25 (0.00) |
| Condition on pretest | 25 | -0.01 (0.02) | 0.13 (0.01) | 0.20 (0.08) | 1.00 (0.00) | 0.24 (0.02) | 0.13 (0.01) | 0.18 (0.00) | 0.00 (0.00) | 0.25 (0.00) |
| Posttest only | 25 | -0.02 (0.03) | 0.16 (0.02) | 0.16 (0.08) | 1.00 (0.00) | 0.23 (0.03) | 0.16 (0.02) | 0.20 (0.00) | 0.00 (NA) | 0.25 (0.00) |

(Standard errors in parentheses.)

• We see that the change score approach is less powerful than even the naive estimator! That's because it essentially sums the variances from both periods. Any time invariant noise is being compounded by summing.
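To see the variance point quantitatively (a back-of-the-envelope check using the declared parameters $$\sigma_1 = \sigma_2 = 1$$ and $$\rho = 0.5$$; not part of the original vignette): the noise each estimator faces in the untreated outcome is

$$\mathrm{Var}(Y_{t2} - Y_{t1}) = \sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2 = 1, \qquad \mathrm{Var}(Y_{t2}) = 1, \qquad \mathrm{Var}(Y_{t2} \mid Y_{t1}) = \sigma_2^2(1-\rho^2) = 0.75,$$

so at $$\rho = 0.5$$ the change score gains nothing over the post-test-only comparison, while conditioning on the pre-test strictly reduces the residual variance; the change score only beats the post-test-only estimator when $$\rho > 0.5$$.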
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831674337387085, "perplexity": 3515.9760454317575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00059.warc.gz"}
https://asmedigitalcollection.asme.org/ESDA/proceedings-abstract/ESDA2010/49156/351/349944
Owing to their high mechanical characteristics (strength, hardness and impact resistance), stainless steels remain materials that are not easily replaced. They are used in demanding fields such as nuclear power and the storage of chemical products. This work presents the fatigue crack growth of austenitic stainless steel 316L under constant-amplitude loading. A double through-crack-at-hole specimen is used, and the influence of the hole dimension, the maximum load amplitude and the stress ratio is studied. The NASGRO model was used to predict the fatigue crack growth. The effect of the stress ratio is highlighted: a shift of the crack growth curves is observed. Increasing the hole dimension and the maximum load amplitude decreases the fatigue life. Comparing the different situations makes it possible to select an optimized hole. This content is only available via PDF.
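For reference, the NASGRO crack-growth relation mentioned in the abstract is commonly written in the following standard form (quoted for context only; the paper's specific calibration and parameter values are not given in the abstract):

$$\frac{da}{dN} = C\left[\left(\frac{1-f}{1-R}\right)\Delta K\right]^{n}\,\frac{\left(1 - \Delta K_{th}/\Delta K\right)^{p}}{\left(1 - K_{\max}/K_{crit}\right)^{q}},$$

where $R$ is the stress ratio, $f$ the crack-opening function, $\Delta K_{th}$ the threshold stress-intensity range, and $C$, $n$, $p$, $q$ empirical constants.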
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.816428542137146, "perplexity": 1554.9810965806228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00146.warc.gz"}
http://math.stackexchange.com/users/25159/matt-pressland?tab=activity&sort=suggestions
# Matt Pressland less info reputation 1642 bio website people.bath.ac.uk/mdp33 location Bath, United Kingdom age 24 member for 2 years, 5 months seen 9 hours ago profile views 802 Postgraduate student at the University of Bath, UK, studying geometry and representation theory. Currently thinking about cluster algebras and related objects. # 24 Suggestions Jun20 suggested suggested edit on Transition functions of toric projective bundle (Proposition in [Cox, Toric Varieties]) Jun20 suggested suggested edit on Quantile and percentile terminology Jun19 suggested suggested edit on How can I determine these 3 variables? Jun15 suggested suggested edit on Solve Falkner Skan Numerically? Jun13 suggested suggested edit on Definition of neighborhood and open set in topology Jun11 suggested suggested edit on Solving an equation with precondition Jun8 suggested suggested edit on Simplify this expression with radical signs Jun8 suggested suggested edit on Pure braid group, stabilizer Jun6 suggested suggested edit on Combinatorics: Distributing different toys Jun6 suggested suggested edit on Combinatorics: Selecting objects arranged in a circle Jun4 suggested suggested edit on Find the largest value of $d$ of the form $d = A\varepsilon^b$ such that the following statement is correct: May29 suggested suggested edit on Pseudocompactness in the $m$-topology May29 suggested suggested edit on How to show that the identity is in closure of $O' \subset C(\mathbb R)$? May24 suggested suggested edit on Question about proving something using reverse Fatou's lemma May24 suggested suggested edit on Cyclotomic field in proof of quadratic reciprocity May24 suggested suggested edit on How to solve an equation involving the whole part of a number? May23 suggested suggested edit on Interior of a Subspace May23 suggested suggested edit on Please correct my answer (Probability) May22 suggested suggested edit on Height of ideal in graded ring May21 suggested suggested edit on Non-identity Orthogonal Transformation
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255180716514587, "perplexity": 3827.9980440333284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00063-ip-10-33-131-23.ec2.internal.warc.gz"}
https://olivierlaligant.fr/noise-estimation/
# Noise estimation

A noisy image and noise measure images

THE NOISE ESTIMATOR NOLSE
A METHOD INHERITED FROM THE NON-LINEAR DERIVATIVE

>>> MATLAB Code <<<

• a noise estimator based on the step signal model
• uses polarized/directional derivatives and a nonlinear combination of these derivatives to estimate the noise distribution (Gaussian, Poisson, speckle, etc.)
• does not rely only on the smallest amplitudes in the signal or image
• efficient for any distribution of noise: does not make assumptions regarding the distribution of the noise and is able to measure additive noise as well as multiplicative noise, salt-and-pepper noise, etc.
• the moments of this measured distribution can be computed and are also calculated theoretically on the basis of noise distribution models
• the asymmetry of the noise distribution can be accurately estimated if the noise distribution is not dense, as in the case of salt-and-pepper noise
• low complexity
• limitations: does not completely eliminate the edge points in 2D because it is sensitive to fine textures (1D pixel width); the pdf is not easy to calculate for any noise distribution (however, numerical solutions can be found)

Ref: O. Laligant, F. Truchetet and E. Fauvet, "Noise Estimation From Digital Step-Model Signal," in IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5158-5167, Dec. 2013.

How?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965932309627533, "perplexity": 1538.5906404653394}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00390.warc.gz"}
http://math.stackexchange.com/questions/101944/at-which-points-is-this-function-continuous
# At which points is this function continuous? At which points is the following function continuous? $$\begin{eqnarray*} f(x) = \begin{cases} 5x, &\text{if }x \in\mathbb Q, \\ x^2-6, &\text{if }x \notin\mathbb Q. \end{cases} \end{eqnarray*}$$ - I've done nothing because I have no idea where to start from. –  Gigili Jan 24 '12 at 11:00 Is this homework? what have you tried? –  Dennis Gulko Jan 24 '12 at 11:01 $Q$ is the set of rational number? –  Paul Jan 24 '12 at 11:01 When is $5x = x^2-6$? –  lhf Jan 24 '12 at 11:01 @Gigili, no, -1 and 6. –  lhf Jan 24 '12 at 11:12 Consider any point $x\in\mathbb{R}$ and assume that $f$ is continuous at $x$. You can find two sequences $\{a_n\}\subset\mathbb{Q}$ and $\{b_n\}\subset\mathbb{R}\setminus\mathbb{Q}$ such that $\lim a_n=\lim b_n=x$ (do you know why they exist and/or how to find those?). Now use the Heine property for continuity to say that $$5x=\lim 5a_n=\lim f(a_n)=f(\lim a_n)=f(\lim b_n)=\lim f(b_n)=\lim b_n^2-6=x^2-6$$ Now you can find $x$. And having found $x$ you must still prove that the function is continuous at $x$, since using just two sequences won't prove it. –  Marc van Leeuwen Jan 24 '12 at 11:18
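Completing the computation suggested in the comments (the candidate points are where the two branch formulas agree): $5x = x^2-6$ gives $$x^2-5x-6=0 \iff (x-6)(x+1)=0 \iff x\in\{-1,\,6\},$$ and the sequence argument above shows these are the only points at which $f$ can be continuous. Continuity does hold at $-1$ and $6$, since for any sequence $x_n\to x$ the values $f(x_n)$ lie in $\{5x_n,\,x_n^2-6\}$ and both expressions converge to the common value $5x=x^2-6$.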
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039551615715027, "perplexity": 435.5211847854011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929656.4/warc/CC-MAIN-20150521113209-00326-ip-10-180-206-219.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/273088/second-order-differential-equation-frobenius-method/273152
Second-Order Differential Equation--Frobenius Method I'm studying for a qualifying examination and am stuck on the following question. Could anyone give me some help? $$y''+\frac{\sin x}{x}y'+\frac{2\cos(x+x^2)-\frac{2}{(x-1)^2}+4x}{x^2}y=0$$ Find all singular points of the equation and classify them as regular/irregular. Then find the first term in a series in powers of $x-1$ for each of two linearly independent solutions as $x\rightarrow1$. I think the singular points are 0 and 1, and they are both regular. But I am having some trouble with the series solution. - Do you know how the ansatz for the series near a regular point looks like? – Fabian Jan 8 '13 at 21:46 By the way: 0 is not a singular point... – Fabian Jan 8 '13 at 21:47 How is 0 not a singular point? And the ansatz is $y(x)=\sum_{n=0}^{\infty}a_n x^n.$ – Alex Jan 8 '13 at 21:50 $\lim_{x \rightarrow 0} \sin{x}/x = 1$ and $\lim_{x \rightarrow 0} [2 \cos{(x +x^2)} - 2 (x-1)^{-2} + 4 x]/x^2 = -7$. Because the singularities of the coefficients of the diff eq'n are removable at $x=0$, there is no singularity of the solution at $x=0$. – Ron Gordon Jan 8 '13 at 22:39 $x=0$ is an ordinary point as the singularities of the coefficients are removable. There is a regular singular point at $x=1$. To analyze this point, you should approximate the ODE near this point (expand the coefficients in $z=x-1$): $$y'' + O(1) y' + \left[-\frac{2}{(x-1)^2} + O(x-1)^{-1}\right] y =0 .$$ Now you know that there is at least one solution has the form ($c_0\neq0$) $$y(z)= z^{\alpha} \sum_n c_n z^n.$$ We can determine $\alpha$ by plugging this ansatz in the ODE: to lowest order $$\alpha(\alpha-1) z^{\alpha-2} c_0 - \frac{2}{z^2} c_0 z^\alpha=0$$ which is valid when $\alpha (\alpha-1) =2$. The solutions are $$\alpha_1 =-1, \text{ and } \alpha_2 = 2.$$ So the first term for one solution is $$y_1(x) = c_0 (x-1)^2.$$ What happened to the $x^2$ in the denominator of the "y" term? Also, can you briefly explain how you got to the "approximated differential equation? Thanks! – Alex Jan 9 '13 at 21:16 @Ales: just Taylor expansion around $x=1$... – Fabian Jan 9 '13 at 22:20
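For completeness, the indicial equation obtained above factors as $$\alpha(\alpha-1)-2=0 \iff (\alpha-2)(\alpha+1)=0,$$ confirming the roots $\alpha_1=-1$ and $\alpha_2=2$ (a small added step consistent with the answer). Near $x=1$ one solution therefore starts as $c_0(x-1)^2$, while a second, linearly independent solution has leading behaviour proportional to $(x-1)^{-1}$, possibly accompanied by a logarithmic term since the exponents differ by an integer.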
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9073273539543152, "perplexity": 248.67311678466135}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146196.88/warc/CC-MAIN-20160205193906-00304-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/help-understanding-metric-tensor.970590/
# I Help Understanding Metric Tensor #### dsaun777 13 0 I am trying to get an intuition of what a metric is. I understand the metric tensor has many functions and is fundamental to Relativity. I can understand the meaning of the flat space Minkowski metric ημν, but gμν isn't clear to me. The Minkowski metric has a trace -1,1,1,1 with the rest being zero, but how do I "find" what the metric is in General Relativity? I know that the metric is spatially dependent and locally flat, but how exactly does it depend on spacetime? I heard sources say that the metric is simply given or stated for the type of curved coordinates that you happen to be working in. From the metric derivatives you can get the Christoffels, from the derivatives of the Christoffels you can get the Riemann and from the Riemann you can get the Ricci curvature...etc. But how do you get the metric to begin with? I know it's related to the Newtonian potential for the 00 component, 1/2 g00 ⇒ Φ. I am sort of lost when it comes to understanding the other components. #### kent davidge 692 26 I am trying to get an intuition of what a metric is. I understand the metric tensor... It's what you said, a tensor. I can understand the meaning of the flat space Minkowski metric ημν, but gμν isn't clear to me The difference is that the Minkowski metric $\eta_{\mu \nu}$ might be used to describe Minkowski spacetime, while a general metric $g_{\mu \nu}$ is used to describe a more general spacetime. how do I "find" what the metric is in General Relativity? This is something tensors are good at. You already know the Minkowski metric. On a general spacetime, "general" coordinates must be used. If you know how they are related to the "local" coordinates, you will automatically know the general metric, via $$g_{\mu \nu} = \frac{\partial \xi^\alpha}{\partial x^\mu} \frac{\partial \xi^\beta}{\partial x^\nu} \eta_{\alpha \beta}$$ You can always do that because of the Equivalence Principle. I know that the metric is spatially dependent and locally flat, but how exactly does it depend on spacetime? The metric is locally flat by the Equivalence Principle. The explicit form of its dependence on the coordinates will depend on your choice of coordinate system, which in turn is dictated by the spacetime you are working with. For example, Minkowski spacetime allows you to choose coordinates such that the metric doesn't depend at all on either space or time coordinates. but how do you get the metric to begin with? I know it's related to the Newtonian potential for the 00 component, 1/2 g00 ⇒ Φ. I am sort of lost when it comes to understanding the other components. This is valid only for the situation where you assume three things: - that the gravitational field is weak - that matter is moving at non-relativistic velocities - that the gravitational field is static I believe that the other (non-vanishing) components of the metric $g_{ik}$ are not completely fixed by this approach because you can always perform Lorentz rotations or boosts which will still keep the three assumptions invariant. #### Nugatory Mentor 11,957 4,467 The metric is locally flat by the Equivalence Principle. It's the other way around. Because the spacetime is locally flat, the equivalence principle works. #### kent davidge 692 26 By the way, don't let yourself get confused by symbols like $g_{\mu \nu}$ and $\eta_{\alpha \beta}$ for the metric coefficients, or $x^\mu$ and $\xi^\alpha$ for the coordinates.
When I started studying Relativity, I had a hard time understanding the concepts just because of this notational mess. It doesn't matter at all what symbols you use as long as you keep track of what you are doing, you are free to use whatever symbols you want. #### kent davidge 692 26 It's the other way around. Because the spacetime is locally flat, the equivalence principle works. I like to take the Equivalence Principle as the dictator of how spacetime should behave. Do you think there's a problem with that? #### dsaun777 13 0 It's what you said, a tensor. The difference is that the Minkowski metric $\eta_{\mu \nu}$ might be used to describe Minkowski spacetime, while a general metric $g_{\mu \nu}$ is used to describe a more general spacetime. This is something tensors are good at. You already know the Minkowski metric. On a general spacetime, "general" coordinates must be used. If you know how they are related to the "local" coordinates, you will automatically know the general metric, via $$g_{\mu \nu} = \frac{\partial \xi^\alpha}{\partial x^\mu} \frac{\partial \xi^\beta}{\partial x^\nu} \eta_{\alpha \beta}$$ You can always do that because of the Equivalence Principle. The metric is locally flat by the Equivalence Principle. The explicit form of its dependence on the coordinates will depend on your choice of coordinate system, which in turn is dictated by the spacetime you are working with. For example, Minkowski spacetime allows you to choose coordinates such that the metric doesn't depend at all on either space or time coordinates. This is valid only for the situation where you assume three things: - that the gravitational field is weak - that matter is moving in non relativistic velocities - that the gravitational field is static I believe that the other (non-vanishing) components of the metric $g_{ik}$ are not completely fixed by this approach because you can always perform Lorentz rotations or boosts which will still keep the three assumptions invariant. So the metric in flat space can be transformed into the metric in GR using the property of tensor transformation? Another way to view the metric is just that gij=eiej where ei and ej are the basis vectors of your coordinate system correct? Pardon my symbolic illiteracy I'm fairly new. So in your equation you're changing reference frame with respect to another reference frame. Can you choose any coordinate basis you want as long as they transform properly? #### Nugatory Mentor 11,957 4,467 So the metric in flat space can be transformed into the metric in GR using the property of tensor transformation? No. The metric of flat space can be transformed from one coordinate system to another (for example, from Cartesian coordinates to polar coordinates) but it's still the same metric tensor describing the same flat spacetime, just written in a different way. (It's also just as much "the metric of GR" as is the metric for the curved spacetime around a star; both are solutions to the Einstein field equations, just for different configurations of matter). The metric tensor takes as input the coordinates of two nearby points and outputs the distance between them. Every possible shape of spacetime has a particular metric tensor associated with it; in fact, the metric tensor defines the spacetime and vice versa. Coordinate transformations don't change the spacetime or the relationship between points in it, they just change the way we label these points. Can you choose any coordinate basis you want as long as they transform properly? 
Yes, and that's the power and beauty of tensor methods: they don't care what coordinates we use. Just be careful not to confuse the same metric written in different coordinates with two different metrics. For example, I can write the metric of a flat two-dimensional plane using either polar or Cartesian coordinates; but no coordinate transformation will turn that metric into the fundamentally different metric of the two-dimensional surface of a sphere. #### Nugatory Mentor 11,957 4,467 I like to take the Equivalence Principle as the dictator of how spacetime should behave. Do you think there's a problem with that? No inherent problem.... but the equivalence principle is an approximation that will always fail if examined closely enough so is a poor candidate for dictator. We've already started with the postulate that spacetime is a pseudo-Riemannian manifold; this implies local flatness, which in turn justifies the equivalence principle as an approximation. There's no natural way of running the logic in the other direction. Of course the equivalence principle is a powerful motivation for choosing the postulate in the first place, and that's how the historical development went. #### PeroK Homework Helper Gold Member 2018 Award 9,072 3,278 I am trying to get an intuition of what a metric is. I understand the metric tensor has many functions and is fundamental to Relativity. I can understand the meaning of the flat space Minkowski metric ημν, but gμν isn't clear to me. The Minkowski metric has a trace -1,1,1,1 with the rest being zero but how do I "find" what the metric is in General Relativity? I know that the metric is spatially dependent and locally flat, but how exactly does it depend on spacetime? I heard sources say that the metric is simply given or stated for the type of curved coordinates that you happen to be working in. From the metric derivatives you can get the Christoffels, from the derivatives of the Christoffels you can get the Riemann and from the Riemann you can get he Ricci curvature...etc. but how do you get the metric to begin with? I know its related to the Newtonian potential for the 00 components 1/2g00⇒Φ I am sort of lost when it comes to understanding the other components. Are you studying GR in your master's? That information would help a lot in terms of the replies you might get. A metric is a "distance function". It tells you the distance between any two points by specifying the distance for infinitesimal changes in the coordinates. You can get the distance between any two points, along a given path, by integrating along this path. In this way the metric actually defines the geometry. This is the beauty of differential geometry: all the information you need about the geometry of whatever you are studying - in this case spacetime - is encapsulated in that one "differential" equation. Now, for every choice of coordinates your metric will take a different form. But, it's still the same metric, just expressed in a different form dependent on your choice of coordinates. Take any two points in spacetime and the same path between them and the distance must be the same regardless of the coordinates you chose. The metric is a tensor: it exists independent of any coordinate system and its components transform accordingly when you move from one coordinate system to another. Note that it is not always easy to tell when two metrics are the same, i.e. describe the same spacetime geometry and are related by a coordinate transformation. 
You can have a very unflat looking metric that, with a change of coordinates, transforms into the usual metric for flat spacetime. The analogy is that Euclidean 3D space is the same space in Cartesian and spherical coordinates, even though the metric looks very different in each case. If you have a region of spacetime, how do you determine the metric? You could go out and measure everything. You label enough points in your spacetime and measure the distance between any two neighbouring points. That would directly determine your metric. (There is a story that Gauss actually wanted to try this: to measure the distance bewteen mountain tops to see whether space is actually Euclidean. Obviously he would have only got one measurement and he would have had no idea that the time coordinate might be relevant.) Alternatively, you could study the stress-energy sources in your spacetime and write down the Einstein Field equations (EFE) and solve them! That's what Schwarzschild did in 1916 for the simple case of a spherically symmetric mass surrounded by vacuum. Because the EFE are non-linear they are very difficult to solve. In this way GR is much harder than Electromagnetism in that there is not a wide range of scenarios available to be studied. As the equations admit far fewer available solutions. That's why you are often simply given a metric to work with. #### pervect Staff Emeritus 9,373 728 I am trying to get an intuition of what a metric is. I understand the metric tensor has many functions and is fundamental to Relativity. I can understand the meaning of the flat space Minkowski metric ημν, but gμν isn't clear to me. The Minkowski metric has a trace -1,1,1,1 with the rest being zero but how do I "find" what the metric is in General Relativity? I know that the metric is spatially dependent and locally flat, but how exactly does it depend on spacetime? I heard sources say that the metric is simply given or stated for the type of curved coordinates that you happen to be working in. From the metric derivatives you can get the Christoffels, from the derivatives of the Christoffels you can get the Riemann and from the Riemann you can get he Ricci curvature...etc. but how do you get the metric to begin with? I know its related to the Newtonian potential for the 00 components 1/2g00⇒Φ I am sort of lost when it comes to understanding the other components. Let's talk about a 2 dimensional flat space for the moment, a plane. Then the metric (more precisely the line element) in cartesian coordinates (x,y) is $dx^2 + dy^2$. And the metric in polar coordinates $r, \theta$ is $dr^2 + r^2\,d\theta^2$. The general philosophy is that a plane is a plane, a geometric entity regardless of whether one uses cartesian coordinates or polar coordinates. We can go further, and say that the coordinates are just lablels, the labels for cartesian coordinates $x,y$ are just different labes than the labels for polar coordinates $r, \theta$. Sometimes I get pushback on this simple statement. I regard it as pretty basic, but I suppose if there is pushback on the idea, it's better if you (the original poster or reader) have some doubts, it's best if you think about it, and if you still don't agree, then post something. I can move on a bit to something that is not a plane, but the two-dimensional surface of a sphere, which is a different geometric enity. But I'm out of time, and it's probably best not to try and do to much. 
Adding more dimensions, including time, is another topic I'll skip for the moment that might be of interest. #### dsaun777 13 0 Are you studying GR in your master's? That information would help a lot in terms of the replies you might get. A metric is a "distance function". It tells you the distance between any two points by specifying the distance for infinitesimal changes in the coordinates. You can get the distance between any two points, along a given path, by integrating along this path. In this way the metric actually defines the geometry. This is the beauty of differential geometry: all the information you need about the geometry of whatever you are studying - in this case spacetime - is encapsulated in that one "differential" equation. Now, for every choice of coordinates your metric will take a different form. But, it's still the same metric, just expressed in a different form dependent on your choice of coordinates. Take any two points in spacetime and the same path between them and the distance must be the same regardless of the coordinates you chose. The metric is a tensor: it exists independent of any coordinate system and its components transform accordingly when you move from one coordinate system to another. Note that it is not always easy to tell when two metrics are the same, i.e. describe the same spacetime geometry and are related by a coordinate transformation. You can have a very unflat looking metric that, with a change of coordinates, transforms into the usual metric for flat spacetime. The analogy is that Euclidean 3D space is the same space in Cartesian and spherical coordinates, even though the metric looks very different in each case. If you have a region of spacetime, how do you determine the metric? You could go out and measure everything. You label enough points in your spacetime and measure the distance between any two neighbouring points. That would directly determine your metric. (There is a story that Gauss actually wanted to try this: to measure the distance bewteen mountain tops to see whether space is actually Euclidean. Obviously he would have only got one measurement and he would have had no idea that the time coordinate might be relevant.) Alternatively, you could study the stress-energy sources in your spacetime and write down the Einstein Field equations (EFE) and solve them! That's what Schwarzschild did in 1916 for the simple case of a spherically symmetric mass surrounded by vacuum. Because the EFE are non-linear they are very difficult to solve. In this way GR is much harder than Electromagnetism in that there is not a wide range of scenarios available to be studied. As the equations admit far fewer available solutions. That's why you are often simply given a metric to work with. I have BS in mathematics and I am studying relativity independently from Wisner and Thorne's massive book Gravitation. I took up to Special relativity at university So GR is pushing my comfort level of physics and mathematics, which is always a good thing! So I could actually go out and with high precision instruments, say a gyroscopic light measuring device, and measure "points" in a given spacetime and construct a suitable metric? #### Nugatory Mentor 11,957 4,467 So I could actually go out and with high precision instruments, say a gyroscopic light measuring device, and measure "points" in a given spacetime? You cannot measure points, but you can measure the distance between them, and that's basically what the metric is telling you. 
So in principle you could check your measurements against calculations using the metric to see whether you in fact have the right metric for the spacetime we live in. For example, very precise measurements could in principle tell you that the distance between the point "10 meters east of this stake in the ground" and "ten meters north of this stake in the ground" is not quite exactly $10\sqrt{2}$ meters (and the interior angles of the triangle do not add up to exactly 180 degrees) and therefore the surface of the earth is not a flat Euclidean plane - $ds^2=dx^2+dy^2$ is not giving exactly the right relationship between the lengths. An added complication with spacetime is that time is also a coordinate, so a measurement like the one I described above requires that the distance measurements are between the same points at the same time. However, "at the same time" just means "has the same time coordinate" so any attempt to analyze the spatial geometry carries an implicit assumption about how we're assigning time coordinates; that's what @pervect was getting at when he said that Gauss wasn't aware of this pitfall. (In the example above I glossed over this pitfall by choosing a coordinate system in which the spatial coordinates of my three points (stake, ten meters north, ten meters east) do not change with time). Do I have to make my metric dimensionless? You can save yourself much grief by setting $c=1$. This is tantamount to measuring time in seconds and distances in light-seconds, or measuring distance in meters and time in units of the amount of time it takes for light to move one meter. I am studying relativity independently from Wisner and Thorne's massive book Gravitation. That's an excellent text, but the initial learning curve is a bit steep. You might want to take a quick swing through Sean Carroll's no-nonsense intro and then go back to MTW; it's a lot easier to follow MTW after you've seen the quick overview. #### pervect Staff Emeritus 9,373 728 I have BS in mathematics and I am studying relativity independently from Wisner and Thorne's massive book Gravitation. I took up to Special relativity at university So GR is pushing my comfort level of physics and mathematics, which is always a good thing! So I could actually go out and with high precision instruments, say a gyroscopic light measuring device, and measure "points" in a given spacetime and construct a suitable metric? For a spatial metric, all you need to be able to do to construct a metric is to be able to measure the distance between any two nearby points, with some sort of mechanism that I'll call "a ruler". You don't need a gyroscope, as we are not yet dealing with time - you just some way of measuring distances. You need to have an appropriate set of labels for these points, and they have to be "smooth". With a BS in math, you may (or may not?) appreciate that you can define a 1:1 invertible mapping between points on a line and points on a plane. This sort of labelling is not "smooth" in the needed sense, you can't construct a metric if you use this sort of non-smooth labelling of points. Point set topology introduces the necessary ideas to understand "smoothness". (This is different from other sorts of topics that are still called topology). But that's more abstract than you really need to get, though if your focus is on the math and not the physics, you might find it interesting. MTW (the book you are studying) won't cover this sort of math, though, it's a physics textbook and not a math textbook. 
Wald's text, "General Relativity", touches a bit on the math, but it is not as interesting a read as MTW. Once you have an appropriate smooth labelling of your set of points on the geometric surface you are studying, you can construct the metric as long as you have some way of measuring the distances between nearby points. The space-time metric is slightly more abstract. Rather than measuring the distances between points, you measure the Lorentz intervals between events. And events are just things that happen in a specific place at a specific time, since we are studying space-time. The coordinate labels we assign to events in space-time are similar to the labels we use to describe spatial geometry, but they include an additional label for "time". The time label is arbitrary, just as the spatial labels are - the tensor approach allows this sort of arbitrary labelling. Hopefully the Lorentz interval is familiar to you from your study of special relativity. If your treatment of SR didn't stress the Lorentz interval (many don't), you might try "Space-time Physics" by Taylor and Wheeler, in addition to MTW. You still don't need a gyroscope to find the space-time metric. With the introduction of time, one introduces the possibility of rotation. It turns out one can duplicate the functions of a gyroscope by measuring the Lorentz interval between all points. Getting into the details would take us off-topic, but I will say that one can construct a mathematical model of a ring laser gyroscope, and this is my justification for saying that one doesn't need to postulate the existence of gyroscopes; one simply needs to postulate the existence of the Lorentz interval as a quantity that is independent of the observer and represents the physics one is studying. The space-time metric of flat, empty space is the Minkowski metric, usually written as $$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$$ You can see that there is a minus sign in front of the "time" term. This makes the metric not positive definite - it's called a Lorentzian metric. Light beams in a vacuum are a convenient physical representation of a curve with a zero-length Lorentz interval. It may take some thought to figure out how one can use knowledge of the Lorentz interval to measure other things. It turns out that the key instrument for measuring Lorentz intervals is the clock, which measures something called proper time, which is the timelike interval along the worldline of the clock. This is a different sort of time from the coordinate label that we assigned to time earlier, which was arbitrary. The proper time is something that can be measured with a clock (often called wristwatch time), and it's not a matter of convention; it's a physical quantity that can be measured. Having a clock and using the property that light in a vacuum travels a path of zero Lorentz interval, it's possible to measure various observer-dependent notions of "length" or "spatial distance" between points. It's an observer-dependent notion, because of so-called "Lorentz contraction", so we expect different observers to have different notions of "spatial distance".
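Tying kent davidge's transformation formula to pervect's plane example (a worked check appended for illustration, not a post from the thread): take the flat plane with "local" coordinates $\xi^1 = x$, $\xi^2 = y$ and "general" coordinates $x^1 = r$, $x^2 = \theta$, related by $x = r\cos\theta$, $y = r\sin\theta$. With the Euclidean $\delta_{\alpha\beta}$ playing the role of $\eta_{\alpha\beta}$, the formula gives $$g_{rr} = \cos^2\theta + \sin^2\theta = 1, \qquad g_{r\theta} = \cos\theta\,(-r\sin\theta) + \sin\theta\,(r\cos\theta) = 0, \qquad g_{\theta\theta} = r^2\sin^2\theta + r^2\cos^2\theta = r^2,$$ so the same flat metric reads $ds^2 = dx^2 + dy^2$ in Cartesian coordinates and $ds^2 = dr^2 + r^2\,d\theta^2$ in polar coordinates, exactly the two line elements quoted earlier in the thread.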
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9027832746505737, "perplexity": 296.63443406573526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256812.65/warc/CC-MAIN-20190522123236-20190522145236-00424.warc.gz"}
http://mathhelpforum.com/calculus/11198-stuck-tricky-integral-print.html
# Stuck on a tricky integral

• February 5th 2007, 01:21 PM phack
Stuck on a tricky integral
I tried to get a tidy image of the integral with Integrator, but it told me the answer does not exist. I'm almost certain it does, however. (1+ln(x))*sqrt(1+(xln(x))^2) Help! I tried integration by parts but it turns into a jumbled mess, unless I'm overlooking something.
• February 5th 2007, 01:24 PM phack
Wait a second, I just had some inspiration. I'll let you know if I truly need help...
• February 5th 2007, 01:45 PM phack
Ok, hmmm. I got it down to the integral of: sqrt(1+u^2), by setting x ln(x) = u and (1 + ln(x)) dx = du. I'm stuck now :confused: How do I integrate sqrt(1+u^2)?
• February 5th 2007, 04:24 PM topsquark
Quote:

Originally Posted by phack
Ok, hmmm. I got it down to the integral of: sqrt(1+u^2), by setting x ln(x) = u and (1 + ln(x)) dx = du. I'm stuck now :confused: How do I integrate sqrt(1+u^2)?

Good job spotting the substitution. Now what you want to do is let $u = \sinh(y)$. Then $du = \cosh(y)dy$, so your integral turns into:
$\int (1 + \ln(x)) \sqrt{1 + (x \ln(x))^2} \, dx = \int \sqrt{1 + u^2} \, du =$ $\int \sqrt{1 + \sinh^2(y)} \cdot \cosh(y)dy = \int \cosh^2(y) dy$ $= \int \frac{1}{4}(e^{2y} + e^{-2y} + 2)dy = ...$
You can finish this from here.

-Dan
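Finishing from where the thread leaves off (a standard completion following Dan's substitution, added for reference):

$$\int \cosh^2(y)\,dy = \frac{y}{2} + \frac{\sinh(2y)}{4} + C = \frac{1}{2}\left(y + \sinh(y)\cosh(y)\right) + C,$$

and undoing $u = \sinh(y)$, i.e. $y = \operatorname{arcsinh}(u) = \ln\left(u+\sqrt{1+u^2}\right)$, gives

$$\int\sqrt{1+u^2}\,du = \frac{1}{2}\left[u\sqrt{1+u^2} + \ln\left(u+\sqrt{1+u^2}\right)\right] + C,$$

so with $u = x\ln(x)$ the original integral equals $\frac{1}{2}\left[x\ln(x)\sqrt{1+(x\ln(x))^2} + \ln\left(x\ln(x)+\sqrt{1+(x\ln(x))^2}\right)\right] + C$.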
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931524991989136, "perplexity": 2325.9871501957846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657118950.27/warc/CC-MAIN-20140914011158-00217-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://link.springer.com/article/10.1023%2FA%3A1012495106338
Article: Studia Logica, Volume 68, Issue 2, pp. 173-228

# A Kripke Semantics for the Logic of Gelfand Quantales

• Gerard Allwein (Visual Inference Laboratory, Indiana University)
• Wendy MacCaull (Dept. Mathematics, Statistics and Computer Science, St. Francis Xavier University)

## Abstract

Gelfand quantales are complete unital quantales with an involution, *, satisfying the property that for any element a, if a ⊙ b ⊙ a ≤ a for all b, then a ⊙ a* ⊙ a = a. A Hilbert-style axiom system is given for a propositional logic, called Gelfand Logic, which is sound and complete with respect to Gelfand quantales. A Kripke semantics is presented for which the soundness and completeness of Gelfand logic is shown. The completeness theorem relies on a Stone-style representation theorem for complete lattices. A Rasiowa/Sikorski-style semantic tableau system is also presented, with the property that if all branches of a tableau are closed, then the formula in question is a theorem of Gelfand Logic. An open branch in a completed tableau guarantees the existence of a Kripke model in which the formula is not valid; hence it is not a theorem of Gelfand Logic.

Keywords: lattice representations, quantales, frames, Kripke semantics, semantic tableau
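For readers unfamiliar with the terminology, the background notions assumed by the abstract are standard (recalled here for reference; this paraphrase is not taken from the paper itself): a unital quantale is a complete lattice Q with an associative product ⊙ that distributes over arbitrary joins on both sides and has a unit e with e ⊙ a = a ⊙ e = a, and an involution is a unary operation (-)* satisfying

a** = a,   (a ⊙ b)* = b* ⊙ a*,   (⋁S)* = ⋁{ s* : s ∈ S } for every subset S of Q.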
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788247108459473, "perplexity": 1312.0942181565274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050960463.61/warc/CC-MAIN-20160524004920-00012-ip-10-185-217-139.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?t=395932
# Numerical Optimization (norm minimization)

by sbashrawi Tags: minim, norm, numerical, optimization

HW Helper P: 3,307
Consider the set $\left\{ x \in \mathbb{R}^n : \|x\|^2 = c\right\}$ for some constant $c$; geometrically, what does it represent? Now consider the half space, the boundary of which is a plane. How does the plane intersect the above set, in particular for the minimum value of $c$? This should lead to a simple solution.

P: 55
Numerical Optimization (norm minimization)

Here is my work: minimize $f(x) = \|x\|^2$ subject to $c(x) = a^{T}x + \alpha \geq 0$, so
$$L(x,\lambda) = f(x) - \lambda\, c(x),$$
$$\nabla_x L(x,\lambda) = 2x - \lambda\, \nabla c(x) = 0, \qquad \nabla c(x) = a,$$
so $2x - \lambda a = 0$, which gives $x = \tfrac{1}{2}\lambda a$; and $c(x) = 0$ gives
$$\lambda = -\frac{2\alpha}{\|a\|^2}, \qquad \text{which implies} \qquad x = -\frac{\alpha\, a}{\|a\|^2}.$$
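A quick sanity check of this stationary point (my own verification, tying it back to the geometric hint above): substituting $x^* = -\alpha a/\|a\|^2$ gives

$$a^{T}x^* + \alpha = -\alpha\,\frac{a^{T}a}{\|a\|^2} + \alpha = 0, \qquad \|x^*\|^2 = \frac{\alpha^2}{\|a\|^2},$$

so $x^*$ lies on the bounding plane, and the minimum-$c$ sphere $\{\|x\|^2 = c\}$ is tangent to that plane there. Note that the multiplier $\lambda = -2\alpha/\|a\|^2$ is nonnegative only when $\alpha \le 0$; if $\alpha > 0$, the origin already satisfies $a^{T}x+\alpha \ge 0$ and the minimizer is simply $x^* = 0$.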
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9769374132156372, "perplexity": 1946.2444316812964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535925433.20/warc/CC-MAIN-20140901014525-00321-ip-10-180-136-8.ec2.internal.warc.gz"}
https://en.m.wikibooks.org/wiki/Real_Analysis/Applications_of_Integration
# Real Analysis/Applications of Integration

Integrals are used primarily to work with derivatives. Even though this chapter is titled "Applications of Integration", the term "application" does not mean that this chapter will discuss how integration can be used in life. Instead, the theorems we present are focused on illustrating methods to calculate integrals and on defining properties.

## Theorems on Computation

This heading deals with deriving computation formulas for integration. Although the Fundamental Theorem of Calculus yields a method of calculating integrals that requires knowing the primitive beforehand, it is by no means the only way to compute integrals, especially if the integral is anything more complex than a power function.

### The Primitive

The Definition of the Primitive of f: Given a function f and any other function g, if ${\textstyle \left[\int {f}\right]'=\left[\int {g}\right]'}$ , then f and g differ by a constant, and the primitive is defined as the function that satisfies ${\textstyle \left[\int {f}\right]'=\left[\int {g}\right]'}$  up to a possible constant C. Here f actually refers to the derivative of the integral, not the function within. Thus, "the primitive of f" is a statement that requires a reference to the function one is differentiating, not the output of that differentiation. This is also called the antiderivative.

We will first take a small detour into the nature of what a primitive is. Recall from the Fundamental Theorem of Calculus that, essentially, there exists a function ${\displaystyle f}$  that appears when the derivative of ${\textstyle \int _{a}^{x}{f}}$  is applied. If we use different variables from the theorem (it uses F and f), we can display this process through the following steps:

${\displaystyle u=\int _{a}^{x}{f}\Longleftrightarrow u'={\frac {\operatorname {d} }{\operatorname {d} \!x}}\left[\int _{a}^{x}{f}\right]\implies u'=f}$

We use the implication arrow for the last step to highlight a point. Although the final step is logically an equivalent statement, and thus an iff sign would be fine in that position, this does not mean that reversing it is easy. In fact, only mathematics outside of Real Analysis rigorously shows how reverting that step may be more complex than moving forward. Thus, most first-year real analysis courses will only ask you to move forward.

Yet there are some functions whose primitives are easily found. Almost by design of the educators of mathematics, every function studied in elementary mathematics, including those rigorously defined in the Functions section, the Trigonometry chapter, and the Exponential and Logarithmic headings, has an easily defined primitive that can be derived through the difficult mental exercise of working with the Table of Derivatives in reverse (with leniency offered to things like the power functions and trigonometric functions; trigonometric functions are among the hardest in this list to derive). Despite that, we provide a table below, so that you do not have to work through the mental exercise.

Note: Tables like this make it look as if integration and differentiation cancel each other out, and as if the function and its primitive are interchangeable. They don't and they aren't! Tables like this are often the basis of these confusions. If one is confused, make sure to read the entire material to understand the nuances!
Conversion Table for Primitives

| If the derivative is | then the primitive (antiderivative / indefinite integral) is |
| --- | --- |
| ${\displaystyle a}$ | ${\displaystyle ax+C}$ |
| ${\displaystyle e^{x}}$ | ${\displaystyle e^{x}+C}$ |
| ${\displaystyle {\frac {1}{x}}}$ | ${\displaystyle \log {}+C}$ |
| ${\displaystyle x^{n}}$ | ${\displaystyle {\frac {x^{n+1}}{n+1}}+C}$ |
| ${\displaystyle a^{x}}$ | ${\displaystyle {\frac {a^{x}}{\log a}}+C}$ |
| ${\displaystyle \sin }$ | ${\displaystyle -\cos {}+C}$ |
| ${\displaystyle \cos }$ | ${\displaystyle \sin {}+C}$ |
| ${\displaystyle \tan }$ | ${\displaystyle -\log {\cos {}}+C}$ |
| ${\displaystyle \sec ^{2}{}}$ | ${\displaystyle \tan {}+C}$ |
| ${\displaystyle \sec {}\tan {}}$ | ${\displaystyle \sec {}+C}$ |
| ${\displaystyle {\frac {1}{1+x^{2}}}}$ | ${\displaystyle \arctan {}+C}$ |
| ${\displaystyle {\frac {1}{\sqrt {1-x^{2}}}}}$ | ${\displaystyle \arcsin {}+C}$ |
| ${\displaystyle \sin ^{2}}$ | ${\displaystyle {\frac {x}{2}}-{\frac {\sin {2x}}{4}}+C}$ |
| ${\displaystyle \cos ^{2}}$ | ${\displaystyle {\frac {x}{2}}+{\frac {\sin {2x}}{4}}+C}$ |
| ${\displaystyle \arctan }$ | ${\displaystyle x\cdot \arctan {}-1/2\cdot \log {(1+x^{2})}+C}$ |
| ${\displaystyle \sec }$ | ${\displaystyle \log {(\sec {}+\tan {})}+C}$ |
| ${\displaystyle \csc }$ | ${\displaystyle -\log {(\csc {}+\cot {})}+C}$ |

Hah! If you noticed, each primitive carries a constant C. Coupled with the rather indirect definition of a primitive, the concept of the primitive itself warrants an explanation. Case in point: because of one major consequence of applying the Fundamental Theorem of Calculus and a derivative theorem together (specifically the one that states that if ${\displaystyle f'=g'}$ , then ${\displaystyle f=g+c}$ ), a constant C must be added in order for the conversion to be algebraically correct. As a consequence of this requirement, although differentiating an integral commonly, to use a layman's term, "cancels" both operations, it logically is not a full cancellation, especially when the primitive of f is not known.

However, there are two facts that, when accepted, make the constant C immensely more manageable when working with integrals.

1. For the purposes of integral computation (a trick with indefinite integral definitions), one does not need to worry about constants until the final result is computed.
2. Computing the constant value C is easy if the function's properties allow easy numerical values (such as its roots) to show.

### The Indefinite Integral

This will act as a small aside, but it is essential to understanding the rest of this chapter.

What if one does not wish to write out all those steps above? One wishes instead to simply acknowledge the multiple interpretations of what a primitive is, without being bogged down by explicitly declaring a function f as an antiderivative, while still writing the primitive by filling in the f in ${\displaystyle f+C}$ . Simple. Mathematicians agreed on the definition of an indefinite integral, which is defined as

${\displaystyle \int {f}:=\left\{f+C:\forall C\in \mathbb {R} ,f={\frac {\operatorname {d} }{\operatorname {d} \!x}}\left[\int _{a}^{x}{f}\right]\right\}}$

which, simply put, means the set of all primitives of the statement involving the integral and the derivative. Curiously, this definition is implied when talking about the primitive, and, more importantly, this definition is the technical output when differentiating an integral.
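A quick way to audit a table like this is to differentiate each claimed primitive and check that the original function comes back. The following sketch assumes SymPy is available and checks a handful of rows, including the sin² row; the constant C disappears under differentiation, so it is omitted.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# (function, claimed primitive) pairs, taken from the table above
entries = [
    (sp.exp(x),        sp.exp(x)),
    (1/x,              sp.log(x)),
    (x**3,             x**4/4),
    (1/(1 + x**2),     sp.atan(x)),
    (sp.sin(x)**2,     x/2 - sp.sin(2*x)/4),
]

# Differentiating the claimed primitive should recover the original function.
for f, F in entries:
    print(f, sp.simplify(sp.diff(F, x) - f))   # expect 0 in every row
```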
### Integration by Parts

Theorem: Given differentiable functions u and v, if ${\displaystyle u'}$  and ${\displaystyle v'}$  are continuous, then ${\displaystyle \int {uv'}=uv-\int {u'v}}$  is a valid equation.

Why the variables u and v instead of the more traditional f and g? Using u and v is the traditional naming convention for integration by parts! Aside from tradition, it serves a pragmatic purpose as well, namely that a given function f can be a combination of two functions u and v, which can be placed into this equation to yield a calculated answer for f.

This is an important theorem which is also extremely easy to derive (it only involves algebra and equations). Assuming one knows the product rule for derivatives, the proof is as follows.

Given the product rule, we know that, firstly, we can rearrange the equation in the following manner, and, secondly, that the functions are continuous, since differentiability implies continuity.

{\displaystyle {\begin{aligned}(uv)'&=u'v+uv'\\u'v&=(uv)'-uv'\end{aligned}}}

The functions u′ and v′ are assumed to be continuous. Thus, now that everything is continuous, we can apply an integral over the entire equation and still have a valid statement.

{\displaystyle {\begin{aligned}(uv)'&=u'v+uv'\implies \\\int {u'v}&=\int {(uv)'-uv'\operatorname {d} \!x}\end{aligned}}}

After some algebraic manipulation, which includes using the Fundamental Theorem of Calculus, the final result is as shown.

{\displaystyle {\begin{aligned}\int {u'v}&=\int {(uv)'-uv'\operatorname {d} \!x}\\&=\int {(uv)'}-\int {uv'}\\&=uv-\int {uv'}\\\end{aligned}}}  ${\displaystyle \blacksquare }$

#### Notation

One may have noticed, especially if one has read other literature on mathematics, a notation format used when discussing integration. For example, the full notation for integration by parts is

${\displaystyle \int u(x)v'(x)\,\operatorname {d} \!x=u(x)v(x)-\int v(x)\,u'(x)\operatorname {d} \!x}$

but is often written as

${\displaystyle \int u\,\operatorname {d} \!v=uv-\int v\,\operatorname {d} \!u}$

The symbols ${\displaystyle \operatorname {d} \!u}$  and ${\displaystyle \operatorname {d} \!v}$  are not meant in the normal sense discussed in the section on integrals, namely that the variable after the d is the variable of integration and every other variable is treated as a constant. The symbol after the d here instead refers to a function. Thus, the symbol is being used in a new manner, which can be succinctly described by defining the specific cases

${\displaystyle \operatorname {d} \!v:=v'(x)\operatorname {d} \!x}$
${\displaystyle \operatorname {d} \!u:=u'(x)\operatorname {d} \!x}$

Note that this new definition does not conflict with the original definition of ${\displaystyle \operatorname {d} \!x}$ , since this one applies when the symbol on the right of the d is a function instead of a variable.

### Integration by Substitution

Theorem: Given a differentiable function g, if some function ${\displaystyle f}$  and ${\displaystyle g'}$  are continuous, then ${\displaystyle \int _{g(a)}^{g(b)}{f\operatorname {d} \!x}=\int _{a}^{b}{f\circ g\cdot g'}}$  is a valid equation.

Unlike Integration by Parts, no reference to a variable u is made here. This is because of the complex relationship between how the variables u, g, and f relate in this theorem, which will be explained in another heading.
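Before turning to the substitution formula, here is a minimal computational sketch of the integration by parts theorem from the previous subsection, assuming SymPy is available; the choice u = x and v' = e^x is an illustrative example of mine, not one from the text.

```python
import sympy as sp

x = sp.symbols('x')

# Take u = x and v' = exp(x), so u' = 1 and v = exp(x).
u = x
v = sp.exp(x)

# uv - integral(u'v), i.e. the right-hand side of the theorem above
by_parts = u*v - sp.integrate(sp.diff(u, x)*v, x)

print(by_parts)                                           # x*exp(x) - exp(x)
print(sp.simplify(sp.diff(by_parts, x) - x*sp.exp(x)))    # expect 0: it is an antiderivative of u*v'
```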
For now, let's focus on making sure that the substitution formula is even valid to begin with. Luckily, it is also easy to validate. Like Integration by Parts, it does not use many theorems, relying only on the Second Fundamental Theorem of Calculus and the Chain Rule.

Define the function Λ as the primitive of f, which is valid because a continuous f implies that ${\displaystyle \int _{g(a)}^{g(b)}{f}}$  is subject to the First Fundamental Theorem of Calculus. (Note that this primitive can be taken to be an actual primitive of the function being integrated, rather than a primitive defined only up to a constant, since the First Fundamental Theorem of Calculus shows that a primitive whose derivative equals the function being integrated does indeed exist.) We can then apply the Second Fundamental Theorem of Calculus to derive a computable equation.

{\displaystyle {\begin{aligned}\int _{g(a)}^{g(b)}{f}\implies \exists \Lambda :\Lambda '&=f\\\land \int _{g(a)}^{g(b)}{f}&=(\Lambda \circ g)(b)-(\Lambda \circ g)(a)\end{aligned}}}

Now, let's instead imagine we feed the function g into Λ in place of the usual input variable x. Since Λ ∘ g is a composite function and Λ' = f, we can apply the Chain Rule to it.

${\displaystyle (\Lambda \circ g)'=\Lambda '\circ g\cdot g'=f\circ g\cdot g'}$

Now, if we take an integral of ${\displaystyle (\Lambda \circ g)'}$ , what interval makes things interesting? Clearly [a, b], since by the Second Fundamental Theorem of Calculus,

${\displaystyle \int _{a}^{b}(\Lambda \circ g)'=(\Lambda \circ g)(b)-(\Lambda \circ g)(a)}$

which makes the two statements equal.

${\displaystyle \therefore \int _{a}^{b}{f\circ g\cdot g'}=\int _{g(a)}^{g(b)}{f}.\;\;\blacksquare }$

As mentioned earlier, this equation says little about how it can be applied to compute integrals. Given that much of the content on how integration by substitution is carried out appears on other websites or in other wikibooks (take Wikipedia's page on Integration by Substitution, which provides detailed examples of how to use the method, or the Calculus wikibook, which discusses the method of Integration by Substitution as well), the remainder of this heading discusses how the steps they teach relate to this theorem.

First and foremost, this theorem stands out from what is normally taught because it uses definite integrals, unlike Integration by Parts, which was stated with indefinite integrals. This is because the theorem, and its explanation, are best illustrated using definite integrals, as that highlights where the functions go in order to equate both statements. However, the bounds making up the definite integral can easily be "canceled" through differentiation to create the easier-to-work-with theorem that elementary mathematics teaches:

Theorem: Given a differentiable function g, if a function ${\displaystyle f}$  and ${\displaystyle g'}$  are continuous, then ${\displaystyle \int {f}=\int {f\circ g\cdot g'}}$  is a valid equation. Also notated as

${\displaystyle \int {f(x)}\operatorname {d} \!x=\int {(f\circ u)(x)\operatorname {d} \!u}{\text{ or as }}\int {(f\circ u)(x)\;{\frac {\operatorname {d} \!u}{\operatorname {d} \!x}}\;\operatorname {d} \!x}}$

Also known as u-substitution.
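A minimal sketch of the u-substitution theorem just stated, assuming SymPy is available; the integrand 2x·cos(x²) is an illustrative choice of mine that matches f ∘ g · g' with f = cos and g(x) = x².

```python
import sympy as sp

x, u = sp.symbols('x u')

# Integrand 2x*cos(x^2) matches f(g(x)) * g'(x) with f = cos and g(x) = x^2.
g = x**2
f_of_u = sp.cos(u)

# u-substitution: integrate f in the variable u, then map u back to g(x).
via_substitution = sp.integrate(f_of_u, u).subs(u, g)
direct = sp.integrate(2*x*sp.cos(x**2), x)        # SymPy's own answer, for comparison

print(via_substitution)                           # sin(x**2)
print(sp.simplify(via_substitution - direct))     # expect 0
```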
One may be interested in noting that when Leibniz's notation is used here, it almost appears as if the algebraic operation of division is applied to the ${\displaystyle \operatorname {d} \!x}$  operator. This is a possible motivation for why the notation continues to exist today, as certain theorems can be expressed easily by adhering to algebra-like properties.

#### Usage of Integration by Substitution

Unlike Integration by Parts, whose explanation is easily bundled up with the product rule and the indefinite integral, this theorem requires plenty of explanation in order to understand how it is used. Aside from why it uses definite integrals, which was brushed over earlier and will be implied throughout this heading, we will first discuss what the functions f, g, and u represent.

Note that in nearly all cases (unless you define ${\displaystyle g(x)=x}$ , which doesn't result in an easier calculation), the function you see being integrated, which masquerades as f on the left side of the theorem statement, will actually involve a composition of f and g, where g is the function one wishes to manipulate and f is the remaining part of the overall function to be integrated. So, if the example is

${\displaystyle \int {(2x+5)(x^{2}+5x)^{7}}}$

then the integrand ${\displaystyle (2x+5)(x^{2}+5x)^{7}}$  can be expressed as ${\displaystyle f\circ g\cdot g'}$  with ${\displaystyle f=x^{7}}$  (applied to its input) and ${\displaystyle g=x^{2}+5x}$ , so that ${\displaystyle g'=2x+5}$ .

So for most cases, the method of Integration by Substitution is actually about sussing out the function g from the overall integrand, defining u = g, and reducing the overall function into one whose integral can be found directly. For this example,

${\displaystyle f\circ g\cdot g'=g'\cdot g^{7}}$

which can easily be computed, since now the only focus is on g^7, i.e. apply Integration by Substitution. So, the variable u used in most explanations simply equals g under simple circumstances. For more complex situations, u will not equal g, a situation explained in the next heading.

#### Inverting Integration by Substitution

The usual method of applying Integration by Substitution, explained further in other wikibooks, is to find some function u = g(x), built from the input variable x in the overall integrand, whose derivative can also be found in the integrand; substitute it into the integrand, making sure that this new function can replace all instances of the input variable x along with other terms involving u; and apply the theorem.

However, it is possible to do the inverse. What do we mean? Instead of finding a function u = g(x) that can replace the input variable x, one can find the inverse function x = g^{-1}(u) that cancels out sections of the overall integrand as well as the input variable x, and then integrate with respect to u. In other words, if one ever needs to "move" the function from the dx side over to the du side, which involves inverting the function, this corollary is being used.

We know that for the variables x and u, inverting is as easy as inverting the function g and using x and u as the respective inputs. However, do we know whether we can "move" functions from one dx to the other du by replacing them with the inverse? After all, dx and du are not variables, but operators. The following corollary proves yes.
We are aware from the previous heading that in the expression ${\textstyle \int {f}}$  the integrand may actually be a function composed from f and g; the f in the indefinite integral is not the same f. As such, we can expand ${\displaystyle \int {h}=\int {f\circ g}}$ .

Let's write out the equation in full notation.

${\displaystyle \int {f\circ g}=\int {f\circ g(x)\operatorname {d} \!x}}$

Now let's check what happens to the equation when we define ${\displaystyle x=g^{-1}(u)}$  and ${\displaystyle \operatorname {d} \!x=[g^{-1}]'(u)\operatorname {d} \!u}$ .

${\displaystyle \int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot [g^{-1}]'(u)\;\operatorname {d} \!u}}$

Returning to the original full notation, let's multiply by 1, in the form ${\displaystyle g'(x)/g'(x)}$ . Note that g'(x) must never equal 0 for this step to be valid. But how do we arrive at the same right-hand side as above?

${\displaystyle \int {f\circ g}=\int {f\circ g(x)\cdot {\frac {1}{g'(x)}}\cdot g'(x)\operatorname {d} \!x}}$

Easy! Suppose ${\displaystyle u=g(x)}$  and ${\displaystyle \operatorname {d} \!u=g'(x)\operatorname {d} \!x}$ . Note that we also know that ${\displaystyle (g^{-1}\circ u)(x)=x}$ , which is used in the equation.

${\displaystyle \int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot {\frac {1}{g'\circ g^{-1}(u)}}\operatorname {d} \!u}}$

Apply the formula for the derivative of an inverse function, ${\displaystyle [g^{-1}]'(u)={\frac {1}{g'\circ g^{-1}(u)}}}$ .

${\displaystyle \int {f\circ g(x)\operatorname {d} \!x}=\int {f(u)\cdot [g^{-1}]'(u)\;\operatorname {d} \!u}}$

Both derivations, although they assign different roles to the variables, arrive at the same equality on both sides. This implies that whichever variable one chooses, integration by substitution can still be used. ${\displaystyle \blacksquare }$
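A minimal sketch of the corollary, assuming SymPy is available; the choice f = exp and g(x) = 2x + 1 (so that g is invertible on all of the real line) is an illustrative assumption of mine.

```python
import sympy as sp

x, u = sp.symbols('x u')

# Integrand f(g(x)) with f = exp and g(x) = 2x + 1, which is invertible:
# g^{-1}(u) = (u - 1)/2, so (g^{-1})'(u) = 1/2.
g = 2*x + 1
g_inv = (u - 1)/2

# Right-hand side of the corollary: integrate f(u) * (g^{-1})'(u) in u,
# then substitute u = g(x) to return to the original variable.
rhs = sp.integrate(sp.exp(u)*sp.diff(g_inv, u), u).subs(u, g)

direct = sp.integrate(sp.exp(g), x)       # direct antiderivative, for comparison

print(rhs)                                # exp(2*x + 1)/2
print(sp.simplify(rhs - direct))          # expect 0
```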
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 90, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482966661453247, "perplexity": 450.76361746491426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00236.warc.gz"}
https://www.postnetwork.co/classification-of-second-order-partial-differential-equations/
# Classification of Second Order Partial Differential Equations

Let z be a function of two independent variables x and y. Consider a second order partial differential equation of the form

Rr + Ss + Tt + f(x, y, z, p, q) = 0

where p = ∂z/∂x, q = ∂z/∂y, r = ∂²z/∂x², s = ∂²z/∂x∂y, t = ∂²z/∂y², and R, S and T are continuous functions of x and y in a domain of the xy-plane. Then:

If S² - 4RT > 0, the equation is hyperbolic.
If S² - 4RT = 0, the equation is parabolic.
If S² - 4RT < 0, the equation is elliptic.

Example: 3 uxx + 6 uxy + 2 uyy = 0

Here R = 3, S = 6 and T = 2, and S² - 4RT = 36 - 24 = 12 > 0, so the equation is hyperbolic.
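The classification rule is mechanical, so it is easy to wrap in a small helper. The sketch below is a Python illustration of my own, with R, S and T taken as constants for simplicity (in general they depend on x and y, so the classification is pointwise); it reproduces the example above.

```python
def classify_second_order_pde(R, S, T):
    """Classify R*r + S*s + T*t + f(x, y, z, p, q) = 0 by the sign of S^2 - 4RT."""
    disc = S**2 - 4*R*T
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Example from the text: 3*u_xx + 6*u_xy + 2*u_yy = 0, where S^2 - 4RT = 36 - 24 = 12 > 0
print(classify_second_order_pde(3, 6, 2))   # hyperbolic
```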
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8124265670776367, "perplexity": 2367.0776106949943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00401.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3850472
# Use standard identities to express

by menco. Tags: express, identities, standard

menco:
1. The problem statement, all variables and given/known data
Use standard identities to express sin(x + pi/3) in terms of sin x and cos x.

2. Relevant equations
sin(A + B) = sin A cos B + cos A sin B

3. The attempt at a solution
sin(x)cos(pi/3) + sin(pi/3)cos(x) = 0.5 sin x + 0.8660 cos x
I'm just not sure if I need to simplify it even further, and hopefully I'm on the right track. Thanks.

HW Helper (rock.freak667): That should be fine. I would just express 0.866 as √3/2, as that is an exact value whilst 0.866 is just an approximation.

Reply: That seems right to me. However, I'd keep sin(π/3) as $\frac{\sqrt{3}}{2}$. Edit: I didn't see rock.freak667's reply.

menco: Thanks for the response :)
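A quick numerical spot-check of the identity discussed in the thread, assuming NumPy is available; the sample points are arbitrary.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 7)

lhs = np.sin(x + np.pi/3)
rhs = 0.5*np.sin(x) + (np.sqrt(3)/2)*np.cos(x)   # exact coefficients: cos(pi/3) and sin(pi/3)

print(np.allclose(lhs, rhs))   # expect True
```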
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460870623588562, "perplexity": 1691.8171456377074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163044331/warc/CC-MAIN-20131204131724-00087-ip-10-33-133-15.ec2.internal.warc.gz"}