https://brilliant.org/problems/atomic-spectra/ | # Atomic spectra
Chemistry Level 2
What would be the wavelength of the radiation emitted from a $$\text{Li}^{2+}$$ ion if the transition occurred from $$n=3$$ to $$n=2?$$
Hint: The transition from $$n=3$$ to $$n=2$$ in a neutral hydrogen atom has a wavelength of $$656.1$$ nm.
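The scaling the hint points to (added here as a quick check, not part of the original problem page): for a one-electron, hydrogen-like ion the level energies scale as $$Z^2$$, so a given $$n_i \to n_f$$ transition wavelength scales as $$1/Z^2$$.

```python
# Hydrogen-like scaling sketch: the wavelength of a given transition scales as 1/Z^2.
lambda_H = 656.1      # nm, the 3 -> 2 transition in hydrogen (from the hint)
Z = 3                 # nuclear charge seen by the single electron in Li^2+
lambda_Li = lambda_H / Z**2
print(f"{lambda_Li:.1f} nm")   # about 72.9 nm
```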
http://math.stackexchange.com/questions/128341/proof-for-an-inequality?answertab=active | # Proof for an Inequality
Let $e^{e^x}=\sum\limits_{n\geq0}a_nx^n$, prove that
$$a_n\geq e(\gamma\log n)^{-n}$$
for $n\geq2$, where $\gamma$ is some constant greater than $e$.
Can I ask which topics have recently been covered in your course or book? – bgins Apr 5 '12 at 21:42
I present three different approaches below, which may or may not appeal to you in helping you to formulate your own answer.
I think you can get this from the Taylor series at $0$ just by differentiating. If $y=y^{(0)}$ is your function and $y^{(n)}=yf_n(e^x)$, then $f_0(t)=1$, $y'=ye^x\implies f_1(t)=t$ and $y^{(n+1)}=y'f_n(e^x)+y\,e^xf_n'(e^x)$ $\implies$ $f_{n+1}(t)=f_1f_n+t\,f_n'$. Now $a_n=\frac{e\,f_n(1)}{n!}$ and $f_n(1)=\href{http://en.wikipedia.org/wiki/Bell_number}{B_n}$ (as @Autolatry points out) and it shouldn't be hard to show that this sequence is increasing and has (more than) the desired growth.
Upon further reflection, we want to show that for $n\ge2$, $$a_n \ge e \left( \gamma \, \log n \right)^{-n}$$ $$\left( \gamma \, \log n \right)^{-n} \le \frac{a_n}{e} = \frac{B_n}{n!}$$ $$\gamma \, \log n \ge \left( \frac{a_n}{e} \right)^{-1/n} = \left( \frac{n!}{B_n} \right)^{1/n}$$ $$\gamma \ge \sup \gamma_n \qquad\text{for}\qquad \gamma_n = \frac1{\log n} \left( \frac{a_n}{e} \right)^{-1/n} = \frac1{\log n}\left( \frac{n!}{B_n} \right)^{1/n}$$
Experiment suggests that $\gamma_n$ has a global minimum at $\gamma_{37}=0.56352\,15372\,44847$, lies below its namesake, the Euler-Mascheroni constant, $0.57721\,56649\,01532\cdots$, for $17\le n\le 114$, and may have its global maximum at $\gamma_2=1.44269\,50408\,88963$, depending on its asymptotic value. Using a recent bound (Berend-Tassa 2010) for $B_n$ and Stirling's formula for the factorial, $$\gamma_n \ge \frac{\log(n+1)}{0.792\,n\,\log n}\Bigl( n! \Bigr)^{1/n} \approx \frac{\log(n+1)}{0.792\,\,n\log n}\cdot\frac{n}{e}\cdot\left(2\pi n\right)^{1/2n} \rightarrow\frac1{0.792\,e}\approx0.46449\,,$$ so that $\gamma=\gamma_2=\sup_{n\ge2}\gamma_n$ is in fact the best constant we can choose.
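A numerical sketch of these quantities (my addition, not part of the original answer), computing $\gamma_n=\frac1{\log n}\left(n!/B_n\right)^{1/n}$ with the Bell numbers generated from the standard recurrence $B_{n+1}=\sum_k\binom nk B_k$; the helper names are illustrative.

```python
from math import comb, factorial, log

def bell_numbers(N):
    """Return B_0, ..., B_N via B_{n+1} = sum_k C(n, k) * B_k."""
    B = [1]
    for n in range(N):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

B = bell_numbers(120)
gamma = {n: (factorial(n) / B[n]) ** (1.0 / n) / log(n) for n in range(2, 121)}
for n in (2, 17, 37, 114):
    print(n, gamma[n])   # gamma_2 ~ 1.4427, gamma_37 ~ 0.5635, cf. the values above
```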
If we start from $y=e^{e^x}=\sum_{n=0}^{\infty}a_n\,x^n$ and differentiate to get $\sum_{n=0}^{\infty}(n+1)a_{n+1}\,x^n=y'=e^x\cdot e^{e^x}=\left(\sum_{n=0}^{\infty}\frac{x^n}{n!}\right)\left(\sum_{n=0}^{\infty}a_n\,x^n\right)=\sum_{n=0}^{\infty}\left(\sum_{k=0}^n\frac{a_k}{(n-k)!}\right)x^n$, then we need to show (perhaps inductively) that the recursion $(n+1)a_{n+1}=\sum_{k=0}^n\frac{a_k}{(n-k)!}$ implies the desired inequality. Now $a_0=a_1=e\cdot1$ and the first few values we care about are $a_2=e\cdot\frac22$, $a_3=e\cdot\frac56$, $a_4=e\cdot\frac{15}{24}$, which as we already know satisfy our inequality for $\gamma\ge\gamma_1$ for our inductive base ($n=2$). As inductive hypothesis (with cumulative induction), we assume the inequality for $k\le n$: $$a_k \ge e \left( \gamma \, \log k \right)^{-k}$$ from which it follows by induction (and from the recursion) that \eqalign{ \frac{a_{n+1}}{e}\,\left(\gamma\log(n+1)\right)^{n+1} &\ge \frac{\left(\gamma\log(n+1)\right)^{n+1}}{n+1} \sum_{k=0}^n \frac{(\gamma\log k)^{-k}}{(n-k)!} \\ & > \frac{\left(\gamma\log(n+1)\right)^{n+1}}{n+1} \sum_{k=0}^n \frac{(\gamma\log n)^{-k}}{(n-k)!} \\ & > \frac{\gamma\log(n+1)}{n+1} \sum_{k=0}^n \frac{(\gamma\log n)^{n-k}}{(n-k)!} \\ & > \frac{\gamma\log(n+1)}{n+1} \sum_{k=0}^n \frac{(\gamma\log n)^k}{k!} \\ & \color{red}{>} \frac{\gamma\log(n+1)}{n+1} \left(e^{\gamma\log n} - \frac{(\gamma\log n)^{n+1}}{(n+1)!} \right) \\ & = \frac{\gamma\log(n+1)}{n+1} \left(n^\gamma - \frac{(\gamma\log n)^{n+1}}{(n+1)!} \right) \\ & \color{blue}{\ge 1 \qquad\text{(what we want!)}} } where the last inequality (in red) follows from the error theorem for Taylor series. On the subsequent line, the two terms in parentheses exhibit opposite asymptotic behavior: the former grows toward $\infty$ for all $\gamma > 1$, while the latter decays toward $0$. We need only show that the last expression is $\color{blue}{\ge1}$ for some fixed $\gamma$ and all $n\ge2$. Clearly, this is true asymptotically for all $\gamma > 1$ by the relative asymptotic growth of $\log n$ versus $n^{\gamma-1}$ (e.g. using L'Hopital's rule). I suspect a slight modification of this argument would yield the result more elegantly.
Due to the expansion of the function $e^{e^x}$, we know that $a_n=\frac{eB_n}{n!}$. – Riemann Apr 5 '12 at 13:51
Yes, of course. Thanks! – bgins Apr 5 '12 at 14:41
Hint:
$$e^{e^{x}} = e \sum_{k=0}^{\infty} \frac{x^{k}B_{k}}{k!}$$
Where $B_{k}$ is the $k$-th Bell number which is the number of partitions of a set with k entries, or the number of equivalence relations on it. Starting with $B_{0} = B_{1} = 1$, the first few Bell numbers are: $$1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975 \ldots$$
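For completeness (an added sketch, not from the original answer), the listed values can be reproduced with the Bell triangle, in which each row starts with the last entry of the previous row and each subsequent entry is the sum of its left neighbour and the entry above that neighbour:

```python
def bell(n):
    """Return [B_0, ..., B_n] using the Bell triangle."""
    row, bells = [1], [1]
    for _ in range(n):
        new = [row[-1]]              # new row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)  # each entry: left neighbour + entry above it
        row = new
        bells.append(row[0])         # the first entry of each row is the next Bell number
    return bells

print(bell(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```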
http://wikieducator.org/Thread:Bullet_and_Number_Lists_(2) | # Bullet and Number Lists
Hi
Can you specify which examples are confusing and which ones you would like to replace?
Thanks
19:31, 11 July 2009
https://www.physicsforums.com/threads/complexification-of-a-vector-space.182966/ | # Complexification of a vector space
1. Sep 4, 2007
### quasar_4
Hello all,
I've just learned a bit about the complexification of a real vector space V to include scalar multiplication by complex numbers. A bit of confusion has ensued, which I am hoping someone can help me with conceptually: 1) how does one generate a basis for the new space Vc? It seems that one obtains the basis by somehow extending the basis for V, but I am very confused about this. In fact, I'm not exactly sure how vectors in the new space should be defined at all. :yuck:
2) does anyone know how one would prove that the dim(Vc)=dim(V)? I'm not asking for homework; I've just heard that this is the case, but haven't seen anything proved.
3) Under what circumstances would one want to complexify V? anyone have some good examples?
Thanks.
2. Sep 4, 2007
### dextercioby
1. If the vectors in the real space are C-linearly independent, then they provide a basis for the complexification.
2. This part is valid in the case the condition in 1. is satisfied.
3. Complexification of Lie algebras is a tool useful for representation theory.
3. Sep 4, 2007
### matt grime
I don't like the previous answer.
1. the bases are the same.
2. you mean dim_C of one and dim_R of the other - since, by 1, the basis sets are the same this is a vacuous question.
3. extension of scalars is in general an important concept, far beyond dexter's note. Please google for it.
4. Sep 5, 2007
### mathwonk
complexification makes all characteristic polynomials split.
5. Sep 18, 2007
### quasar_4
the basis thing is still a bit odd. According to my class if the basis for V is given by {e1,e2,...,en} then the basis for Vc is given by {(e1,0),(e2,0),...,(en,0)}. But then, I guess that is basically the same except that we've sneakily made the vectors into ordered pairs. Somehow this is now representative of VxV?
6. Sep 18, 2007
### mathwonk
the complexification of the space with real basis (1,0), (0,1), is the complex space with basis (1,0),(0,1).
i.e. the complexification of the space of ordered pairs of reals, is the space of ordered pairs of complexes.
or try this one: if V is a real vector space, then since C is also a real vector space, you can consider all real linear maps from V to C. this is a complex vector space since you can multiply the values by complex numbers.
or you could let V* be the real dual of V and consider all real linear maps V*-->C.
then given an element v of V, you could define a map v**:V*-->C by sending the real linear map t in V*, to the complex number t(v) = v**(t). this shows how to embed V into the new space HomR(V*,C). so we could regard HomR(V*,C) as a complex vector space which contains V, namely as the subspace of maps with images in R, i.e., then V is the subspace HomR(V*,R) = V** = V (in finite dimensions only of course.)
then any real basis v1,...vn of V corresponds to a real basis of HomR(V*,R) and also a complex basis v1**,...,vn** of HomR(V*,C).
how do you like them apples? i made that up just for you.
Last edited: Sep 18, 2007
7. Sep 18, 2007
### mathwonk
so i guess i want my complexification of V to be a complex vector space W containing V as a real subspace and such that W equals V + iV, which is a direct sum decomposition of the real space underlying W, as a sum of two real subspaces.
it should follow that a subset of V is real independent if and only if it is complex independent as a subset of W, and spans V over R iff it spans W over C.
In particular, any real basis of V is a complex basis of W.
8. Sep 18, 2007
### mathwonk
i am getting no feedback on my post 6, of which i am somewhat proud. i.e. i have given you a construction that does not mention tensor products, hence is elementary.
9. Sep 18, 2007
### mathwonk
as stated above, after complexifying, there should be more eigenvectors.
i.e. the main point is not just complexifying the spaces but also the maps.
i.e. every real linear map of V-->V should become a complex linear map from the complexification to itself. and presumably with the same characteristic polynomial?
so a 90 degree rotation, becomes multiplication by i or -i.
presumably then one can transform the proof of diagonalizability of the complex operator into a classification of the real operators. i.e. we should be able to use the complex spectral theorem to deduce a real classification for orthogonal matrices.
i.e. just as in ode, the diff eq f''+f = 0 has complex basis of solutions e^(ix) and e^(-ix), but real solutions cos x and sin x.
i guess the complexification of the real solution space should be the complex solution space. and just as letting d act on sin, cos, rotates them into each other, letting it act on those exponentials, gives eigenfunctions.
you are beginning to make this seem interesting, once you combine it with ode, as all linear algebra should be!!!! but seldom is.
Last edited: Sep 18, 2007
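(A small numerical illustration of the 90-degree-rotation remark above, added here and not part of the thread: over the reals the rotation matrix has no real eigenvectors, but on the complexified space it diagonalizes with eigenvalues i and -i.)

```python
import numpy as np

# Rotation of the plane by 90 degrees: no real eigenvectors, but the
# complexification diagonalizes it with eigenvalues i and -i.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals, _ = np.linalg.eig(R)
print(vals)   # [0.+1.j 0.-1.j]
```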
10. Sep 18, 2007
### mathwonk
here is a little easier sounding version of that construction. take any n objects and call them e1,...,en.
then define a vector space V as the set of all functions from the set {e1,...,en}-->R.
Then define the complexification of V to be the set of all functions from
{e1,...,en}-->C. notice that in both cases a basis consists of the n functions ei*, each taking exactly one of the ei to 1, and the others to 0.
thus the same set of n things is an R basis for V and a C basis for its compl.....tion.
but i regress, as it gets late. i will retire before i become rude again.
11. Sep 20, 2007
### mathwonk
notice that the complexification of a real space has more structure than a general complex space, i.e. it also has a real conjugation operator on it, defined by composing the functions above by complex conjugation.
or if you like by writing each vector as an element of V + iV, and using the identity on the first summand and the minus map on the second one.
12. Sep 20, 2007
### mathwonk
13. Sep 27, 2007
### quasar_4
wow - it's taken a few days for my brain to absorb all of this. It is making more sense now though. So, should one care to take a coordinate vector [x] on the basis B for V and rewrite [x] on the basis B' for the complexified space, why would the components of [x] change at all? wouldn't our change of basis matrix just be the identity?
14. Sep 27, 2007
### quasar_4
in fact, that seems like a trivial question. There isn't effectively a "change" of basis at all, the only thing happening is that you'd have to rewrite your vectors as an ordered pair (right?) so [x]={x1, x2,...,xn} rewritten for the basis for W becomes [x]={(x1,0),(x2,0),...,(xn,0)}. Is that correct?
15. Sep 27, 2007
### quasar_4
oh, and, I liked the ode stuff. To be honest, no one's even mentioned odes to me since about 3 years ago. I wish they did tie them more into linear algebra. It would make for a lovely depth of understanding on my diff eqs.
https://www.physicsforums.com/threads/question-about-molecular-orbital-theory.595622/ | # Question about molecular orbital theory
1. Apr 11, 2012
### reyrey389
I know that the sigma 2p bonding orbital could be less/higher in energy than the pi 2p bonding (based on if it is C2,N2,B2 etc), but
Why is the sigma 2p antibonding orbital always higher in energy than the pi 2p antibonding one?
2. Apr 12, 2012
### DrDu
When you neglect the s orbitals in a first step, the sigma bonding p orbital would lie below the pi bonding p orbitals. However, in a second step the sigma bonding p orbital mixes with the sigma bonding s orbital, which shifts it up in energy. The sigma antibonding p orbital can also be shifted up by that mechanism, although the effect is weaker due to the larger energetic separation, and it would not change the ordering of the orbitals.
http://ptsymmetry.net/?p=1804&utm_source=rss&utm_medium=rss&utm_campaign=pt-non-pt-symmetric-and-non-hermitian-hellmann-potential-approximate-bound-and-scattering-states-with-any-ell-values | ## PT-/non-PT-Symmetric and non-Hermitian Hellmann Potential: Approximate Bound and Scattering States with Any $$\ell$$-Values
Altug Arda, Ramazan Sever
We investigate the approximate bound state solutions of the Schrodinger equation for the PT-/non-PT-symmetric and non Hermitian Hellmann potential. Exact energy eigenvalues and corresponding normalized wave functions are obtained. Numerical values of energy eigenvalues for the bound states are compared with the ones obtained before. Scattering state solutions are also studied. Phase shifts of the potential are written in terms of the angular momentum quantum number $$\ell$$.
http://arxiv.org/abs/1409.0518
Quantum Physics (quant-ph)
http://www.cds.caltech.edu/~murray/wiki/index.php?title=CDS_140b_Spring_2014_Homework_1&oldid=17018 | # CDS 140b Spring 2014 Homework 1
R. Murray, D. MacMartin Issued: 2 Apr 2014 (Wed) CDS 140b, Spring 2014 Due: 9 Apr 2014 (Wed)
Note: In the upper left hand corner of the second page of your homework set, please put the number of hours that you spent on this homework set (including reading).
1. Perko, Section 2.14, problem 1
(a) Show that the system
$$\begin{aligned} \dot x&=a_{11}x+a_{12}y+Ax^2-2Bxy+Cy^2\\ \dot y&=a_{21}x-a_{11}y+Dx^2-2Axy+By^2 \end{aligned}$$
is a Hamiltonian system with one degree of freedom; i.e., find the Hamiltonian function $H(x,y)$ for this system.
(b) Given $f\in C^2(E)$, where $E$ is an open, simply connected subset of $\mathbb R^2$, show that the system $\dot{x}=f(x)$ is a Hamiltonian system on $E$ iff $\nabla\cdot f(x)=0$ for all $x\in E$.
2. Perko, Section 2.14, problem 7. Show that if $x_0$ is a strict local minimum of $V(x)$ then the function $V(x)-V(x_0)$ is a strict Lyapunov function (i.e., $\dot{V}<0$ for $x\neq x_0$) for the gradient system $\dot x=-\mathrm{grad}V(x)$.
3. Perko, Section 2.14, problem 12. Show that the flow defined by a Hamiltonian system with one degree of freedom is area preserving. Hint: Cf. Problem 6 in Section 2.3
4. A planar pendulum (in the $x$-$z$ plane) of mass $m$ and length $\ell$ hangs from a support point that moves according to $x=a\cos (\omega t)$. Find the Lagrangian, the Hamiltonian, and write the first-order equations of motion for the pendulum.
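Related to the divergence criterion in problem 1(b) above, here is a minimal SymPy sketch (my addition, not part of the problem set) checking that the vector field of problem 1(a) is divergence-free; the symbol names are illustrative.

```python
import sympy as sp

x, y = sp.symbols('x y')
a11, a12, a21, A, B, C, D = sp.symbols('a11 a12 a21 A B C D')

# Vector field from problem 1(a)
f1 = a11*x + a12*y + A*x**2 - 2*B*x*y + C*y**2
f2 = a21*x - a11*y + D*x**2 - 2*A*x*y + B*y**2

# Planar Hamiltonian systems are exactly the divergence-free ones (problem 1(b)).
print(sp.simplify(sp.diff(f1, x) + sp.diff(f2, y)))   # 0
```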
http://mathhelpforum.com/calculus/41199-curvature.html | # Math Help - curvature
1. ## curvature
find the curvature K of the curve....
r(t) = 4t i + 3 cos t j + 3 sin t k
i am confused after i find r'(t)...
2. Originally Posted by chris25
find the curvature K of the curve....
r(t) = 4t i + 3 cos t j + 3 sin t k
i am confused after i find r'(t)...
Here is the formula
$\kappa=\frac{|\frac{dr}{dt} \times (\frac{dr}{dt})^2|}{|\frac{dr}{dt}|^3}$
I have to go to work. I will finish later if no one else has.
Good luck.
3. Originally Posted by TheEmptySet
Here is the formula
$\kappa=\frac{|\frac{dr}{dt} \times \color{red}(\frac{dr}{dt})^2\color{black}|}{|\frac {dr}{dt}|^3}$
I have to go to work. I will finish later if no one else has.
Good luck.
As far as I know, it's $\kappa = \frac{\left | \frac{dr}{dt} \times \frac{d^2r}{dt^2}\right |}{\left | \frac{dr}{dt} \right |^3} = \frac{\left | \dot {r} \times \ddot {r} \right |}{\left | \dot{r} \right |^3}$
See here if you wonder how to get the formula:
Curvature -- from Wolfram MathWorld
4. Originally Posted by wingless
As far as I know, it's $\kappa = \frac{\left | \frac{dr}{dt} \times \frac{d^2r}{dt^2}\right |}{\left | \frac{dr}{dt} \right |^3} = \frac{\left | \dot {r} \times \ddot {r} \right |}{\left | \dot{r} \right |^3}$
See here if you wonder how to get the formula:
Curvature -- from Wolfram MathWorld
@ wingless You are correct Thanks that is what I get for trying to do it in a hurry.
$r(t)=<4t,3\cos(t),3\sin(t)>$
$\frac{dr}{dt}=<4,-3\sin(t),3\cos(t)>$
$\frac{d^2r}{dt^2}=<0,-3\cos(t),-3\sin(t)>$
Now we take the cross product to get
$\begin{vmatrix}
i & j & k \\
4 & -3\sin(t) & 3\cos(t) \\
0 & -3\cos(t) & -3\sin(t) \\
\end{vmatrix}=(9\sin^2(t)+9\cos^2(t))\vec i-(-12\sin(t)-0)\vec j+(-12\cos(t)-0)\vec k$
Simplifying and taking the magnitude we get
$=|<9,12\sin(t),-12\cos(t)>|=\sqrt{(9)^2+(12\sin(t))^2+(-12\cos(t))^2}$
$=\sqrt{81+144(\sin^{2}(t)+\cos^{2}(t))}=\sqrt{225} =15$
Now we need the magnitude of
$\bigg| \frac{dr}{dt}\bigg|=\sqrt{4^2+(-3\sin(t))^2+(3\cos(t))^2}=\sqrt{16+9(\sin^2(t)+\cos^2(t))}=\sqrt{25}=5$
Finally we get
$\kappa=\frac{15}{5^3}=\frac{3}{25}$
Yeah.
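(A quick symbolic cross-check of the computation above, added here with SymPy; it is not part of the original thread.)

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Matrix([4*t, 3*sp.cos(t), 3*sp.sin(t)])
r1, r2 = r.diff(t), r.diff(t, 2)

kappa = r1.cross(r2).norm() / r1.norm()**3
print(sp.simplify(kappa))   # 3/25
```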
https://www.physicsforums.com/threads/probability-question-on-school-classes.404227/ | # Homework Help: Probability question on school classes
1. May 19, 2010
1. The problem statement, all variables and given/known data
In a high-school graduating class of 100 students, 54 studied math, 69 studied history, and 35 studied both math and history. If 1 of the students is selected at random, find the probability that
(a) the student took math or history;
(b) the student did not take either of these subjects;
(c) the student took history but not math.
2. Relevant equations
P = n/N
3. The attempt at a solution
Ok. I am thinking that I actually need to figure out how many students took math only and history only (pretty sure this is just algebra).
So I know that there are 54 math students; this must include those who studied both. Thus, the number of students who studied *math only* is 54 - 35 = 19.
Similarly, those who took History only 69 - 35 = 34.
So for (a) P(M U H) = (19 + 34) / 100 = 53/100 ... but this is wrong. Book says 22/25. So I am off to a bad start. What am I screwing up here?
2. May 19, 2010
### gabbagabbahey
"or" is an inclusive word, so you also need to include the 35 who took both math and history
3. May 19, 2010
I guess an alternative approach to this would be
$P(M\cup H) = P(M) + P(H) - P(M\cap H)$
where M is the math set, H is the History set, etc.
Just curious as to why my first attempt fails?
EDIT:
I see. I was wondering about that and had somehow convinced myself that it was exclusive. In general, is "or" inclusive in probability? How about math in general?
4. May 19, 2010
### gabbagabbahey
Or is inclusive in math, probability and computer science... The only instance where "or" is exclusive, that comes to mind, is in common everyday conversational usage.
5. May 19, 2010
Hence why my waitress never says, "that comes with soup or salad or both."
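(A quick numerical check of the thread's numbers by inclusion-exclusion, added here; it is not part of the original discussion.)

```python
# Inclusion-exclusion on the class of 100 students.
N, n_math, n_hist, n_both = 100, 54, 69, 35

p_m_or_h  = (n_math + n_hist - n_both) / N   # (a) 88/100 = 22/25
p_neither = 1 - p_m_or_h                     # (b) 12/100
p_h_not_m = (n_hist - n_both) / N            # (c) 34/100
print(p_m_or_h, p_neither, p_h_not_m)        # 0.88 0.12 0.34
```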
https://brilliant.org/problems/5-real/ | # 5 real
Algebra Level 3
I have 5 real numbers whose product is non-zero. Now, I increase each of the 5 numbers by 1 and again multiply all of them. Is it possible that this new product is the same as the non-zero number obtained earlier?
https://math.stackexchange.com/questions/585176/splitting-integrand/585185 | # Splitting integrand
Consider the integral:
$\int \frac{1}{(x+3)(5+2x)}\,dx$
My teacher splits this first into two unknown fractions with two unknown numerators, namely:
$\frac{A}{(x+3)}+\frac {B}{(5+2x)}$
He then goes on to perform some sort of magic to find that A = -1 and B = 2
Knowing that my teacher is indeed NOT a magician, I come to you for assistance.
What is my teacher doing and why does it work? Thank you.
• Partial fractions – user61527 Nov 29 '13 at 1:07
• He's a mathemagician.... – Eleven-Eleven Nov 29 '13 at 1:14
• @ChristopherErnst Apparently, my professor had that written on one of his teacher evaluation forms. :) – apnorton Nov 29 '13 at 15:45
• When my students know how to do a step relatively well and I want to skip the step I wave my hands magically and they all sigh at me.... :) – Eleven-Eleven Nov 29 '13 at 15:55
If you get a common denominator of $(x+3)(5+2x)$, then the numerators must be equal. Thus, $$1=A(5+2x)+B(x+3)$$ $$1=5A+2Ax+Bx+3B$$ $$0x+1=(2A+B)x+(5A+3B)$$ This means that $$2A+B=0$$ $$5A+3B=1$$ Solve for A and B using substitution or whatever method you prefer. Now you can solve the integral.
Because $\frac{A}{(x+3)}+\frac {B}{(5+2x)} = \frac{A(5+2x)+B(x+3)}{(5+2x)(x+3)} = \frac{(2A+B)x+(5A+3B)}{(5+2x)(x+3)}=\frac{1}{(5+2x)(x+3)}$, this would mean that $2A+B=0$ and $5A+3B=1$. Solving these equations gives $A=-1$ and $B=2$.
You would want to do the following $$A/(x+3)+B/(5+2x) =1/[(x+3)(5+2x)]$$ $$A(5+2x) + B(x+3) = 1$$ Essentially you've multiplied both sides by the linear factors of your initial integrand. Equate the coefficients of x and 1.
$$2A+B=0$$ $$5A+3B=1$$
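(A quick symbolic check with SymPy, added here and not from the original thread; `apart` performs exactly this partial-fraction decomposition.)

```python
import sympy as sp

x = sp.symbols('x')
print(sp.apart(1 / ((x + 3) * (5 + 2*x)), x))   # 2/(2*x + 5) - 1/(x + 3), i.e. A = -1, B = 2
```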
https://share.cocalc.com/share/5d54f9d642cd3ef1affd88397ab0db616c17e5e0/www/talks/padic_heights/pheight.tex | www/talks/padic_heights/pheight.tex
Author: William A. Stein
1\documentclass[11pt]{article}
2\hoffset=-0.06\textwidth
3\textwidth=1.12\textwidth
4\voffset=-0.06\textheight
5\textheight=1.12\textheight
6\bibliographystyle{amsalpha}
7\include{macros}
8\renewcommand{\set}{\leftarrow}
9\title{An Algorithm for Computing $p$-Adic Heights Using Monsky-Washnitzer Cohomology}
10\date{Notes for a Talk at MIT on 2004-10-15}
11\author{William Stein}
12\renewcommand{\E}{\mathbb{E}}
13\begin{document}
14\maketitle
15%\tableofcontents
16
17Let $E$ be an elliptic curve over~$\Q$ and suppose
18$$P=(x,y)=\left(\frac{a}{d^2},\frac{b}{d^3}\right)\in E(\Q),$$ with $a,b,d\in\Z$ and
19$\gcd(a,d)=\gcd(b,d)=1$. The
20\defn{naive height} of $P$ is
21$$\tilde{h}(P) = \log\max\{|a|,d^2\},$$
22and the \defn{canonical height} of $P$ is
$$h(P) = \lim_{n\to\infty} \frac{h(2^n P)}{4^n}.$$
26This definition is not good for computation, because
27$2^n P$ gets huge very quickly, and computing
28$2^n P$ exactly, for~$n$ large, is not reasonable.
29%The canonical height is quadratic, in the sense that
30%$h(mP) = m^2 h(P)$ for all integer $m$.
31
32In \cite[\S3.4]{cremona:algs}, Cremona describes an efficient method
33(due mostly to Silverman) for computing $h(P)$. One defines
34\defn{local heights} $\hat{h}_p:E(\Q)\to\R$, for all primes $p$, and
$\hat{h}_\infty:E(\Q)\to\R$ such that $$h(P) = \hat{h}_\infty(P) + \sum \hat{h}_p(P).$$
37The local heights $\hat{h}_p(P)$ are easy to
38compute explicitly. For example, when $p$ is a prime of good
39reduction, $\hat{h}_p(P) = \max\{0,-\ord_p(x)\}\cdot \log(p)$.
40
41{\em This paper is {\bf NOT} about local heights $\hat{h}_p$, and we will
42not mention them any further.} Instead, this paper is about a canonical global
43$p$-adic height function
44$$h_p : E(\Q)\to\Q_p.$$
45These height functions are genuine height functions; e.g., $h_p$
is a quadratic function, i.e., $h_p(mP) = m^2 h_p(P)$ for all~$m$.
47They appear when defining the $p$-adic regulators that appear in the
48Mazur-Tate $p$-adic analogues of the Birch and Swinnerton-Dyer
49conjecture.
50
51\vspace{3ex}
52\noindent{\bf Acknowledgement:} Barry Mazur, John Tate, Mike Harrison,
53Christian Wuthrich, Nick Katz.
54
55\section{The $p$-Adic Height Pairing}
56Let $E$ be an elliptic curve over~$\Q$ and suppose $p\geq 5$ is a prime
57such that $E$ has good ordinary reduction at $p$. Suppose $P\in E(\Q)$
58is a point that reduces to $0\in E(\F_p)$ and to the connected
59component of $\mathcal{E}_{\F_\ell}$ at all bad primes $\ell$.
60We will define functions $\log_p$, $\sigma$, and $d$ below.
61In terms of these functions, the $p$-adic height of $P$ is
62\begin{equation}\label{eqn:heightdef}
63 h_p(P) = \frac{1}{p}\cdot \log_p\left(\frac{\sigma(P)}{d(P)}\right) \in \Q_p.
64\end{equation}
65The function $h_p$ satisfies $h_p(nP) = n^2 h_p(P)$ for all integers~$n$,
66so it naturally extends to a function on the full Mordell-Weil group $E(\Q)$.
67Setting $$\langle P, Q\rangle = \frac{1}{2}\cdot (h_p(P+Q)-h_p(P)-h_p(Q)),$$
68we obtain a {\em nondegenerate} pairing on
69$E(\Q)_{/\tor}$, and the $p$-adic regulator is the discriminant
70of this pairing (which is well defined up to sign).
71
72Investigations into $p$-adic Birch and Swinnerton-Dyer conjectures for
74pairings, which motivate our interest in computing it to
75high precision.
76
77We now define each of the undefined quantities in
78(\ref{eqn:heightdef}). The function $\log_p:\Q_p^* \to \Q_p$ is the
79unique homomorphism with $\log_p(p)=1$ that extends the homomorphism
80$\log_p:1+p\Z_p \to \Q_p$ defined by the usual power series of $\log(x)$ about $1$. Thus
81if $x\in\Q_p^*$, we can compute $\log_p(x)$ using the formula
82$$\log_p(x) = \frac{1}{p-1}\cdot \log_p(u^{p-1}),$$
where $u = p^{-\ord_p(x)} \cdot x$.
85
86The denominator $d(P)$ is the square root of the denominator of the
87$x$-coordinate of $P$.
88
89The $\sigma$ function is the most mysterious quantity in
90(\ref{eqn:heightdef}), and it turns out the mystery is closely related
91to the difficulty of computing the $p$-adic number $\E_2(E,\omega)$,
92where $\E_2$ is the $p$-adic weight $2$ Eisenstein series. There are
93{\em many} ways to define or characterize $\sigma$, e.g.,
94\cite{mazur-tate:sigma} contains $11$ different characterizations!
95Let $$x(t) = \frac{1}{t^2} + \cdots \in \Z((t))$$
96be the formal power
97series that expresses $x$ in terms of $t=-x/y$ locally near $0\in E$.
Then Mazur and Tate prove there is exactly one function $\sigma(t)\in t\Z_p[[t]]$ and constant $c\in \Q_p$ that satisfy the equation
\begin{equation}\label{eqn:sigmadef}
x(t) + c = -\frac{d}{\omega}\left( \frac{1}{\sigma} \frac{d\sigma}{\omega}\right).
\end{equation}
105This defines $\sigma$, and,
106unwinding the meaning of the expression on the right, it leads to an
107algorithm to compute $\sigma(t)$ to any desired precision,
108which we now sketch.
109
110If we expand (\ref{eqn:sigmadef}), we can view $c$ as a formal
111variable and solve for $\sigma(t)$ as a power series with coefficients
112that are polynomials in $c$. Each coefficient of $\sigma(t)$ must be
113in $\Z_p$, so when there are denominators in the polynomials in $c$,
114we obtain conditions on $c$ modulo powers of $p$. Taking these
115together for many coefficients yields enough scraps of information to
116get $c\pmod{p^n}$, for some small $n$, hence $\sigma(t) \pmod{p^n}$.
117However, this algorithm is {\em extremely inefficient} and its
118complexity is unclear (how many coefficients are needed to compute $c$
119to precision $p^n$?).
120
For the last 15 or 20 years, the above unsatisfactory algorithm has
122been the standard one for computing $p$-adic heights, e.g., when
123investigating $p$-adic analogues of the BSD conjecture.
124\begin{center}
125{\em The situation
126 changed a few weeks ago...}
127\end{center}
128\section{Using Cohomology to Compute $\sigma$}
129Suppose that $E$ is an elliptic curve over $\Q$ given by a Weierstrass equation
$$y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6.$$
133Let $x(t)$ be the formal series as before, and set
134$$\wp(t) = x(t) + (a_1^2 + 4a_2)/12\in\Q((t)).$$
135(The function $\wp$ satisfies $(\wp')^2 = 4\wp^3 - g_2 \wp - g_3$, etc.; it's
136the formal analogue of the usual complex $\wp$-function.)
137In \cite{mazur-tate:sigma}, Mazur and Tate prove that
$$x(t) + c = \wp(t) + \frac{1}{12}\cdot \E_2(E,\omega),$$
141where $\E_2(E,\omega)$ is the value of the Katz $p$-adic
142weight~$2$ Eisenstein series at $(E,\omega)$, and the equality is of
143elements of $\Q_p((t))$. Thus computing $c$ is equivalent
144to computing $\E_2(E,\omega)$.
145
146This summer, Mazur, Tate, and I explored many ideas for computing
147$\E_2(E,\omega)$. Though each was interesting and promising, nothing
led to a better algorithm than just computing $c$ as sketched above.
Perhaps the difficulty of computing $\E_2(E,\omega)$ is somehow at the
150heart of the theory?
151
152Barry wrote to Nick Katz, who fired off the following email:
153
154\subsection{Katz's Email}
155\begin{verbatim}
156Date: Thu, 8 Jul 2004 13:53:13 -0400
157From: Nick Katz <[email protected]>
158Subject: Re: convergence of the Eisenstein series of weight two
159To: [email protected], [email protected]
160Cc: [email protected], [email protected]
161
162It seems to me you want to use the interpretation of P as the
163"direction of the unit root subspace", that should make it fast to
164compute. Concretely, suppose we have a pair (E, \omega) over Z_p, and
165to fix ideas p is not 2 or 3. Then we write a Weierstrass eqn for E,
166y^2 = 4x^3 - g_2x - g_3, so that \omega is dx/y, and we denote by \eta
167the differential xdx/y. Then \omega and \eta form a Z_p basis of
168H^1_DR = H^1_cris, and the key step is to compute the matrix of
169absolute Frobenius (here Z_p linear, the advantage of working over
170Z_p: otherwise if over Witt vectors of an F_q, only \sigma-linear).
171[This calculation goes fast, because the matrix of Frobenius lives
172over the entire p-adic moduli space, and we are back in the glory days
173of Washnitzer-Monsky cohomology (of the open curve E - {origin}).]
174
175 Okay, now suppose we have computed the matrix of Frob in the
176basis \omega, \eta. The unit root subspace is a direct factor, call
177it U, of the H^1, and we know that a complimentary direct factor is
178Fil^1 := the Z_p span of \omega. We also know that Frob(\omega) lies
179in pH^1, and this tells us that, mod p^n, U is the span of
180Frob^n(\eta). What this means concretely is that if we write,
181for each n,
182
183 Frob^n(\eta) = a_n\omega + b_n\eta,
184
185then b_n is a unit (cong mod p to the n'th power of the Hasse
186invariant) and that P is something like the ratio a_n/b_n (up to a
187sign and a factor 12 which i don't recall offhand but which is in my
188Antwerp appendix and also in my "p-adic interp. of real
189anal. Eis. series" paper).
190
191 So in terms of speed of convergence, ONCE you have Frob, you
192have to iterate it n times to calculate P mod p^n. Best, Nick
193\end{verbatim}
194
195\subsection{The Algorithms}
196The following algorithms culminate in an algorithm for computing
197$h_p(P)$ that incorporates Katz's ideas with the discussion elsewhere
198in this paper. I have computed $\sigma$ and $h_p$ in numerous
199cases using the algorithm described below, and using my
implementations of the ``integrality'' algorithm described above
201and also Wuthrich's algorithm, and the results match. The analysis
202of some of the necessary precision is not complete. I also have
203not analyzed the complexity.
204
205The first algorithm computes $\E_2(E,\omega)$.
206\begin{algorithm}{Evaluation of $\E_2(E,\omega)$}\label{alg:e2}
207 Given an elliptic curve over~$\Q$ and prime~$p$, this algorithm
208 computes $\E_2(E,\omega)\in \Q_p$ (to precision $O(p^n)$ say) . We
209 assume that Kedlaya's algorithm is available for computing a
210 presentation of the $p$-adic Monsky-Washnitzer cohomology of
211 $E-\{(0,0)\}$ with Frobenius action.
212\begin{steps}
213\item Let $c_4$ and $c_6$ be the $c$-invariants of a minimal model
214of~$E$. Set
$$a_4\set -\frac{c_4}{2^4\cdot 3}\qquad\text{and}\qquad a_6 \set -\frac{c_6}{2^5\cdot 3^3}.$$
217\item Apply Kedlaya's algorithm to the hyperelliptic curve
218$y^2=x^3 + a_4x + a_6$ (which is isomorphic to $E$) to obtain the matrix
219$M$ of the action of absolute Frobenius
220on the basis
221$$\omega=\frac{dx}{y}, \qquad \eta=\frac{xdx}{y}$$
222to precision $O(p^n)$. (We view $M$ as acting
223from the left.)
224\item
225We know $M$ to precision $O(p^n)$.
226Compute the $n$th power of $M$ and let
227$\vtwo{a}{b}$ be the second column of $M^n$.
228Then $\Frob^n(\eta) = a\omega + b\eta$
229
230\item Output $M$ and $-12a/b$ (which is $\E_2(E,\omega)$), then terminate.
231\end{steps}
232\end{algorithm}
233
234The next algorithm uses the above algorithm to compute $\sigma(t)$.
235\begin{algorithm}{The Canonical $p$-adic Sigma Function}\label{alg:sigma}
236 Given an elliptic curve~$E$ and a good ordinary prime~$p$, this
237 algorithm computes $\sigma(t)\in\Z_p[[t]]$ modulo $(p^n, t^m)$ for
238 any given positive integers $n,m$. (I have {\em not} figured out
239 exactly what precision each object below must be computed to.)
240\begin{steps}
\item Using Algorithm~\ref{alg:e2}, compute $e_2 = \E_2(E,\omega)\in \Z_p$ to precision $O(p^n)$.
243\item Compute the formal power series $x = x(t) \in \Q[[t]]$
244 associated to the formal group of $E$ to precision $O(t^m)$.
245\item Compute the formal logarithm $z(t) \in \Q((t))$ to precision
246$O(t^m)$ using that $\ds z(t) = \int \frac{dx/dt}{(2y(t)+a_1x(t) + a_3)},$
247where $x(t)=t/w(t)$ and $y(t)=-1/w(t)$ are the formal $x$
248and $y$ functions, and $w(t)$ is given by the explicit inductive
249formula in \cite[Ch.~7]{silverman:aec}. (Here $t=-x/y$ and $w=-1/y$ and
250we can write $w$ as a series in $t$.)
\item Using a power series ``reversion'' (functional inverse)
252 algorithm (see e.g., Mathworld), find the power series $F(z)\in\Q[[z]]$ such that
253 $t=F(z)$. Here $F$ is the reversion of $z$, which exists because
254 $z(t) = t + \cdots$.
255\item Set $\wp(t) \set x(t) + (a_1^2 + 4a_2)/12 \in \Q[[t]]$ (to precision
256$O(t^m)$), where the
257 $a_i$ are the coefficients of the Weierstrass equation of $E$.
258Then compute the series $\wp(z) = \wp(F(z))\in \Q((z))$.
259\item Set $\ds g(z)\set \frac{1}{z^2} - \wp(z) + \frac{e_2}{12}\in\Q_p((z))$.
260(Note:
261 The theory suggests the last term should be $-e_2/12$ but the calculations do not
262 work unless I use the above formula. Maybe there are two
263 normalizations of $E_2$ in the literature?)
264\item Set
$\ds \sigma(z) \set z\cdot \exp\left(\int \int g(z) \dz \dz\right) \in \Q_p[[z]]$.
267\item Set $\sigma(t) \set \sigma(z(t))\in t\cdot \Z_p[[t]]$, where $z(t)$
268is the formal logarithm computed above. Output $\sigma(t)$
269and terminate.
270\end{steps}
271\end{algorithm}
272
273\begin{remark}
274 The trick of changing from $\wp(t)$ to $\wp(z)$ is essential so that
275 we can solve a certain differential equation using just operations
276 with power series.
277\end{remark}
278
279The final algorithm uses $\sigma(t)$ to compute the $p$-adic height.
280\begin{algorithm}{$p$-adic Height}
281Given an elliptic curve~$E$ over $\Q$, a good ordinary prime~$p$,
282and an element $P\in E(\Q)$, this algorithm computes the
283$p$-adic height $h_p(P) \in \Q_p$ to precision $O(p^n)$.
(I will ignore the precision below, though this must not be ignored for the final version of this paper.)
286\begin{steps}
287\item{}[Prepare Point] Compute an integer $m$ such that
288$mP$ reduces to $0\in E(\F_p)$ and to the connected
289component of $\mathcal{E}_{\F_\ell}$ at all bad primes $\ell$.
290For example,~$m$ could be the least common multiple of the Tamagawa numbers
291of $E$ and $\#E(\F_p)$. Set $Q\set mP$ and write $Q=(x,y)$.
292\item{}[Denominator] Let $d$ be the positive integer square root of the
293denominator of $x$.
294\item{}[Compute $\sigma$] Compute $\sigma(t)$ using
295 Algorithm~\ref{alg:sigma}, and set $s \set \sigma(-x/y) \in \Q_p$.
296\item{}[Logs] Compute
297$\ds h_p(Q) \set \frac{1}{p}\log_p\left(\frac{s}{d}\right)$, and
298$\ds h_p(P) \set \frac{1}{m^2} \cdot h_p(Q)$. Output $h_p(P)$ and terminate.
299\end{steps}
300\end{algorithm}
301
302\section{Future Directions}
303
Suppose $E_t$ is an elliptic curve over $\Q(t)$. It might be
extremely interesting to obtain a formula for $\E_2(E_t)$ as something
306like (?) a power series in $\Q_p[[t]]$. This might shed light on the
307analytic behavior of the $p$-adic modular form $\E_2$, and on Tate's
308recent surprising experimental observations about the behavior of the
309$(1/j)$-expansion of~$\E_2 E_4/E_6$.
310
311It would also be interesting to do yet more computations in support of
312$p$-adic analogues of the BSD conjectures of \cite{mtt}, especially
313when $E/\Q$ has large rank. Substantial theoretical work has been
314done toward these $p$-adic conjectures, and this work may be useful to
315algorithms for computing information about Shafarevich-Tate and Selmer
316groups of elliptic curves. For example, in \cite{pr:exp}, Perrin-Riou
317uses her results about the $p$-adic BSD conjecture in the
318supersingular case to prove that $\Sha(E/\Q)[p]=0$ for certain~$p$ and
319elliptic curves~$E$ of rank $>1$, for which the work of Kolyvagin and
320Kato does not apply. Mazur and Rubin (with my computational input)
321are also obtaining results that could be viewed as fitting into this
322program.
323
324I would like to optimize the implementation of the algorithm.
325Probably the most time-consuming step is computation of
326$\E_2(E,\omega)$ using Kedlaya's algorithm. My current implementation
327uses Michael Harrison's implementation of Kedlaya's algorithm for
328$y^2=f(x)$, with $f(x)$ of arbitrary degree. Perhaps implementing
329just what is needed for $y^2=x^3+ax+b$ might be more efficient. Also,
330Harrison tells me his implementation isn't nearly as optimized as it
331might be.
332
333It might be possible to compute $p$-adic heights on Jacobians of
334hyperelliptic curves.
335
336Formulate everything above over number fields, and extend to the case
338
339Supersingular reduction?
340
341\section{Examples}
342The purpose of this section is to show you how to use the MAGMA package
343I wrote for computing with $p$-adic heights, and give you a sense
344for how fast it is.
345
346\begin{verbatim}
347> function EC(s) return EllipticCurve(CremonaDatabase(),s); end function;
348> E := EC("37A");
350> P := good_ordinary_primes(E,100); P;
351[ 5, 7, 11, 13, 23, 29, 31, 41, 43, 47, 53, 59, 61, 67, 71, 73,
35279, 83, 89, 97 ]
353> for p in P do time print p, regulator(E,p,10); end for;
3545 22229672 + O(5^11)
355Time: 0.040
3567 317628041 + O(7^11)
357...
35889 15480467821870438719 + O(89^10)
359Time: 1.190
36097 -11195795337175141289 + O(97^10)
361Time: 1.490
362
363> E := EC("389A");
364> P := good_ordinary_primes(E,100); P;
365[ 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
36667, 71, 73, 79, 83, 89, 97 ]
367> for p in P do time print p, regulator(E,p,10); end for;
3685 -3871266 + O(5^11)
369Time: 0.260
3707 483898350 + O(7^11)
371...
37289 9775723521676164462 + O(89^10)
373Time: 1.330
37497 -13688331881071698338 + O(97^10)
375Time: 1.820
376
377> E := EC("5077A");
378> P := good_ordinary_primes(E,100); P;
379[ 5, 7, 11, 13, 17, 19, 23, 29, 31, 43, 47, 53, 59, 61, 67, 71,
38073, 79, 83, 89, 97 ]
381> for p in P do time print p, regulator(E,p,10); end for;
3825 655268*5^-2 + O(5^7)
383Time: 0.800
3847 -933185758 + O(7^11)
385...
38689 -3325438607428779200 + O(89^10)
387Time: 1.910
38897 -5353586908063282167 + O(97^10)
389Time: 2.010
390
391--------
392
393> E := EC("37A");
394> time regulator(E,5,50);
395115299522541340178416234094637464047 + O(5^51)
396Time: 1.860
397> Valuation(115299522541340178416234094637464047 - 22229672,5);
3989
399
400> time regulator(E,97,50);
401-5019271523950156862996295340254565181870308222348277984940964806\
402 97957622583267105973403430183075091 + O(97^50)
403Time: 31.7
404\end{verbatim}
405
406
407\bibliography{biblio}
408\end{document}
http://www.cut-the-knot.org/arithmetic/combinatorics/PascalTriangleProperties.shtml | # Patterns in Pascal's Triangle
Pascal's Triangle conceals a huge number of various patterns, many discovered by Pascal himself and even known before his time.
#### Pascal's Triangle is symmetric
In terms of the binomial coefficients, $C^{n}_{m} = C^{n}_{n-m}.$ This follows from the formula for the binomial coefficient
$\displaystyle C^{n}_{m}=\frac{n!}{m!(n-m)!}.$
It is also implied by the construction of the triangle, i.e., by the interpretation of the entries as the number of ways to get from the top to a given spot in the triangle.
Some authors even considered a symmetric notation (in analogy with trinomial coefficients)
$\displaystyle C^{n}_{m}={n \choose m\space\space s}$
where $s = n - m.$
#### The sum of entries in row $n$ equals $2^{n}$
This is Pascal's Corollary 8 and can be proved by induction. The main point in the argument is that each entry in row $n,$ say $C^{n}_{k}$ is added to two entries below: once to form $C^{n + 1}_{k}$ and once to form $C^{n + 1}_{k+1}$ which follows from Pascal's Identity:
$C^{n + 1}_{k} = C^{n}_{k - 1} + C^{n}_{k},$
$C^{n + 1}_{k+1} = C^{n}_{k} + C^{n}_{k+1}.$
For this reason, the sum of entries in row $n + 1$ is twice the sum of entries in row $n.$ (This is Pascal's Corollary 7.)
As a consequence, we have Pascal's Corollary 9: In every arithmetical triangle each base exceeds by unity the sum of all the preceding bases. In other words, $2^{n} - 1 = 2^{n-1} + 2^{n-2} + ... + 1.$
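A short computational illustration (added here, not part of the original article), building rows by the additive rule and checking Corollary 8:

```python
def pascal_rows(n):
    """Yield rows 0..n of Pascal's triangle using the additive rule."""
    row = [1]
    for _ in range(n + 1):
        yield row
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

for k, row in enumerate(pascal_rows(8)):
    assert sum(row) == 2**k    # Corollary 8: the entries of row n sum to 2^n
print("row sums equal powers of 2")
```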
#### There are well known sequences of numbers
Some of those sequences are better observed when the numbers are arranged in Pascal's form where because of the symmetry, the rows and columns are interchangeable.
The first row contains only $1$s: $1, 1, 1, 1, \ldots$
The second row consists of all counting numbers: $1, 2, 3, 4, \ldots$
The third row consists of the triangular numbers: $1, 3, 6, 10, \ldots$
The fourth row consists of tetrahedral numbers: $1, 4, 10, 20, 35, \ldots$
The fifth row contains the pentatope numbers: $1, 5, 15, 35, 70, \ldots$
"Pentatope" is a recent term. Regarding the fifth row, Pascal wrote that ... since there are no fixed names for them, they might be called triangulo-triangular numbers. Pentatope numbers exists in the $4D$ space and describe the number of vertices in a configuration of $3D$ tetrahedrons joined at the faces.
In the standard configuration, the numbers $C^{2n}_{n}$ belong to the axis of symmetry. Numbers $\frac{1}{n+1}C^{2n}_{n}$ are known as Catalan numbers.
Every two successive triangular numbers add up to a square: $(n - 1)n/2 + n(n + 1)/2 = n^{2}.$
#### Hockey Stick Pattern
In Pascal's words (and with a reference to his arrangement), In every arithmetical triangle each cell is equal to the sum of all the cells of the preceding row from its column to the first, inclusive (Corollary 2). In modern terms,
(1)
$C^{n + 1}_{m} = C^{n}_{m} + C^{n - 1}_{m - 1} + \ldots + C^{n - m}_{0}.$
Note that on the right, the two indices in every binomial coefficient remain the same distance apart: $n - m = (n - 1) - (m - 1) = \ldots$ This allows rewriting (1) in a little different form:
(1')
$C^{m + r + 1}_{m} = C^{m + r}_{m} + C^{m + r - 1}_{m - 1} + \ldots + C^{r}_{0}.$
The latter form is amenable to easy induction in $m.$ For $m = 0,$ $C^{r + 1}_{0} = 1 = C^{r}_{0},$ the only term on the right. Assuming (1') holds for $m = k,$ let $m = k + 1:$
\begin{align} C^{k + r + 2}_{k + 1} &= C^{k + r + 1}_{k + 1} + C^{k + r + 1}_{k}\\ &= C^{k + r + 1}_{k + 1} + C^{k + r}_{k} + C^{k + r - 1}_{k - 1} + \ldots + C^{r}_{0}. \end{align}
Naturally, a similar identity holds after swapping the "rows" and "columns" in Pascal's arrangement: In every arithmetical triangle each cell is equal to the sum of all the cells of the preceding column from its row to the first, inclusive (Corollary 3).
(2)
$C^{n + 1}_{m + 1} = C^{n}_{m} + C^{n - 1}_{m} + \ldots + C^{0}_{m},$
where the second index is fixed.
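Both identities are easy to test by brute force; the sketch below is my own (using Python's math.comb), not part of the original page:

```python
from math import comb

# (1): C(n+1, m) = C(n, m) + C(n-1, m-1) + ... + C(n-m, 0)
def corollary_2(n, m):
    return sum(comb(n - j, m - j) for j in range(m + 1))

# (2): C(n+1, m+1) = C(n, m) + C(n-1, m) + ... + C(m, m)  (terms with k < m vanish)
def corollary_3(n, m):
    return sum(comb(k, m) for k in range(m, n + 1))

for n in range(2, 12):
    for m in range(n + 1):
        assert corollary_2(n, m) == comb(n + 1, m)
        assert corollary_3(n, m) == comb(n + 1, m + 1)
print("identities (1) and (2) check out on the tested range")
```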
#### Parallelogram Pattern
(3)
$C^{n + 1}_{m} - 1 = \sum C^{k}_{j},$
where $k \lt n,$ $j \lt m.$ In Pascal's words: In every arithmetic triangle, each cell diminished by unity is equal to the sum of all those which are included between its perpendicular rank and its parallel rank, exclusively (Corollary 4). This is shown by repeatedly unfolding the first term in (1).
#### Fibonacci Numbers
If we arrange the triangle differently, it becomes easier to detect the Fibonacci sequence:
The successive Fibonacci numbers are the sums of the entries on sw-ne diagonals:
\begin{align} 1 &= 1\\ 1 &= 1\\ 2 &= 1 + 1\\ 3 &= 1 + 2\\ 5 &= 1 + 3 + 1\\ 8 &= 1 + 4 + 3\\ 13 &= 1 + 5 + 6 + 1 \end{align}
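A short Python check of the diagonal sums (again a sketch of my own, not from the original article): summing the shallow sw-ne diagonals reproduces the Fibonacci numbers.

```python
from math import comb

# Sum along a shallow (sw-ne) diagonal: the n-th diagonal sum is sum_k C(n-k, k).
def diagonal_sum(n):
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

fib = [1, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])

assert [diagonal_sum(n) for n in range(15)] == fib
print([diagonal_sum(n) for n in range(10)])   # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
```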
#### The Star of David
The following two identities between binomial coefficients are known as "The Star of David Theorems":
$C^{n-1}_{k-1}\cdot C^{n}_{k+1}\cdot C^{n+1}_{k} = C^{n-1}_{k}\cdot C^{n}_{k-1}\cdot C^{n+1}_{k+1}$ and
$\mbox{gcd}(C^{n-1}_{k-1},\,C^{n}_{k+1},\,C^{n+1}_{k}) = \mbox{gcd}(C^{n-1}_{k},\,C^{n}_{k-1},\, C^{n+1}_{k+1}).$
The reason for the moniker becomes transparent on observing the configuration of the coefficients in the Pascal Triangle.
Tony Foster observed that with $k=1,$
$\displaystyle C^{n-2}_{k-1}\cdot C^{n-1}_{k+1}\cdot C^{n}_{k}=\frac{(n-2)(n-1)n}{2}=C^{n-2}_{k}\cdot C^{n-1}_{k-1}\cdot C^{n}_{k+1}$
so that
\displaystyle\begin{align} \prod_{m=1}^{N}\bigg[C^{3m-1}_{0}\cdot C^{3m}_{2}\cdot C^{3m+1}_{1} + C^{3m-1}_{1}\cdot C^{3m}_{0}\cdot C^{3m+1}_{2}\bigg] &= \prod_{m=1}^{N}(3m-2)(3m-1)(3m)\\ &= \prod_{m=1}^{3N}m = (3N)! \end{align}
#### Not without $e$
Harlan Brothers has recently discovered the fundamental constant $e$ hidden in the Pascal Triangle; he did this by taking products - instead of sums - of all elements in a row:
If $S_{n}$ is the product of the terms in the $n$th row, then, as $n$ tends to infinity,
$\displaystyle\lim_{n\rightarrow\infty}\frac{S_{n-1}S_{n+1}}{S_{n}^{2}} = e.$
I placed the derivation into a separate file.
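The limit is also easy to see numerically; the snippet below is my own sketch, not Brothers' derivation:

```python
from math import comb

def row_product(n):
    """Product of the entries in row n of Pascal's triangle."""
    p = 1
    for k in range(n + 1):
        p *= comb(n, k)
    return p

for n in (10, 100, 1000):
    ratio = row_product(n - 1) * row_product(n + 1) / row_product(n) ** 2
    print(n, ratio)
# The exact ratio is ((n+1)/n)**n, so the printed values climb toward e = 2.71828...
```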
#### Catalan Numbers
Tony Foster's post at the CutTheKnotMath facebook page pointed out the pattern that conceals the Catalan numbers:
I placed an elucidation into a separate file.
#### Sums of the Binomial Reciprocals
A post at the CutTheKnotMath facebook page by Daniel Hardisky brought to my attention the following pattern:
I placed a derivation into a separate file.
#### Squares
As I mentioned earlier, the sum of two consecutive triangular numbers is a square: $(n - 1)n/2 + n(n + 1)/2 = n^{2}.$ Tony Foster brought up sightings of a whole family of identities that lead up to a square.
For example,
$C^{n+2}_{3} - C^{n}_{3} = n^{2}.$
and also
$C^{n+3}_{4} - C^{n+2}_{4} - C^{n+1}_{4} + C^{n}_{4} = n^{2}.$
I placed a derivation into a separate file.
#### Squares of the Binomials
$\displaystyle\sum_{k=0}^{n}(C^{n}_{k})^{2}=C^{2n}_{n}.$
I placed a derivation into a separate file.
#### Cubes
Indefatigable Tony Foster found cubes in Pascal's triangle in a pattern that he rightfully refers to as the Star of David - another appearance of that simile in Pascal's triangle:
$\displaystyle n^{3}=\bigg[C^{n+1}_{2}\cdot C^{n-1}_{1}\cdot C^{n}_{0}\bigg] + \bigg[C^{n+1}_{1}\cdot C^{n}_{2}\cdot C^{n-1}_{0}\bigg] + C^{n}_{1}.$
Here's his original graphics that explains the designation:
There is a second pattern - the "Wagon Wheel" - that reveals the square numbers.
I placed a derivation into a separate file.
### $\pi$ in Pascal
This is due to Daniel Hardisky
$\displaystyle\pi = 3+\frac{2}{3}\bigg(\frac{1}{C^{4}_{3}}-\frac{1}{C^{6}_{3}}+\frac{1}{C^{8}_{3}}-\cdots\bigg).$
I placed a derivation into a separate file.
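A quick numerical check of the series (my own sketch): the denominators are $C^{2n+2}_{3}$ with alternating signs.

```python
from math import comb, pi

# Partial sums of 3 + (2/3)(1/C(4,3) - 1/C(6,3) + 1/C(8,3) - ...)
total, sign = 3.0, 1.0
for n in range(1, 200):
    total += sign * (2.0 / 3.0) / comb(2 * n + 2, 3)
    sign = -sign
print(total, pi)   # the partial sums settle on pi to about seven decimal places here
```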
### Products of Binomial Coefficients
For integer $n\gt 1,\;$ let $\displaystyle P(n)=\prod_{k=0}^{n}{n\choose k}\;$ be the product of all the binomial coefficients in the $n\text{-th}\;$ row of the Pascal's triangle. Then
$\displaystyle\frac{\displaystyle (n+1)!P(n+1)}{P(n)}=(n+1)^{n+1}.$
To illustrate,
I placed a derivation into a separate file.
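A brute-force confirmation of the identity (mine, not part of the page):

```python
from math import comb, factorial

def P(n):
    """Product of all binomial coefficients in row n."""
    p = 1
    for k in range(n + 1):
        p *= comb(n, k)
    return p

for n in range(2, 12):
    assert factorial(n + 1) * P(n + 1) // P(n) == (n + 1) ** (n + 1)
print("the identity holds for n = 2..11")
```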
Eventually, Tony Foster found an extension to other integer powers:
I placed a derivation into a separate file.
### References
1. J. H. Conway and R. K. Guy, The Book of Numbers, Springer-Verlag, NY, 1996.
2. H. Eves, Great Moments in Mathematics After 1650, MAA, 1983
3. Great Books of the Western World, v 33, Encyclopaedia Britannica, Inc., 1952.
4. P. Hilton, D. Holton, J. Pederson, Mathematical Reflections, Springer Verlag, 1997
5. A. R. Kanga, Number Mosaics, World Scientific Co., 1995.
6. R. Graham, D. Knuth, O. Patashnik, Concrete Mathematics, 2nd edition, Addison-Wesley, 1994.
7. J. A. Paulos, Beyond Numeracy, Vintage Books, 1992
8. H.-O. Peitgen et al, Chaos and Fractals: New Frontiers of Science , Springer, 2nd edition, 2004
9. D. E. Smith, History of Mathematics, Dover, 1968
10. D. E. Smith, A Source Book in Mathematics, Dover, 1959
11. D. Wells, The Penguin Dictionary of Curious and Interesting Numbers, Penguin Books, 1987
https://mathoverflow.net/questions/249300/what-is-the-motivation-behind-the-characteristic-variety-of-a-d-module-and-what | What is the motivation behind the characteristic variety of a D-module and what does it's geometry tell me about the D-module?
Given a smooth algebraic variety $X$, and an $\mathcal{M}\in \text{Mod}(D_X)$, there is the characteristic variety of $\mathcal{M}$ defined as $$\text{Char}(\mathcal{M}):= V\left(\sqrt{Ann(\mathcal{M})}\right) \subset T^*X$$ These varieties have a number of nice properties
1. Their dimension is equal to the dimension of the underlying $D_X$-module
2. Their dimension is greater than or equal to the dimension of $X$
3. Behaves well with restriction to open subsets of $X$
4. They behave well with respect to exact sequences of coherent $D_X$-modules
5. They are coisotropic subvarieties of $T^*X$
6. They are lagrangian iff the underlying D-module is holonomic
Unfortunately, it's not clear why these varieties are useful and what their motivation for construction is.
• I don't recall the details (I am not an analyst), but the motivation comes primarily from distribution theory; the characteristic variety of a holonomic D-module (which as you know is cyclic, generated by a distribution) is related to the singular spectrum of the distribution. I would go have a look at the original work of Kashiwara and Saito, it might be enlightening. – Ketil Tveiten Sep 8 '16 at 8:15
• You may also think of it as an invariant of the PDE, for example the classical distinction between elliptic parabolic or hyperbolic PDE can be read from the characteristic variety. – Michael Bächtold Sep 8 '16 at 18:12
Here’s one way to think about them: they tell you how far a $$D$$-module is from being an integrable connection (i.e. finitely generated over $$\mathcal O$$). Here’s what I mean: let $$M$$ be a $$D$$-module on $$X$$. Then $$M$$ is an integrable connection in a neighborhood of a point $$x\in X$$ if and only if $$\operatorname{Char}(M)\cap T^*_x X$$ is zero (i.e. is contained in the zero section).
I also want to correct your point number 1. The dimension of a $$D$$-module is by definition the dimension of its characteristic variety.
http://mathhelpforum.com/calculus/4900-bisection-root-algorithm-print.html | # bisection root algorithm
• Aug 13th 2006, 08:36 AM
bobby77
find the root of f(x)=2x^3-x^2+x-1=0 to 3 decimal places using interval bisection.
• Aug 13th 2006, 08:57 AM
CaptainBlack
1 Attachment(s)
Quote:
Originally Posted by bobby77
find the root of f(x)=2x^3-x^2+x-1=0 to 3 decimal places using interval bisection.
First sketch the graph to locate the approximate position of the root.
See attachment:
Now we see the root is located at about $x=0.7$. Now we need
a pair of values $x_{lo}, x_{hi}$ with $x_{lo}<x_{hi}$ such that
$sgn(f(x_{lo})) \ne sgn(f(x_{hi}))$.
It looks as though $0.6, 0.8$ will do for these (check).
Now we need a table:
Code:
```
 x_lo   x_hi   f(x_lo)   f(x_hi)   x_mid   f(x_mid)
 0.6    0.8    -0.328     0.184     0.7    -0.104
```
This is the first row, the next row is obtained by replacing $x_{lo}$ or $x_{hi}$ by $x_{mid}$, so that we
still bracket the zero, thus:
Code:
```
 x_lo   x_hi   f(x_lo)   f(x_hi)   x_mid    f(x_mid)
 0.6    0.8    -0.328     0.184     0.7     -0.104
 0.7    0.8    -0.104     0.184     0.75     0.0312
 0.7    0.75   -0.104     0.0312    0.725   -0.0385
```
and so on until $|x_{hi}-x_{lo}|<0.001$.
RonL
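Not part of the original thread, but the same procedure is easy to automate; here is a short Python sketch (function names are my own) that reproduces the answer to three decimal places:

```python
def f(x):
    return 2 * x**3 - x**2 + x - 1

def bisect(f, x_lo, x_hi, tol=0.001):
    """Interval bisection: f(x_lo) and f(x_hi) must have opposite signs."""
    assert f(x_lo) * f(x_hi) < 0
    while x_hi - x_lo >= tol:
        x_mid = 0.5 * (x_lo + x_hi)
        if f(x_lo) * f(x_mid) <= 0:
            x_hi = x_mid      # the sign change is in [x_lo, x_mid]
        else:
            x_lo = x_mid      # the sign change is in [x_mid, x_hi]
    return 0.5 * (x_lo + x_hi)

print(round(bisect(f, 0.6, 0.8), 3))   # 0.739 to three decimal places
```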
http://mathhelpforum.com/statistics/53135-word-combinations-problem.html | # Thread: word combinations problem
1. ## word combinations problem
How many ways can 4 people be chosen from a group of 9? Tell whether the situation is a permutation or combination. Then solve.
2. Originally Posted by biggestbernard1
How many ways can 4 people be chosen from a group of 9? Tell whether the situation is a permutation or combination. Then solve.
this is just "9 choose 4", or in other words, ${9 \choose 4} = _9C_4$. Since the order in which the four people are chosen does not matter, this is a combination, and $_9C_4 = 126$.
https://simple.wikipedia.org/wiki/Damping | Damping
Graph of a damped vibratory deflection y.
Damped spring.
In physics, damping is any effect that tends to reduce the amplitude of vibrations.[1]
In mechanics, internal friction may be one of the causes of such a damping effect. For many purposes the damping force Ff can be modeled as being proportional to the velocity v of the object:
${\displaystyle F_{\mathrm {f} }=-cv\,,}$
where c is the damping coefficient, given in units of Newton-seconds per meter.
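As a rough illustration (not part of the article, with arbitrary parameter values), the following Python sketch adds this damping force to a spring force and integrates the motion, showing the amplitude decay:

```python
# Arbitrary parameter values; semi-implicit Euler keeps the oscillation stable.
m, k, c = 1.0, 4.0, 0.2          # mass, spring constant, damping coefficient
x, v, dt = 1.0, 0.0, 0.001

peaks = []
prev_v = v
for _ in range(200000):
    a = (-k * x - c * v) / m     # spring force plus damping force F_f = -c*v
    v += a * dt
    x += v * dt
    if prev_v > 0 >= v:          # velocity falls through zero: a maximum of x
        peaks.append(x)
    prev_v = v

print(peaks[:5])                 # successive maxima shrink: the amplitude is damped
```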
References
1. Tongue, Benson, Principles of Vibration, Oxford University Press, 2001, ISBN 0-195-142462
http://al-roomi.org/economic-dispatch/15-units-system | # IEEE 15-Units ELD Test System
I. Introduction:
$$\bullet$$ This system contains fifteen generating units with a load demand of 2630 MW. Some researchers could evaluate their proposed optimization algorithms with different load demands.
$$\bullet$$ The fuel-cost function of this test system is modeled using the quadratic cost function as follows:
$$C_i\left(P_i\right) = a_i + b_i P_i + c_i P^2_i$$ .......... $$(1)$$
where $$a_i$$, $$b_i$$, and $$c_i$$ are the function coefficients and tabulated in Table 1.
$$\bullet$$ The network losses are modeled, by using Kron's loss formula, as follows:
$$P_L = \sum_{i=1}^{n} \sum_{j=1}^{n} P_i B_{ij} P_j + \sum_{i=1}^{n} B_{0i} P_i + B_{00}$$ .......... $$(2)$$
where $$B_{ij}$$, $$B_{0i}$$, and $$B_{00}$$ are called loss coefficients (or just B-coefficients) and listed below:
$$\bullet$$ If the ramp-rate limits are modeled as constraints in the design function, then the feasible search space of each unit is determined through the following equation:
$$\text{max}\left(P_i^{min},P_i^{now}-R_i^{down}\right) \leqslant P_i^{new} \leqslant \text{min}\left(P_i^{max},P_i^{now}+R_i^{up}\right)$$ .......... $$(3)$$
where $$P_i^{now}$$ and $$P_i^{new}$$ are respectively the existing and new power output of the $$i$$th generator. $$R_i^{down}$$ and $$R_i^{up}$$ are respectively the downward and upward ramp-rate limits, which are tabulated in Table 2.
$$\bullet$$ Also, if the prohibited operating zone phenomenon is considered in the design function of the ELD problem, then the fuel-cost curves will have some discontinuities. This constraint can be modeled as follows:
\begin{align} P_i^{min} & \leqslant P_i \leqslant P_{i,j}^L \\ P_{i,j}^U & \leqslant P_i \leqslant P_{i,j+1}^L \text{ .......... $$(4)$$}\\ P_{i,\varkappa_i}^U & \leqslant P_i \leqslant P_i^{max}\end{align}
where $$P_{i,j}^L$$ and $$P_{i,j}^U$$ are respectively the lower and upper bounds of the $$j$$th prohibited operating zone on the fuel-cost curve of the $$i$$th unit, and $$\varkappa_i$$ is the total number of prohibited operating zones of the $$i$$th unit. Based on that, Table 2 is updated to Table 3.
$$\bullet$$ In [6] and [11], the valve-point loading effects are considered in the cost function, so (1) becomes:
$$C_i\left(P_i\right) = a_i + b_i P_i + c_i P^2_i + \left|d_i \times \sin\left[e_i \times \left(P_i^{min} - P_i\right) \right]\right|$$ .......... $$(5)$$
where $$d_i$$ and $$e_i$$ are the coefficients of the valve-point loading effects. Thus, Table 3 is expanded to Table 4.
$$\bullet$$ The valve-point loading effects can be relaxed if either $$d$$ or $$e$$ of all units are set to zero.
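As an illustration only, the Python sketch below evaluates the valve-point cost of eq. (5) and Kron's losses of eq. (2) for a candidate dispatch. The coefficient values are placeholders of my own, since Tables 1-4 and the B-coefficients are not reproduced in the text above; the real numbers come from the data file.

```python
import math

# Placeholder coefficients for just three of the fifteen units (illustrative only).
units = [
    # (a, b, c, d, e, P_min, P_max)
    (500.0, 10.0, 0.0003, 100.0, 0.08, 150.0, 455.0),
    (450.0, 10.5, 0.0002, 100.0, 0.08, 150.0, 455.0),
    (300.0,  9.0, 0.0011,  90.0, 0.08,  20.0, 130.0),
]

def fuel_cost(P):
    """Total fuel cost of a dispatch P: eq. (1) plus the valve-point term of eq. (5)."""
    total = 0.0
    for (a, b, c, d, e, p_min, p_max), p in zip(units, P):
        if not (p_min <= p <= p_max):
            raise ValueError("unit output outside its limits")
        total += a + b * p + c * p * p + abs(d * math.sin(e * (p_min - p)))
    return total

def network_loss(P, B, B0, B00):
    """Kron's loss formula, eq. (2); B, B0, B00 come from the loss-coefficient data."""
    n = len(P)
    quad = sum(P[i] * B[i][j] * P[j] for i in range(n) for j in range(n))
    return quad + sum(B0[i] * P[i] for i in range(n)) + B00

dispatch = [400.0, 300.0, 100.0]
print(fuel_cost(dispatch))
print(network_loss(dispatch, [[1e-5] * 3] * 3, [0.0] * 3, 0.0))
```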
II. Files:
$$\bullet$$ System Data (Text Format) [Download]
III. References (Some selected papers that use this test system):
[1] F. N. Lee and A. M. Breipohl, “Reserve Constrained Economic Dispatch with Prohibited Operating Zones,” IEEE Trans. Power Syst., vol. 8, no. 1, pp. 246–254, 1993.
[2] Z.-L. Gaing, “Particle Swarm Optimization to Solving the Economic Dispatch Considering the Generator Constraints,” IEEE Trans. Power Syst., vol. 18, no. 3, pp. 1187–1195, Aug. 2003.
[3] L. dos S. Coelho and V. C. Mariani, “Improved Differential Evolution Algorithms for Handling Economic Dispatch Optimization with Generator Constraints,” Energy Convers. Manag., vol. 48, no. 5, pp. 1631–1639, Jan. 2007.
[4] K. T. Chaturvedi, M. Pandit, and L. Srivastava, “Self-Organizing Hierarchical Particle Swarm Optimization for Nonconvex Economic Dispatch,” IEEE Trans. Power Syst., vol. 23, no. 3, pp. 1079–1087, Aug. 2008.
[5] C. C. Kuo, “A Novel Coding Scheme for Practical Economic Dispatch by Modified Particle Swarm Approach,” IEEE Trans. Power Syst., vol. 23, no. 4, pp. 1825–1835, Nov. 2008.
[6] G. Shabib, M. A. Gayed, and A. M. Rashwan, “Modified Particle Swarm Optimization for Economic Load Dispatch with Valve-Point Effects and Transmission Losses,” Curr. Dev. Artif. Intell., vol. 2, no. 1, pp. 39–49, 2011.
[7] M. I. Abouheaf, S. Haesaert, W. Lee, and F. L. Lewis, “Q-Learning with Eligibility Traces to Solve Non-Convex Economic Dispatch Problems,” Int. J. Electr. Robot. Electron. Commun. Eng., vol. 6, no. 7, pp. 41–48, 2012.
[8] A. Nazari and A. Hadidi, “Biogeography Based Optimization Algorithm for Economic Load Dispatch of Power System,” Am. J. Adv. Sci. Res., vol. 1, no. 3, pp. 99–105, Sep. 2012.
[9] G. Xiong, D. Shi, and X. Duan, “Multi-Strategy Ensemble Biogeography-Based Optimization for Economic Dispatch Problems,” Appl. Energy, vol. 111, pp. 801–811, Jun. 2013.
[10] K. Zare and T. G. Bolandi, “Modified Iteration Particle Swarm Optimization Procedure for Economic Dispatch Solving with Non-Smooth and Non-Convex Fuel Cost Function,” in 3rd IET International Conference on Clean Energy and Technology (CEAT) 2014, 2014, pp. 1–6.
[11] Hardiansyah, “Modified Differential Evolution Algorithm for Economic Load Dispatch Problem with Valve-Point Effects,” Int. J. Adv. Res. Electr. Electron. Instrum. Eng., vol. 3, no. 11, pp. 13400–13409, Nov. 2014.
https://www.physicsforums.com/threads/lightly-damped-harmonic-oscillator.79617/ | # Lightly Damped Harmonic Oscillator
1. Jun 19, 2005
### e(ho0n3
Question:
(a) Show that the total mechanical energy of a lightly damped harmonic oscillator is
$$E = E_0 e^{-bt/m}$$
where $E_0$ is the total mechanical energy at t = 0.
(b) Show that the fractional energy lost per period is
$$\frac{\Delta E}{E} = \frac{2 \pi b}{m \omega_0} = \frac{2 \pi}{Q}$$
where $\omega_0 = \sqrt{k/m}$ and $Q = m \omega_0 / b$ is called the quality factor or Q value of the system. A larger Q value means the system can undergo oscillations for a longer time.
(a) When the velocity of the oscillator is 0, the total mechanical energy is purely potential energy, $U = 1/2kx^2$. Since I know that $x = Ae^{-bt/(2m)}\cos{\omega't}$ where t is some multiple of $2\pi/\omega'$, then
$$E = \frac{1}{2}kA^2e^{-bt/m}$$
and $E_0 = 1/2kA^2$. Of course, this is only valid when the velocity of the oscillator is 0, but since it is lightly damped the total mechanical energy should be approximately the same when the velocity is > 0. Right?
(b) Using some calculus, I can timidly state that
$$\frac{\Delta E}{\Delta t} \approx \frac{dE}{dt} = -\frac{E_0b}{m}e^{-bt/m}$$
and since $\Delta t = 2\pi / \omega'$ then
$$\frac{\Delta E}{E} = -\frac{b\Delta t}{m} = -\frac{2\pi b}{m\omega'}$$
Since the oscillator is lightly damped, $\omega' \approx \omega_0$. However the result I get is negative. Should it be negative?
2. Jun 19, 2005
### OlderDan
Yes, it should be negative because you calculated the rate of change of energy, which is decreasing with time. The question asked for the fractional energy loss, which is the absolute value of the energy change.
Your calculation of the times at which the energy is all potential is overlooking the fact that the peaks in x do not correspond to the points where the cosine has value 1 because the exponential is also time dependent. However, the times between peaks still satisfy the condition you used, so the result follows subject to the other approximations you made.
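A quick numerical check of the two results (my own sketch with arbitrary parameter values, not part of the original thread): for a high-Q oscillator the fractional energy loss over one period matches 2π/Q closely.

```python
import math

m, k, b = 1.0, 100.0, 0.05       # arbitrary values; b/(m*omega0) << 1, i.e. light damping
omega0 = math.sqrt(k / m)
Q = m * omega0 / b
T = 2 * math.pi / omega0         # one period (omega' ~ omega0 for light damping)

def E(t, E0=1.0):
    return E0 * math.exp(-b * t / m)     # E = E0 exp(-bt/m)

frac_loss = (E(0.0) - E(T)) / E(0.0)
print(frac_loss, 2 * math.pi / Q)        # agree to within about 2% for this Q (= 200)
```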
https://www.quantumdiaries.org/2011/07/22/the-birds-and-the-bs/ | ## View Blog | Read Bio
### The Birds and the Bs
Yesterday marked the beginning of the HEP summer conference season with EPS-HEP 2011, which is particularly exciting since the LHC now has enough luminosity (accumulated data) to start seeing hints of new physics. As Ken pointed out, the Tevatron’s new lower bound on the Bs → μμ decay rate seemed to be a harbinger of things to come (Experts can check out the official paper, the CDF public page, and the excellent summaries by Tommaso Dorigo and Jester.).
Somewhat unfortunately, the first LHCb results on this process do not confirm the CDF excess, though they are not yet mutually exclusive. Instead of delving too much into this particular result, I’d like to give some background to motivate why it’s interesting to those of us looking for new physics. This requires a lesson in “the birds and the Bs”—of course, by this I mean B mesons and the so-called ‘penguin’ diagrams.
## The Bs meson: why it’s special
It's a terrible pun, I know.
A Bs meson is a bound state of a bottom anti-quark and strange quark; it’s sort of like a “molecule” of quarks. There are all sorts of mesons that one could imagine by sticking together different quarks and anti-quarks, but the Bs meson and its lighter cousin, the Bd meson, are particularly interesting characters in the spectrum of all possible mesons.
The reason is that both the Bs and the Bd are neutral particles, and it turns out that they mix quantum mechanically with their antiparticles, which we call the anti-Bs and anti-Bd. This mixing is the exact same kind of flavor phenomenon that we described when we mentioned “Neapolitan” neutrinos and is analogous to the mixing of chiralities in a massive fermion. Recall that properties like “bottom-ness” or “strangeness” are referred to as flavor. Going from a Bs to an anti-Bs changes the “number of bottom quarks” from -1 to +1 and the “number of strange quarks” from +1 to -1, so such effects are called flavor-changing.
To help clarify things, here’s an example diagram that encodes this quantum mixing:
The ui refers to any up-type quark.
Any neutral meson can mix—or “oscillate”—into its antiparticle, but the B mesons are special because of their lifetime. Recall that mesons are unstable and decay, so unlike neutrinos, we can’t just wait for a while to see if they oscillate into something interesting. Some mesons live for too long and their oscillation phenomena get ‘washed out’ before we get to observe them. Other mesons don’t live long enough and decay before they have a chance to oscillate at all. But B mesons—oh, wonderful Goldilocks B mesons—they have a lifetime and oscillation time that are roughly of the same magnitude. This means that by measuring their decays and relative decay rates we can learn about how these mesons mix, i.e. we can learn about the underlying flavor structure of the Standard Model.
Historical remark: The Bd meson is special for another reason: by a coincidence, we can produce them rather copiously. The reason is that the Bd meson mass just happens to be just under half of the mass of the Upsilon 4S particle, ϒ(4S), which just happens to decay into a BdBd pair. Thus, by the power of resonances, we can collide electrons and positrons to produce lots of upsilons, which then decay in to lots of B mesons. For the past decade flavor physics focused around these ‘B factories,’ mainly the BaBar detector at SLAC and Belle in Japan. BaBar has since been retired, while Belle is being upgraded to “Super Belle.” For the meanwhile, the current torch-bearer for B-physics is LHCb.
## The CDF and LHCb results: Bs → mu mu
It turns out that there are interesting flavor-changing effects even without considering meson mixing, but rather in the decay of the B meson itself. For example, we can modify the previous diagram to consider the decay of a Bs meson into a muon/anti-muon pair:
This is still a flavor-changing decay since the net strangeness (+1) and bottom-ness (-1) is not preserved; but note that the lepton flavor is conserved since the muon/anti-muon pair have no net muon number. (As an exercise: try drawing the other diagrams that contribute; the trick is that you need W bosons to change flavor.) You could also replace muons by electrons or taus, but those decays are much harder to detect experimentally. As a rule of thumb muons are really nice final state particles since they make it all the way through the detector and one has a decent shot at getting good momentum measurements.
It turns out that this decay is extremely rare. For the Bs meson, the Standard Model predicts a dimuon branching ratio of around 3 × 10⁻⁹, which means that a Bs will only decay into two muons 0.0000003% of the time… clearly in order to accurately measure the actual rate one needs to produce a lot of B mesons.
In fact, until recently, we simply did not have enough observed B meson decays to even estimate the true dimuon decay rate. The ‘B factories’ of the past decade were only able to put upper limits on this rate. In fact, this decay is one of the main motivations for LHCb, which was designed to be the first experiment that would be sensitive enough to probe the Standard Model decay rate. (This means that if the decay rate is at least at the Standard Model rate, then LHCb will see it.)
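A rough back-of-envelope of my own (ignoring production rates, detector efficiency and backgrounds) shows why so many mesons are needed:

```python
branching_ratio = 3e-9      # Standard Model expectation quoted above
wanted_decays = 10          # a handful of observed dimuon decays
print(wanted_decays / branching_ratio)   # ~3.3 billion Bs mesons, before any efficiencies
```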
The exciting news from CDF last week was that—for the first time—they appeared to have been able to set a lower bound on the dimuon decay rate of the Bs meson. (The Bd meson has a smaller decay rate and CDF was unable to set a lower bound.) The lower bound is still statistically consistent with the Standard Model rate, but the suggested (‘central value’) rate was 1.8 × 10⁻⁸. If this is true, then it would be a fairly strong signal for new physics beyond the Standard Model. The 90% confidence level range from CDF is:
4.6 × 10⁻⁹ < BR(Bs → μ⁺μ⁻) < 3.9 × 10⁻⁸.
Unfortunately, today’s new result from LHCb didn’t detect an excess with which it could set a lower bound and could only set a 90% confidence upper bound,
BR(Bs → μ⁺μ⁻) < 1.3 × 10⁻⁸.
This goes down to 1.2 × 10⁻⁸ when including 2010 data. The bounds are not yet at odds with one another, but many people were hoping that LHCb would have been able to confirm the CDF excess in dimuon events. The analyses of the two experiments seem to be fairly similar, so there isn’t too much wiggle room to think that the different results just come from having different experiments.
More data will clarify the situation; LHCb should accumulate enough data to probe branching ratios down to the Standard Model prediction of 3 × 10⁻⁹. Unfortunately CDF will not be able to reach that sensitivity.
## New physics in loops
Now that we’re up to date with the experimental status of Bs → μμ, let’s figure out why it’s so interesting from a theoretical point of view. One thing you might have noticed from the “box” Feynman diagrams above is that they involve a closed loop. An interesting thing about closed loops in Feynman diagrams is that they can probe physics at much higher energies than one would naively expect.
The reason for this is that the particles running in the loop do not have their momenta fixed in terms of the momenta of the external particles. You can see this for yourself by assigning momenta (call them p1, p2, … , etc.) to each particle line and (following the usual Feynman rules) impose momentum conservation at each vertex. You’ll find that there is an unconstrained momentum that goes around the loop. Because this momentum is unspecified, the laws of quantum physics say that one must add together the contributions from all possible momenta. Thus it turns out that even though the Bs meson mass is around 5 GeV, the dimuon decay is sensitive to particles that are a hundred times heavier.
Note that unlike other processes where we study new physics by directly producing it and watching it decay, in low-energy loop diagrams one only intuits the presence of new particles through their virtual effects (quantum interference). I’ll leave the details for another time, but here are a few facts that you can assume for now:
1. Loop diagrams can be sensitive to new heavy particles through quantum interference.
2. Processes which only occur through loop diagrams are often suppressed. (This is partly why the Standard Model branching ratio for Bs → μμ is so small.)
3. In the Standard Model, all flavor-changing neutral currents (FCNC)—i.e. all flavor-changing processes whose intermediate states carry no net electric charge—only occur at loop level. (Recall that the electrically-charged W bosons can change flavor, but the electrically neutral Z bosons cannot. Similarly, note that there is no way to draw a Bs → μμ diagram in the Standard Model without including a loop.)
4. Thus, processes with a flavor-changing neutral current (such as Bs → μμ) are fruitful places to look for new physics effects that only show up at loop level. If there were a non-loop level (“tree level”) contribution from the Standard Model, then the loop-induced new physics effects would tend to be drowned out because they are only small corrections to the tree-level result. However, since there are no FCNCs in the Standard Model, the new physics contributions have a ‘fighting change’ at having a big effect relative to the Standard Model result.
5. Semi-technical remark, for experts: indeed, for Bs → μμ the Standard Model diagrams are additionally suppressed by a GIM suppression (as is the case for FCNCs) as well as helicity suppression (the B meson is a pseudoscalar, so the final states require a muon mass insertion).
So the punchline is that Bs → μμ is a really fertile place to hope to see some deviation from the Standard Model branching ratio due to new physics.
## Introducing the Penguin
I would be remiss if I didn’t mention the “penguin diagram” and its role in physics. You can learn about the penguin’s silly etymology in its Wikipedia article; suffice it for me to ‘wow’ you with a picture of an autographed paper from one of the penguin’s progenitors:
A copy of the original "penguin" paper, autographed by John Ellis.
The main idea is that penguin diagrams are flavor-changing loops that involve two fermions and a neutral gauge boson. For example, the b→s penguin takes the form (no, it doesn’t look much like a penguin)
You should have guessed that in the Standard Model, the wiggly line on top has to be a W boson in order for the fermion line to change flavors. The photon could also be a Z boson, a gluon, or even a Higgs boson. If we allow the boson to decay into a pair of muons, we obtain a diagram that contributes to Bs → μμ.
Some intuition for why the penguin takes this particular form: as mentioned above, any flavor-changing neutral transition in the Standard Model requires a loop. So we start by drawing a diagram with a W loop. This is fine, but because the b quark is so much heavier than the s quark, the diagram does not conserve energy. We need to have a third particle which carries away the difference in energy between the b and the s, so we allow the loop to emit a gauge boson. And thus we have the diagram above.
Thus, in addition to the box diagrams above, there are penguin diagrams which contribute to Bs → μμ. As a nice ‘homework’ exercise, you can try drawing all of the penguins that contribute to this process in the Standard Model. (Most of the work is relabeling diagrams for different internal states.)
[Remark, 6/23: my colleague Monika points out that it’s ironic that I drew the b, s, photon penguin since this penguin doesn’t actually contribute to the dimuon decay! (For experts: the reason is the Ward identity.) ]
## Supersymmetry and the Bs → mu mu penguin
Finally, I’d like to give an example of a new physics scenario where we would expect that penguins containing new particles give a large contribution to the Bs → μμ branching ratio. It turns out that this happens quite often in models of supersymmetry or, more generally, ‘two Higgs doublet models.’
If neither of those words mean anything to you, then all you have to know is that these models have not just one, but two independent Higgs particles which obtain separate vacuum expectation values (vevs). The punchline is that there is a free parameter in such theories called tan β which measures the ratio of the two vevs, and that for large values of tan β, the Bs → μμ branching ratio goes like (tan β)⁶ … which can be quite large and can dwarf the Standard Model contribution.
Added 6/23, because I couldn't help it: a supersymmetric penguin. Corny image from one of my talks.
[What follows is mostly for ‘experts,’ my apologies.]
On a slightly more technical note, it’s not often well explained why this branching ratio goes like the sixth power of tan β, so I did want to point this out for anyone who was curious. There are three sources of tan β in the amplitude; these all appear in the neutral Higgs diagram:
Each blue dot is a factor of tan β. The Yukawa coupling at each Higgs vertex goes like the fermion mass divided by the Higgs vev. For the down-type quarks and leptons, this gives a factor of m/v ~ 1/cos β ~ tan β for large tan β. An additional factor of tan β comes from the mixing between the s and b quarks, which also goes like the Yukawa coupling. (This is the blue dot on the s quark leg.) Hence one has three powers of tan β in the amplitude, and thus six powers of tan β in the branching ratio.
## Outlook
While the LHCb result was somewhat sobering, we can still cross our fingers and hope that there is still an excess to be discovered in the near future. The LHC shuts down for repairs at the end of next year; this should provide ample data for LHCb to probe all the way down to the Standard Model expectation value for this process. Meanwhile, it seems that while I’ve been writing this post there have been intriguing hints of a Higgs (also via our editor)… [edit, 6/23: Aidan put up an excellent intro to these results]
http://www.physicsforums.com/showthread.php?t=374684 | Recognitions:
Gold Member
## A glance beyond the quantum model
I wanted to follow up on a couple of specific points that were raised in another thread, and felt it would be better to split the discussion off here. The references for the discussion are:
A glance beyond the quantum model, Navascues and Wunderlich (2009)
"One of the most important problems in Physics is how to reconcile Quantum Mechanics with General Relativity. Some authors have suggested that this may be realized at the expense of having to drop the quantum formalism in favor of a more general theory. However, as the experiments we can perform nowadays are far away from the range of energies where we may expect to observe non-quantum effects, it is difficult to theorize at this respect. Here we propose a fundamental axiom that we believe any reasonable post-quantum theory should satisfy, namely, that such a theory should recover classical physics in the macroscopic limit. We use this principle, together with the impossibility of instantaneous communication, to characterize the set of correlations that can arise between two distant observers. Although several quantum limits are recovered, our results suggest that quantum mechanics could be falsified by a Bell-type experiment if both observers have a sufficient number of detectors. "
...And a recent comment on the above (by a PF member no less :)
Comment on "A glance beyond the quantum model", Peter Morgan (2010)
"The aim of "A glance beyond the quantum model" [arXiv:0907.0372] to modernize the Correspondence Principle is compromised by an assumption that a classical model must start with the idea of particles, whereas in empirical terms particles are secondary to events. The discussion also proposes, contradictorily, that observers who wish to model the macroscopic world classically should do so in terms of classical fields, whereas, if we are to use fields, it would more appropriate to adopt the mathematics of random fields. Finally, the formalism used for discussion of Bell inequalities introduces two assumptions that are not necessary for a random field model, locality of initial conditions and non-contextuality, even though these assumptions are, in contrast, very natural for a classical particle model. Whether we discuss physics in terms of particles or in terms of events and (random) fields leads to differences that a glance would be well to notice. "
----------
Of interest - and there has been recent discussion about several of these points - are the following:
a) Can you speak of particles without discussing the associated fields?
b) Are the fields themselves discrete or continuous?
c) Is the correspondence between the macroscopic world and the microscopic world fundamental? Can we recover certain classical concepts - such as "no-signaling principle" or the introduced idea of "macroscopic locality" when a large number of particles are involved and our measurement devices fail to resolve discrete particles?
d) Are there low-energy Bell-type experiments that can set limits on the unification of quantum theory and gravity?
e) Anything else you might think of from the above...
Quote by DrChinese Of interest - and there has been recent discussion about several of these points - are the following: a) Can you speak of particles without discussing the associated fields? b) Are the fields themselves discrete or continuous? c) Is the correspondence between the macroscopic world and the microscopic world fundamental? Can we recover certain classical concepts - such as "no-signaling principle" or the introduced idea of "macroscopic locality" when a large number of particles are involved and our measurement devices fail to resolve discrete particles? d) Are there low-energy Bell-type experiments that can set limits on the unification of quantum theory and gravity? e) Anything else you might think of from the above...
a) Not easily, but one can talk about measurement results (which one can say are properties of the measured systems, even though the systems themselves are not seen) in a finite-dimensional Hilbert space context, without introducing the Schrodinger equation or a quantum field. Indeed, quantum information lives very happily in this regime and often is thought to be fundamental.
b) One can work with either discrete or continuous mathematics. QFT on a lattice has certainly taught us stuff. Universality makes it difficult to make a categorical statement here, because discrete structure at the Planck scale would presumably be washed out at the scales at which we have any present hope of making measurements.
c) I think I take the Correspondence Principle to be a methodological requirement that comes from social issues in science. If one wants a new theory to get a running start, it helps to be able to point out how it is the same as and how it is different from the theories we currently take to be empirically effective. If one can show that the new theory can legitimately adopt a lot of the empirical effectiveness of an existing theory, it leaves less to do to establish how the new theory is better. The Correspondence Principle was used with stunning effectiveness by the founders of the new quantum theory in the late 20s to constrain quantum theory, so it seems worthwhile to attempt the same sort of approach now.
From a Correspondence Principle point of view, given that our current best theories are QFT and GR, it makes sense to stick with a field approach of some sort in attempts to construct new theories. The choice of an effective mathematical structure is important. Part of my enthusiasm for random fields is that they are a powerful generalization of classical differentiable fields that introduces the concept of probability in a mathematically correct way (whereas differentiable fields don't sit well with the measure theory), and which can be presented in a way that is very closely parallel to quantum fields. Retrospectively, a random field formalism might make a Correspondence Principle approach more possible.
However, this does not, to my more-or-less empiricist approach, entail that the world is continuous, only that it can be useful to use continuum mathematics, always supposing that we can get finite answers out, somehow.
d) Dunno. When I critiqued Navascués' and Wunderlich's assumptions I was careful not to get into the actual purpose of their paper because I know not much about QG.
e) Even less to say. Except that I believe the published version of the Navascués and Wunderlich paper is free to access on Proc. Roy. Soc. A, at http://rspa.royalsocietypublishing.o...t/466/2115/881, and is preferable, from the point of view of my Comment, because they introduced classical fields into their paper only after the current arXiv version.
Quote by DrChinese a) Can you speak of particles without discussing the associated fields?
I've been wondering about a similar question in a different context,
namely the problem of IR divergences in QED, and correct identification
of the asymptotic dynamics, and hence also the asymptotic fields.
(I'm not sure whether what follows is tangential to your focus in this
thread, but I guess you'll tell me if so. :-)
In brief, I find it interesting that considering charged particles
together with their Coulomb fields seems to solve the IR problem in
QED in a much more physically satisfactory way than the typical
textbook treatments. This (imho) lends credence to the proposition that
it's better to treat particles together with their entourage of associated
fields, although such composite dressed entities are of course nonlocal.
For those who want more detail, here's an extract from a summary I've
been writing up for myself about it...
-----------------------------
Textbook treatments of the infrared (IR) divergences in quantum
electrodynamics (QED) typically introduce a small fictitious photon
mass to regularize the integrals. Allowing this mass to approach zero,
it becomes necessary to sum physically measurable quantities, such as
the cross sections for electron scattering, over all possible
asymptotic states involving an infinite number of soft photons, yielding
the so-called "inclusive" cross section.
The IR divergences are thus dealt with by restricting attention only to these
"IR-safe" quantities such as the inclusive cross section. However, various
authors have expressed dissatisfaction with this state of affairs in which
the cross sections become the objects of primary interest rather than the
S-matrix. The seminal paper of Chung (Chu) showed how one may
dress the asymptotic electron states with an operator familiar from the
Glauber theory of photon coherent states, thereby eliminating IR divergences
in the S-matrix to all orders for the cases he considered.
In a series of papers, Kibble (Kib1,Kib2,Kib3,Kib4) provided a much
more extensive (and more rigorous) development of Chung's idea, solving
the dynamical problem to show that IR divergences are eliminated by
dressing the asymptotic electron states by coherent states of soft
photons. Kibble constructed a very large nonseparable state space, within
which various separable subspaces are mapped into each other by the
S-matrix, but there is no stable separable subspace that is mapped into
itself.
Kulish and Faddeev (KF) gave a less cumbersome treatment involving modification of the asymptotic
condition and a new space of asymptotic states which is not only
separable, but also relativistically and gauge invariant. They were
able to derive Chung's formulas without the laborious calculations of
Kibble, yet also obtained a more satisfactory generalization to the
case of arbitrary numbers of charged particles and photons in the
initial and final states.
KF emphasized the role of the nonvanishing interaction of QED at
asymptotic times as the source of the problems. This inconvenient fact
means that QED's asymptotic dynamics is not governed by the usual
free Hamiltonian $$H_0$$, so perturbative approaches starting from such
free states are singular (a so-called "discontinuous" perturbation).
Standard treatments rely on the unphysical fiction of adiabatically
switching off the interaction, but KF wished to find a more physically
satisfactory operator governing the asymptotic dynamics.
Much earlier, Dirac (Dir55) took some initial steps in
constructing a manifestly gauge-invariant electrodynamics. The dressing
operator he obtained is a simplified version of those mentioned above
involving soft-photon coherent states, but he did not
address the IR divergences in this paper. Neither Chung, Kibble, nor
Kulish and Faddeev cite Dirac's paper, and the connection between explicit
gauge invariance and resolution of the IR problem did not emerge until
later. (Who was the first to note this??) In 1965 Dirac noted (Dir65, Dir66)
that problems in QED arise because the full gauge-invariant Hamiltonian is
typically split into a "free" part $$H_0$$
and an "interaction" part $$H_I$$ which are not separately
gauge-invariant. Indeed, Dirac's original 1955 construction had
resulted in an electron together with its Coulomb field, which is
clearly a more physically correct representation of electrons at
asymptotic times: a physical electron is always accompanied by its
Coulomb field.
More recently, Bagan, Lavelle and McMullan (BagLavMcMul-1, BagLavMcMul-2)
("BLM" hereafter) and other collaborators have developed these ideas
further, applying them to IR divergences in QED, and also QCD in which
a different class of so-called "collinear" IR divergence occurs. (See
also the references therein.) These authors generalized Dirac's
construction to the case of moving charged particles. Their dressed
asymptotic fields include the asymptotic interaction, and they show
that the on-shell Green's functions and S-matrix elements for these
charged fields have (to all orders) the pole structure associated with
particle propagation and scattering.
--------------------------------------------
Bibliography for the above:
\bibitem{BagLavMcMul-1}
E. Bagan, M. Lavelle, D. McMullan,
"Charges from Dressed Matter: Construction",
(Available as hep-ph/9909257.)
\bibitem{BagLavMcMul-2}
E. Bagan, M. Lavelle, D. McMullan,
"Charges from Dressed Matter: Physics \& Renormalisation",
(Available as hep-ph/9909262.)
\bibitem{Bal} L. Ballentine,
"Quantum Mechanics -- A Modern Development",
World Scientific, 2008, ISBN 978-981-02-4105-6
\bibitem{Chu}
V. Chung,
"Infrared Divergences in Quantum Electrodynamics",
Phys. Rev., vol 140, (1965), B1110.
\bibitem{Dir55}
P.A.M. Dirac,
"Gauge-Invariant Formulation of Quantum Electrodynamics",
Can. J. Phys., vol 33, (1955), p. 650.
\bibitem{Dir65}
P.A.M. Dirac,
Phys. Rev., vol 139, (1965), B684-690.
\bibitem{Dir66}
P.A.M. Dirac,
"Lectures on Quantum Field Theory",
Belfer Graduate School of Science, Yeshiva Univ., NY, 1966
\bibitem{Dol}
J. D. Dollard,
"Asymptotic Convergence and the Coulomb Interaction",
J. Math. Phys., vol 5, no. 6, (1964), 729-738.
\bibitem{Kib1}
T.W.B. Kibble,
"Coherent Soft-Photon States \& Infrared Divergences. I.
Classical Currents",
J. Math. Phys., vol 9, no. 2, (1968), p. 315.
\bibitem{Kib2}
T.W.B. Kibble,
"Coherent Soft-Photon States \& Infrared Divergences. II.
Mass-Shell Singularities of Green's Functions",
Phys. Rev., vol 173, no. 5, (1968), p. 1527.
\bibitem{Kib3}
T.W.B. Kibble,
"Coherent Soft-Photon States \& Infrared Divergences.
III. Asymptotic States and Reduction Formulas.",
Phys. Rev., vol 174, no. 5, (1968), p. 1882.
\bibitem{Kib4}
T.W.B. Kibble,
"Coherent Soft-Photon States \& Infrared Divergences.
IV. The Scattering Operator.",
Phys. Rev., vol 175, no. 5, (1968), p. 1624.
\bibitem{KlaSka}
J. R. Klauder \& B. Skagerstam,
"Coherent States -- Applications in Physics \& Mathematical Physics",
World Scientific, 1985, ISBN 9971-966-52-2
\bibitem{KulFad}
P. P. Kulish and L. D. Faddeev,
"Asymptotic Conditions and Infrared Divergences in Quantum
Electrodynamics",
Theor. Math. Phys., vol 4, (1970), p. 745
-----------------------------
Quote by Peter Morgan Part of my enthusiasm for random fields is that they are a powerful generalization of classical differentiable fields that introduces the concept of probability in a mathematically correct way (whereas differentiable fields don't sit well with the measure theory)
It's been a while since I looked at your papers, and I don't remember the point about
"random fields ... introducing the concept of probability in a mathematically correct way".
Could you elaborate on the details of this point, and/or give specific places in your
earlier papers where you discuss this, please?
I found your discussion of IR divergences interesting, and I've bookmarked it both for the discussion and for the references, but I regret that I can't speak to it at this point, except to say that I've never seen anything that makes dressed particles look conceptually simple enough (or, more specifically, algebraically simple enough -- though that's not a conceptual direction one necessarily has to take).
Quote by strangerep It's been a while since I looked at your papers, and I don't remember the point about "random fields ... introducing the concept of probability in a mathematically correct way". Could you elaborate on the details of this point, and/or give specific places in your earlier papers where you discuss this, please?
I've had considerable trouble getting this across to anyone, although it seems clear as day to me, so I'm happy to try again. If one introduces a path integral approach for particles (though the same fact can be expressed in Hamiltonian formalisms), the path integral is dominated by nowhere differentiable paths (I've just seen this cited from Reed & Simon, which I don't have, but it ties in with my understanding of Hamiltonian methods). This works OK for particle trajectories, but notoriously, people have trouble making path integral methods rigorous in the field context, where there are more infinite limits to be taken. In the field context, I would say that no-one has really adequate mathematical control of the procedure, although some people are happy to say that renormalization is adequate mathematical control.
For a quantum field, some mathematical control (but not enough) is achieved by defining the quantum field to be an operator-valued distribution, not an operator-valued field, so that to construct an operator one has to "average" the quantum field over a finite region. As we consider smaller regions, the variance of such operators diverges, so if we try to talk about the quantum field at a point we find that, more-or-less, we would always observe either +infinity or -infinity, which isn't a good start for constructing a differentiable function. For the vacuum state of a free quantum field, the two point correlation function $$\left\langle 0\right|\widehat{\phi}(x)\widehat{\phi}(y)\left|0\right\rangle$$ is finite for $$x-y$$ non-zero, but diverges as $$x\rightarrow y$$, which is to say that the variance $$\left\langle 0\right|\widehat{\phi}(x)^2\left|0\right\rangle$$ is infinite. It's also to say that the correlation coefficient between the observed values at x and y is finite/infinity=zero, if we relinquish decent control of what limits we're taking. For interacting fields this only gets much worse, of course.
For classical fields, when we introduce the classical probability density $$\exp(-\beta H(\phi))$$ of a thermal state we also find ourselves working with classical fields that are nowhere differentiable. I shouldn't say that there's no other way to deal with the situation (it can be managed), but random fields do deal with it pretty well, without introducing anything relatively exotic such as nonstandard analysis, for example.
I've taken it to be useful to consider random fields because they can be presented as random-variable-valued distributions, or even as mutually commutative operator-valued distributions, which are close enough to quantum fields to make comparison of random fields and quantum fields very interesting. In comparison of classical differentiable fields with quantum fields it's hard to know how to start. Part of why this is good to do is that it does give a new way to think about quantum fields, even if the more ambitious hopes I have for my program don't work out.
Quote by Peter Morgan For a quantum field, some mathematical control (but not enough) is achieved by defining the quantum field to be an operator-valued distribution, not an operator-valued field, so that to construct an operator one has to "average" the quantum field over a finite region. As we consider smaller regions, the variance of such operators diverges, so if we try to talk about the quantum field at a point we find that, more-or-less, we would always observe either +infinity or -infinity, which isn't a good start for constructing a differentiable function. For the vacuum state of a free quantum field, the two point correlation function $$\left\langle 0\right|\widehat{\phi}(x)\widehat{\phi}(y)\left|0\right\rangle$$ is finite for $$x-y$$ non-zero, but diverges as $$x\rightarrow y$$, which is to say that the variance $$\left\langle 0\right|\widehat{\phi}(x)^2\left|0\right\rangle$$ is infinite.
Yep... standard stuff so far. $$\widehat{\phi}(x)$$ are not operators, therefore applying
them to a state vector is technically illegal. They must be smeared with test functions to
give bona fide operators. OK.
I've taken it to be useful to consider random fields because they can be presented as random-variable-valued distributions, or even as mutually commutative operator-valued distributions, which are close enough to quantum fields to make comparison of random fields and quantum fields very interesting.
You didn't really answer the question I asked about how random fields introduce the concept of probability in a "mathematically correct way".
My perception of your random fields (or should we say "Lie fields"?)
is as an inf-dim Lie algebra parameterized by spacetime points,
as we discussed a while back. But obviously I'm missing some crucial
connection between this and probability. I need you to be more explicit/expansive
on this point if I'm to understand...
Quote by strangerep You didn't really answer the question I asked about how random fields introduce the concept of probability in a "mathematically correct way". My perception of your random fields (or should we say "Lie fields"?) is as an inf-dim Lie algebra parameterized by spacetime points, as we discussed a while back. But obviously I'm missing some crucial connection between this and probability. I need you to be more explicit/expansive on this point if I'm to understand...
Perhaps a more abstract approach? A set of operators $$\{\widehat{\phi}_{f_i}\}$$ generates a *-algebra (to which we add an operator 1, which acts as a multiplicative identity in the *-algebra). A state $$\omega(\widehat{A})$$ over the *-algebra is positive on any operator of the form $$AA^\dagger$$, $$\omega(AA^\dagger)\ge 0$$, and $$\omega(1)=1$$, which allows us to use the GNS-construction of a Hilbert space. We take $$\omega(\widehat{A})$$ to be the expected value associated with the random variable A, corresponding to the operator $$\widehat{A}$$. The sample space associated with A is the set of eigenvalues of $$\widehat{A}$$, and the probability density in the state $$\omega$$ can be written as $$P(x)=\omega(\delta(\widehat{A}-x.1))$$. From this we can generate the characteristic function associated with that probability density as a Fourier transform $$\widetilde{P}(\lambda)=\omega(exp(i\lambda\widehat{A}))$$.
All that is standard QM, albeit not in elementary terms. When we introduce joint observables $$\widehat{A}$$ and $$\widehat{B}$$, the difference between QM and random fields is only whether they always commute, which they do not for QM, but they do for a random field. In the random field case, the function $$\widetilde{P}(\lambda,\mu)=\omega(exp(i\lambda\widehat{A}+i\mu\widehat{ B}))$$ is a joint characteristic function, whereas it is not (in general) the Fourier transform of a positive function in the QM case (unless $$[\widehat{A},\widehat{B}]=0$$, which will be the case if the two operators are constructed using only quantum field operators associated with mutually space-like regions).
I hope this is at an appropriate level and helpful? I'm not sure it's an answer even if the level is OK, in which case sorry.
I realize now that I should also note that IMO a random field and a quantum field are better considered as indexed by smooth functions on space-time, not indexed by space-time points. I find it helpful to think of the index functions as "window functions", which is the name this concept is given in signal processing. Learning to work intuitively with the concept of operator-valued distributions took me several years, but it seems obvious enough by now that I have trouble explaining. Sorry.
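To make the commuting case concrete, here is a small numerical toy of my own (finite matrices standing in for smeared field operators, with arbitrary numbers; it's only an illustration, not part of the formalism): for commuting self-adjoint operators, $$\omega(exp(i\lambda\widehat{A}+i\mu\widehat{B}))$$ really is the characteristic function of an ordinary joint probability distribution on the joint eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Two commuting observables: both diagonal in the same basis (toy stand-ins).
a = np.array([0.0, 1.0, 2.0])           # eigenvalues of A
b = np.array([-1.0, 0.5, 3.0])          # eigenvalues of B
A, B = np.diag(a), np.diag(b)

psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi /= np.linalg.norm(psi)              # state vector, omega(X) = <psi|X|psi>
p = np.abs(psi) ** 2                    # weights on the joint eigenvalues (a_k, b_k)

lam, mu = 0.7, -1.3
quantum_side = np.vdot(psi, expm(1j * (lam * A + mu * B)) @ psi)
classical_side = np.sum(p * np.exp(1j * (lam * a + mu * b)))
print(np.allclose(quantum_side, classical_side))   # True: a genuine joint characteristic function
```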
Quote by Peter Morgan [...] the probability density in the state $\omega$ can be written as $$P(x)=\omega(\delta(\widehat{A}-x.1))$$ .
I thought I was conversant with the algebraic approach,
but what is your $\delta$ in the above expression?
Hi strangerep, Thank you very much for the interesting review of IR divergences and references. I am very interested in combining these ideas with the dressed particles approach of Greenberg and Schweber. Do you have any suggestions? Eugene.
Quote by meopemuk I am very interested in combining these ideas with the dressed particles approach of Greenberg and Schweber. Do you have any suggestions?
I'm wrestling with related questions, but it's too soon for me to say anything.
(And it would probably be too speculative for Physics Forums anyway. :-)
Perhaps after you've had a look through the referenced papers we could
discuss further in a separate thread, or privately.
Quote by strangerep I thought I was conversant with the algebraic approach, but what is your $\delta$ in the above expression?
Hee! It's a Dirac delta, perhaps too quick and dirty as a way to construct a probability density. It's also, formally, the inverse Fourier transform of the characteristic function that follows,
$$\omega(exp(i\lambda\widehat{\phi}_f))= \omega(\sum_{k=0}^{\infty}\left[\frac{(i\lambda\widehat{\phi}_f)^k}{k!}\right])$$.
Except, urp, that there should be a factor of $2\pi$. That seems a better way to introduce it. The expected values of $\widehat{\phi}_f^k$ are the moments of the vacuum state's probability density over $\widehat{\phi}_f$, giving us the characteristic function, which we can formally inverse Fourier transform to obtain the probability density. In practice, one constructs the characteristic function as a scalar function of $\lambda$, which for the free field vacuum would be a Gaussian, which inverse Fourier transforms into a Gaussian probability density.
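To spell out that last step numerically (the variance below is an arbitrary stand-in for the vacuum two-point function of the smeared field; again just an illustration):

```python
import numpy as np

sigma2 = 2.0                                        # variance of phi_f in the Gaussian vacuum state
char = lambda u: np.exp(-0.5 * sigma2 * u ** 2)     # characteristic function omega(exp(i*u*phi_f))

lam = np.linspace(-40.0, 40.0, 20001)
dlam = lam[1] - lam[0]
xs = np.linspace(-5.0, 5.0, 11)
# p(x) = (1/2pi) * integral of exp(-i*lam*x) * char(lam) dlam, done by a plain Riemann sum
p_num = np.array([(np.exp(-1j * lam * x) * char(lam)).sum().real * dlam / (2 * np.pi) for x in xs])
p_exact = np.exp(-xs ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print(np.max(np.abs(p_num - p_exact)))              # tiny: the Gaussian density comes back
```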
It does seem that what you have to say about IR divergences and dressed particles is pretty vague at the moment. A "discuss these papers" thread would seem reasonable to me, however, and it's generally a helpful discipline to pay attention to how speculative what we're doing is and to look for ways to rein it in. Indeed, I think that the path to my getting papers into journals is very much about that process, partly because anything that looks speculative is often picked on by referees as a reason to reject a paper that they only have general misgivings about. If you make no speculations, the referee's rejection letter is generally much more helpful, because they have to engage more with the paper to give a clear reason to reject it. At a grosser level, which I have often visited, editors can spot speculative from about a light-year away, so one then doesn't get as far as relatively more detailed feedback from a referee. Getting papers published is just making the speculation look well reasoned --- not getting rid of it, which IMO often makes for a boring paper.
Quote by Peter Morgan Hee! It's a Dirac delta, perhaps too quick and dirty as a way to construct a probability density. It's also, formally, the inverse Fourier transform of the characteristic function that follows, $$\omega(exp(i\lambda\widehat{\phi}_f))= \omega(\sum_{k=0}^{\infty}\left[\frac{(i\lambda\widehat{\phi}_f)^k}{k!}\right])$$. Except, urp, that there should be a factor of $2\pi$.
A citation is possible, page 119 of Itzykson & Zuber, Section 3-1-2, eq. (3-63) does exactly this (in a 1980, McGraw-Hill paperback edition; I don't know whether there are substantially different editions, which is why I'm over-specifying).
Quote by Peter Morgan page 119 of Itzykson & Zuber, Section 3-1-2, eq. (3-63) does exactly this [...]
OK, so let's see if I now understand what your random fields are...
Your random fields (and their noncommutative quantum generalization) are basically
a generalization of certain concepts in classical statistical mechanics. Actually, let me
quote some stuff from the draft book of Neumaier & Westra, arXiv:0810.1019v1,
that (I think) relates to this way of looking at things...
(This is from their sect 1.2...)
An important ingredient in statistical mechanics is a phase space density ρ playing the role of a measure to calculate probabilities; the expectation value of a function f is given by $$\langle f \rangle ~=~ \int \rho f ~~~~~~~~~~~~~~ (1.1)$$ where the integral indicates integration with respect to the so-called Liouville measure in phase space. In the quantum version of statistical mechanics the density ρ gets replaced by a linear operator ρ on Hilbert space called the density matrix, the functions become linear operators, and we have again (1.1), except that the integral is now interpreted as the quantum integral, $$\int f ~=~ tr f, ~~~~~~~~~~~~~~ (1.2)$$ where tr f denotes the trace of a trace class operator. We shall see that the algebraic properties of the classical integral and the quantum integral are so similar that using the same name and symbol is justified.
But (iiuc) a difference between this approach and yours is that, whereas classical
quantities f are normally interpreted as functions over phase space (hence the
Liouville measure above), your random fields are just over 4D spacetime (or rather
over a space of test functions over 4D spacetime). (?)
So I'm now trying to follow your criticism of Navascues and Wunderlich more carefully...
But... in the online version (arXiv:0907.0372), which is all I have access to right now,
I can't relate your quotes to their section numbering. I also can't find the mention
of "continuous fields"? Is the Proc Roy Soc version different from the online version,
and you were commenting on the former?
Quote by strangerep But (iiuc) a difference between this approach and yours is that, whereas classical quantities f are normally interpreted as functions over phase space (hence the Liouville measure above), your random fields are just over 4D spacetime (or rather over a space of test functions over 4D spacetime). (?)
Yes, my definition and constructions are manifestly Lorentz and translation invariant. The usual definition is Lorentz and translation invariant, but not manifestly so. The usual phase space approach is only possible if only mass shell components of a test function contribute. The restriction to only a single mass shell (if that's what is wanted) is implemented in my approach by the inner product having a delta function restriction to the mass shell.
So I'm now trying to follow your criticism of Navascues and Wunderlich more carefully... But... in the online version (arXiv:0907.0372), which is all I have access to right now, I can't relate your quotes to their section numbering. I also can't find the mention of "continuous fields"? Is the Proc Roy Soc version different from the online version, and you were commenting on the former?
The Proc. Roy. Soc. A version is different, which I didn't discover until after I submitted my comment. The Proc. Roy. Soc. A version is available for free, I think because of the Royal Society's anniversary celebrations. Go to http://dx.doi.org/ and enter the DOI that's in my paper, 10.1098/rspa.2009.0453, or go straight to the Proc. Roy. Soc. A page, http://rspa.royalsocietypublishing.o...t/466/2115/881. [You won't find any citation to the arXiv version in my paper, the arXiv administration added it to the arXiv abstract, in their wisdom.]
Quote by Peter Morgan [...] the Proc. Roy. Soc. A page, http://rspa.royalsocietypublishing.o...t/466/2115/881.
At the end of your comment paper, you say:
Navascués and Wunderlich have done something rather remarkable. By introducing the idea of continuous fields in their paper they have laid themselves open to a criticism that they must introduce random fields, and encourage a discussion that would otherwise have been impossible. If they had introduced fields without daring to go “beyond” the standard model, they would equally have been conventionally impervious. Navascués’ and Wunderlich’s paper requires a vigorous condemnation where something less ambitious would have gone unchallenged. Finally, this comment does not touch Navascués’ and Wunderlich’s argument; their paper’s flaw, I think, and such little as it is, is to have introduced a classical field metaphysics and not to have thought enough of it. [...]
In this one paragraph, you say "requires a vigorous condemnation" but then say
"this comment does not touch Navascués’ and Wunderlich’s argument". So... you're not
actually arguing against the essential results of NW's paper? But only the "little" flaw
of using the phrase "continuous fields" rather than "random fields" ?
Quote by strangerep In [the last] paragraph, you say "requires a vigorous condemnation" but then say "this comment does not touch Navascués’ and Wunderlich’s argument". So... you're not actually arguing against the essential results of NW's paper? But only the "little" flaw of using the phrase "continuous fields" rather than "random fields"?
Mixed messages indeed. I was cross at Navascués’ and Wunderlich’s assumptions, not at their argument. When I saw your quote, "requires a vigorous condemnation", I thought you might have quoted me out of context, because I thought I surely must have mentioned that it was their assumptions that require a vigorous condemnation, but I see that I was vigorous at their whole paper.
Getting the politeness right in a critical comment apparently evaded me, but I do think these are serious Physicists, running with a respectable idea, that we might look at what the Correspondence Principle might tell us about the Planck scale and beyond. I think it's a good idea to do that in principle, but I'm telling them that "Oh, I wouldn't start from there". That could be boring.
What saves their paper, I think, and what made it possible for me to make my comment constructive, I hope, is that they introduce classical continuous fields. They do it half-heartedly, and they might even have been made to introduce fields because the referee said, "well, but what about fields?", but they do it. This is a potentially significant development, because the ways in which classical fields might be used to model Physics is underdeveloped. For about the last 20 years, say, good math approaches to QFT have taken it almost for granted that QFT is about fields, not about particles, particularly because of the Unruh effect.
Very few Physicists, however, take the obvious next step, which is that in that case we'd better find ways to talk about fields instead of about particles. A notable exception is Art Hobson, whose web-site has available a copy of the paper of his that I cite. He's concerned with how to teach QFT, and proposes to do it by emphasizing a field perspective. Andrei Khrennikov, who is a Mathematical Physicist who turned seriously to foundations of Physics about ten years ago, takes a similar line, but his mathematical methods and mine are very different. 't Hooft's approach is also similar but different, as also for Wolfram. Elze and Wetterich are two other hard Mathematical Physicists who are developing entirely different formalisms. Khrennikov, Elze, and Wetterich are developing fairly traditional stochastic differential equation methods, 't Hooft and Wolfram are developing finite automata models, in which the statistics are generated by simulation; I'm the only one, to my knowledge, who is seriously developing an algebraic presentation of random fields. A friend, Ken Wharton, is developing a view based on classical fields, with me trying to persuade him that he has to introduce probability in a mathematically decent way, and him dragging his heels, perfectly reasonably, because he doesn't like a metaphysics that includes probability. If I have a metaphysics, it is a metaphysics of statistics and ensembles rather than of probability, with my being content with a relatively loose, somewhat post-positivist relationship between observed statistics and the mathematics of probability, but we've been negotiating this fine point for a while. All this move to fields, and lattices, has more-or-less started to happen in the last ten years (although there's also Stochastic Electrodynamics, dating from the 60s, and Nelson's approach, too, from the 70s, but these are arguably problematic because they are preoccupied with fermions being particles, bosons being fields, and these programs, although I believe always continuing, have had significant hiatuses).
As far as all these different approaches are concerned, I consider that mine has the most to gain from comparison with QFT, in a Correspondence Principle sort of way, because I can even show that a free complex quantum field is empirically equivalent to a free random field, so I'm especially happy to see NW talking about CP. Nonetheless, I would only claim that my approach gives a useful counterpoint to stochastic or lattice methods, not that my approach is correct. I wish not to claim that the world is continuous rather than discrete, for example. A random field, properly speaking, is only an indexed set of random variables, it is only associated with a continuous space-time if we specifically take the index set to be the Schwarz space of functions on space-time (or some other well enough controlled function space on space-time).
So why should NW be pulled up for this rather than someone else, given that almost no-one pays any attention to how their use of particle-talk conditions their thinking? Their fault, I suppose, is that they mention fields so glibly, without thinking about how rich the seam is, and proceed with a discussion that would make almost no sense if they tried to accommodate both particle and field ways of thinking. At the very least, their conclusions would have to be hedged with a statement such as "if we think only in terms of particles, ...".
NW's discussion of Bell inequalities is similarly conventional. If they did the job properly, they would know that the flow of ideas surrounding Bell inequalities has been shifting dramatically over the last 10 years, with roots that go back to about 1980. The relative significances of contextuality and of locality are gradually being teased out more and more clearly. Anybody trying to talk about Bell inequalities should at least acknowledge those currents, and again they should either accommodate the various possibilities or explicitly hedge their conclusions.
As the last sentence of the abstract says, "Whether we discuss physics in terms of particles or in terms of events and (random) fields leads to differences that a glance would be well to notice." Perhaps I might add, even more facetiously, "or at least what is not noticed ought to be mentioned", but that would go far enough that I imagine the editors would have sent it back to me unrefereed. As it is, my comment is with referees; I hope they see that my criticism is constructive.
"classical fields" [...] "continuous fields" [...] "random fields" [...]
When I first read NW's sentence where they mention "continuous fields"
my first thought was "what _precisely_ do they mean by that phrase"?
(Such pedantic detail becomes important in discussions about
"introducing probability in a mathematically correct way"...)
So let me ask you the question...
You've explained in earlier posts what you mean by a random field
(i.e., an inf-dim commutative *-algebra, with basis elements indexed
from a space of well-behaved functions over spacetime, such that
state functionals over this algebra make sense).
What then are your definitions of the phrases "classical field"
and "continuous field" ? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8105813264846802, "perplexity": 861.3728278233418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704133142/warc/CC-MAIN-20130516113533-00054-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://link.springer.com/chapter/10.1007%2F3-540-55599-4_120 | Lecture Notes in Computer Science Volume 605, 1992, pp 725-732
Date: 14 Jul 2005
Repeated matrix squaring for the parallel solution of linear systems
Abstract
Given an n×n nonsingular linear system Ax=b, we prove that the solution x can be computed in parallel time ranging from Ω(log n) to O(log² n), provided that the condition number, μ(A), of A is bounded by a polynomial in n. In particular, if μ(A) = O(1), a time bound O(log n) is achieved. To obtain this result, we reduce the computation of x to repeated matrix squaring and prove that a number of steps independent of n is sufficient to approximate x up to a relative error 2^(−d), d=O(1). This algorithm has both theoretical and practical interest, achieving the same bound as previously published parallel solvers while being far simpler.
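The abstract gives no pseudocode, but the central device, summing a convergent Neumann series with a number of matrix squarings that is logarithmic in the number of terms, can be sketched as follows. This is only an illustration of the idea; the normal-equations scaling used here is one simple way to guarantee convergence and is not necessarily the scheme analysed in the paper.

```python
import numpy as np

def solve_by_repeated_squaring(A, b, m=40):
    """Approximate x = A^{-1} b by summing a Neumann series with m squaring steps (2^m terms)."""
    B = A.T @ A                          # symmetric positive definite for nonsingular A
    c = A.T @ b
    alpha = np.linalg.norm(B, np.inf)    # upper bound on the largest eigenvalue of B
    M = np.eye(len(b)) - B / alpha       # spectral radius < 1, so sum_k M^k converges to alpha*B^{-1}
    S = np.eye(len(b))                   # running partial sum of the series
    for _ in range(m):                   # S <- S(I + M), M <- M^2 : doubles the number of terms
        S = S + S @ M
        M = M @ M
    return (S @ c) / alpha

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)     # a comfortably well-conditioned test matrix
b = rng.standard_normal(n)
x = solve_by_repeated_squaring(A, b)
print(np.linalg.norm(A @ x - b))                    # small residual (roundoff-level for this A)
```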
This work has been partly supported by the Italian National Research Council, under the “Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo”, subproject 2 “Processori dedicati”. Part of this work was done while the first author was with the Istituto di Elaborazione dell'Informazione, Consiglio Nazionale delle Ricerche, Pisa (Italy). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9320192933082581, "perplexity": 1038.314701872396}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461359.90/warc/CC-MAIN-20150226074101-00143-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=196081 | # Concentration of ammonia in a solution
by sveioen
Tags: ammonia, concentration, solution
P: 14 Hello all, I had chemistry a long time ago, but now I am very rusty at it so I am hoping you can get me started with this problem I have; A particular solution of ammonia (Kb = 1.8 x 10-5) has a pH of 8.3. What is the concentration of ammonia in this solution? Is it the concentration of NH3 I have to find? I know I can find [OH-] since I know the pH, but what does the final equation look like? Something like $$Kb=[OH-][NH4+]/[NH3]$$? Thank you for any help!
P: 76 NH4 and OH- is going to have same amount of equilibrium concentration gained from the NH3. So If you know the pH, then how do you find the pOH, and what is the concentration of OH-? Multiply both sides by NH3 and divide bothsides by Kb. What happens?
P: 14 Ok, so pOH = 14 - pH = 14 - 8,3 = 5,7. Concentration of OH- and NH4 is therefore $$1,995 \times 10^{-6}$$? And then $$[NH3] = \frac{[OH^-][NH4^+]}{K_b}$$?
P: 76 Concentration of ammonia in a solution So plug the values and see what you get, I hope this answer agree with the true answer, does it?? If not tell me.
P: 14 I got $$2,21 \times 10^{-7}$$, which seems reasonable I guess. Maybe a bit low?!
P: 76 You don't have the answer? It should be reasonable right? because its the equilibrum concentration right?
P: 14 Nope dont have answer (yet) :(, but it seems kinda right.. Probably is equilibrum concentration..
P: 1,521
Quote by sveioen I got $$2,21 \times 10^{-7}$$, which seems reasonable I guess. Maybe a bit low?!
I got $$2,21 \times 10^{-6}$$ instead.
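For what it's worth, here is the arithmetic spelled out (a quick check that assumes [NH4+] ≈ [OH-] and neglects the autoionization of water); the two answers above may simply be the equilibrium [NH3] versus the total ammonia concentration:

```python
Kb, pH = 1.8e-5, 8.3
pOH = 14.0 - pH                 # 5.7
OH = 10.0 ** (-pOH)             # ~2.0e-6 M, taken equal to [NH4+]
NH3_eq = OH * OH / Kb           # equilibrium [NH3] ~ 2.2e-7 M
NH3_total = NH3_eq + OH         # total ammonia ([NH3] + [NH4+]) ~ 2.2e-6 M
print(f"[OH-]={OH:.2e}  [NH3]eq={NH3_eq:.2e}  total={NH3_total:.2e}")
```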
Related Discussions Biology, Chemistry & Other Homework 3 Biology, Chemistry & Other Homework 6 Biology, Chemistry & Other Homework 2 Biology, Chemistry & Other Homework 3 Chemistry 6 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8860188722610474, "perplexity": 1292.9459322044977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500837094.14/warc/CC-MAIN-20140820021357-00237-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.chemeurope.com/en/encyclopedia/Fin_(extended_surface).html | My watch list
# Fin (extended surface)
In the study of heat transfer, a fin is a surface that extends from an object to increase the rate of heat transfer to or from the environment by increasing convection. The amount of conduction, convection, or radiation of an object determines the amount of heat it transfers. Increasing the temperature difference between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Sometimes it is not economical or it is not feasible to change the first two options. Adding a fin to an object, however, increases the surface area and can sometimes be an economical solution to heat transfer problems.
## Simplified Case
To create a simplified equation for the heat transfer of a fin, many assumptions need to be made.
Assume:
1. Steady-state conditions
2. Constant material properties (independent of temperature)
3. No radiative heat transfer
4. No internal heat generation
5. One-dimensional conduction
6. Uniform cross-sectional area
7. Uniform convection across the surface area
With these assumptions, the conservation of energy can be used to create an energy balance for a differential cross section of the fin.[1]
$q_x=q_{x+dx}+dq_{conv}\,$
Fourier’s law states that
$q_x=-kA_c \left ( \frac{dT}{dx} \right )$,
where Ac is the cross-sectional area of the differential element.[2] Therefore the conduction rate at x+dx can be expressed as
$q_{x+dx}=q_x+\left ( \frac{dq_x}{dx} \right )dx$
Hence, it can also be expressed as
$q_{x+dx}=-kA_c\left ( \frac{dT}{dx} \right )-k\frac{d}{dx} \left ( A_c\frac{dT}{dx} \right )dx$.
Since the equation for heat flux is
$q''=h\left (T_s-T_\infty\right )$
then dqconv is equal to
$h\,dA_s \left(T-T_\infty\right)$
where As is the surface area of the differential element. By substitution it is found that
$\frac{d^2T}{dx^2} = -\left(\frac{1}{A_c}\frac{dA_c}{dx}\right)\frac{dT}{dx} + \left(\frac{1}{A_c} \frac{h}{k} \frac{dA_s}{dx}\right)\left(T - T_\infty\right)$
This is the general equation for convection from extended surfaces. Applying certain boundary conditions will allow this equation to simplify.
## Four Uniform Cross-sectional Area Cases
For all four cases, the above equation will simplify because the area is constant and
$\frac {dA_s}{dx} = P$
where P is the perimeter of the cross-sectional area. Thus, the general equation for convection from extended surfaces with constant cross-sectional area simplifies to
$\frac{d^2T}{dx^2}=\frac{hP}{kA_c}\left(T-T_\infty\right)$.
The solution to the simplified equation is
$\theta(x) = C_1e^{mx} + C_2e^{-mx}$
where $m^2=\frac{hP}{kA_c}$
and $\theta(x)=T(x)-T_\infty$.
The constants C1 and C2 can be found by applying the proper boundary conditions. All four cases have the boundary condition T(x = 0) = Tb for the temperature at the base. The boundary condition at x = L, however, is different for all of them, where L is the length of the fin.
For the first case, the second boundary condition is that there is free convection at the tip. Therefore,
$hA_c\left(T(L)-T_\infty\right)=-kA_c\left.\left(\frac{dT}{dx}\right)\right\vert_{x=L}$
which simplifies to
$h\theta(L)=-k\left.\frac{d\theta}{dx}\right\vert_{x=L}$
Knowing that
$\theta_b=T_{base}-T_\infty$,
the equations can be combined to produce
$h\left(C_1e^{mL}+C_2e^{-mL}\right)=km\left(C_2e^{-mL}-C_1e^{mL}\right)$
C1 and C2 can be solved to produce the temperature distribution, which is in the table below. Then applying Fourier’s law at the base of the fin, the heat transfer rate can be found.
Similar mathematical methods can be used to find the temperature distributions and heat transfer rates for other cases. For the second case, the tip is assumed to be adiabatic or completely insulated. Therefore at x=L,
$\frac{d\theta}{dx}=0$
because heat flux is 0 at an adiabatic tip. For the third case, the temperature at the tip is held constant. Therefore the boundary condition is
$\theta(L) = \theta_L$. For the fourth and final case, the fin is assumed to be infinitely long. Therefore the boundary condition is
$\lim_{L\rightarrow \infty} \theta_L=0\,$.
The temperature distributions and heat transfer rates can then be found for each case.
Temperature Distribution and Heat Transfer Rate for Fins of Uniform Cross Sectional Area
| Case | Tip Condition (x=L) | Temperature Distribution | Fin Heat Transfer Rate |
|------|---------------------|--------------------------|------------------------|
| A | Convection heat transfer | $\frac{\theta}{\theta_b}=\frac{\cosh{m(L-x)}+\left(\frac{h}{mk}\right)\sinh{m(L-x)}}{\cosh{mL}+\left(\frac{h}{mk}\right)\sinh{mL}}$ | $\sqrt{hPkA_c}\theta_b\frac{\sinh{mL}+(h/mk)\cosh{mL}}{\cosh{mL}+(h/mk)\sinh{mL}}$ |
| B | Adiabatic | $\frac{\theta}{\theta_b}=\frac{\cosh{m(L-x)}}{\cosh{mL}}$ | $\sqrt{hPkA_c}\theta_b\tanh{mL}$ |
| C | Constant Temperature | $\frac{\theta}{\theta_b}=\frac{\frac{\theta_L}{\theta_b}\sinh{mx}+\sinh{m(L-x)}}{\sinh{mL}}$ | $\sqrt{hPkA_c}\theta_b\frac{\cosh{mL}-\frac{\theta_L}{\theta_b}}{\sinh{mL}}$ |
| D | Infinite Fin Length | $\frac{\theta}{\theta_b}=e^{-mx}$ | $\sqrt{hPkA_c}\theta_b$ |
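As a numerical illustration of the table above, the case B (adiabatic tip) formulas can be evaluated directly. The fin dimensions and coefficients below are made-up example values, not taken from any reference.

```python
import numpy as np

# Hypothetical aluminium pin fin: k in W/(m K), h in W/(m^2 K), diameter D and length L in m
k, h, D, L = 200.0, 25.0, 0.005, 0.10
T_b, T_inf = 100.0, 25.0

P, A_c = np.pi * D, np.pi * D ** 2 / 4
m = np.sqrt(h * P / (k * A_c))                   # = 10 m^-1 here, so mL = 1
theta_b = T_b - T_inf

x = np.linspace(0.0, L, 6)
theta = theta_b * np.cosh(m * (L - x)) / np.cosh(m * L)     # case B temperature distribution
q_B = np.sqrt(h * P * k * A_c) * theta_b * np.tanh(m * L)   # case B heat transfer rate, W
q_D = np.sqrt(h * P * k * A_c) * theta_b                    # case D (infinite fin) limit, W
print(np.round(T_inf + theta, 1))     # fin temperatures from base to tip
print(round(q_B, 2), round(q_D, 2))   # q_B approaches q_D as mL grows
```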
## Fin Performance
Fin performance can be described in three different ways. The first is fin effectiveness. It is the ratio of the fin heat transfer rate to the heat transfer rate of the object if it had no fin. The formula for this is
$\varepsilon_f = \frac{q_f}{hA_{c,b}\theta_b}$,
where Ac,b is the fin cross-sectional area at the base. Fin performance can also be characterized by fin efficiency. This is the ratio of the fin heat transfer rate to the heat transfer rate of the fin if the entire fin were at the base temperature.
$\eta_f = \frac{q_f}{hA_f\theta_b}$
Af in this equation is equal to the surface area of the fin. Fin efficiency will always be less than one. This is because assuming the temperature throughout the fin is at the base temperature would increase the heat transfer rate. The third way fin performance can be described is with overall surface efficiency.
$\eta_o=\frac{q_t}{hA_t\theta_b}$,
where At is the total area and qt is the sum of the heat transfer rates of all the fins. This is the efficiency for an array of fins.
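Continuing the same made-up pin fin (assuming an adiabatic tip and neglecting the tip area), the effectiveness and efficiency work out as follows; for this geometry the efficiency reduces to tanh(mL)/(mL).

```python
import numpy as np

k, h, D, L = 200.0, 25.0, 0.005, 0.10        # same hypothetical pin fin as above
P, A_c = np.pi * D, np.pi * D ** 2 / 4
m = np.sqrt(h * P / (k * A_c))
theta_b = 75.0

q_f = np.sqrt(h * P * k * A_c) * theta_b * np.tanh(m * L)   # adiabatic-tip fin heat rate
eps_f = q_f / (h * A_c * theta_b)            # effectiveness: compared with the bare base area
A_f = P * L                                  # fin surface area, tip neglected
eta_f = q_f / (h * A_f * theta_b)            # efficiency: compared with an isothermal fin
print(round(eps_f, 1), round(eta_f, 3), round(np.tanh(m * L) / (m * L), 3))  # eta_f = tanh(mL)/(mL)
```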
## Fin Uses
Fins are most commonly used in heat exchanging devices such as radiators in cars and heat exchangers in power plants.[3][4] They are also used in newer technology such as hydrogen fuel cells.[5] Nature has also taken advantage of the phenomena of fins. The ears of jackrabbits act as fins to release heat from the blood that flows through them.[6]
## References
• Incropera, Frank; DeWitt, David P., Bergman, Theodore L., Lavine, Adrienne S. (2007). Fundamentals of Heat and Mass Transfer, Sixth Edition, New York: John Wiley & Sons, 2-168. ISBN 0-471-45728-0. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8950493931770325, "perplexity": 661.3612706264886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646269.50/warc/CC-MAIN-20141024030046-00229-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://mathhelpforum.com/new-users/227108-second-derivative.html | # Math Help - Second Derivative
1. ## Second Derivative
If the second derivative is continually increasing over the interval [-2,1] on h(x), what does that tell us about the first derivative, h'(x)? Thanks!
2. ## Re: Second Derivative
the first derivative, h'(x), will be monotonically increasing on that interval.
3. ## Re: Second Derivative
Originally Posted by state
If the second derivative is continually increasing over the interval [-2,1] on h(x), what does that tell us about the first derivative, h'(x)?
Consider the function: $h(x)=-e^{-x}$. See its plot here.
Please note that $h(x)=h''(x)$, hence the second derivative is increasing on $[-2,1]$ but what about $h'(x)=e^{-x}~?$.
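A quick symbolic check of this counterexample (added just for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
h = -sp.exp(-x)
hpp = sp.diff(h, x, 2)
print(sp.simplify(hpp - h))          # 0: indeed h'' = h
print(sp.diff(hpp, x).is_positive)   # True: h''' > 0, so h'' is increasing on [-2, 1]
print(hpp.is_negative)               # True: h'' < 0, so h' is actually *decreasing* there
```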
4. ## Re: Second Derivative
ignore post #2. It's incorrect. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9118662476539612, "perplexity": 1955.5467116269035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738095178.99/warc/CC-MAIN-20151001222135-00249-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://nips.cc/Conferences/2022/ScheduleMultitrack?event=55137 | Timezone: »
Poster
Your Transformer May Not be as Powerful as You Expect
Shengjie Luo · Shanda Li · Shuxin Zheng · Tie-Yan Liu · Liwei Wang · Di He
Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #219
Relative Positional Encoding (RPE), which encodes the relative distance between any pair of tokens, is one of the most successful modifications to the original Transformer. As far as we know, theoretical understanding of the RPE-based Transformers is largely unexplored. In this work, we mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence functions. One may naturally assume the answer is in the affirmative---RPE-based Transformers are universal function approximators. However, we present a negative result by showing there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is. One key reason lies in that most RPEs are placed in the softmax attention that always generates a right stochastic matrix. This restricts the network from capturing positional information in the RPEs and limits its capacity. To overcome the problem and make the model more powerful, we first present sufficient conditions for RPE-based Transformers to achieve universal function approximation. With the theoretical guidance, we develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions. Therefore, the corresponding URPE-based Transformers become universal function approximators. Extensive experiments covering typical architectures and tasks demonstrate that our model is parameter-efficient and can achieve superior performance to strong baselines in a wide range of applications. The code will be made publicly available at https://github.com/lsj2408/URPE. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9085325002670288, "perplexity": 1802.8025938988958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00006.warc.gz"} |
http://cpr-astrophsr.blogspot.com/2013/01/13013813-andrew-j-steffl-et-al.html | ## Cassini UVIS observations of the Io plasma torus. II. Radial variations [PDF]
Andrew J. Steffl, Fran Bagenal, A. Ian F. Stewart
On January 14, 2001, shortly after the Cassini spacecraft's closest approach to Jupiter, the Ultraviolet Imaging Spectrometer (UVIS) made a radial scan through the midnight sector of the Io plasma torus. The Io torus has not been previously observed at this local time. The UVIS data consist of 2-D spectrally dispersed images of the Io plasma torus in the wavelength range of 561 Å-1912 Å. We developed a spectral emissions model that incorporates the latest atomic physics data contained in the CHIANTI database in order to derive the composition of the torus plasma as a function of radial distance. Electron temperatures derived from the UVIS torus spectra are generally less than those observed during the Voyager era. We find the torus ion composition derived from the UVIS spectra to be significantly different from the composition during the Voyager era. Notably, the torus contains substantially less oxygen, with a total oxygen-to-sulfur ion ratio of 0.9. The average ion charge state has increased to 1.7. We detect S V in the Io torus at the 3σ level. S V has a mixing ratio of 0.5%. The spectral emission model used can approximate the effects of a non-thermal distribution of electrons. The ion composition derived using a kappa distribution of electrons is identical to that derived using a Maxwellian electron distribution; however, the kappa distribution model requires a higher electron column density to match the observed brightness of the spectra. The derived value of the kappa parameter decreases with radial distance and is consistent with the value of κ=2.4 at 8 RJ derived by the Ulysses URAP instrument (Meyer-Vernet et al., 1995). The observed radial profile of electron column density is consistent with a flux tube content, NL^2, that is proportional to r^-2.
View original: http://arxiv.org/abs/1301.3813 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9875509142875671, "perplexity": 2114.9094970648616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826283.88/warc/CC-MAIN-20171023183146-20171023203146-00739.warc.gz"} |
https://iwaponline.com/jwh/article-abstract/doi/10.2166/wh.2019.103/71710/Monitoring-hospital-wastewaters-for-their-probable | ## Abstract
Hospitals' effluents contain a considerable amount of chemicals. Considering the significant volume of wastewater discharged by hospitals, the presence of these chemicals represents a real threat to the environment and human health. Thus, the aim of this study was to evaluate the in vivo and in vitro genotoxicities of three wastewater effluents collected from Tunisian hospitals. The liver of Swiss albino male mice, previously treated with different doses of the hospital wastewaters, was used as a model to detect DNA fragmentation. Our results showed all the hospital effluents caused significant qualitative and quantitative hazards in hepatic DNA. The wastewater collected from Sfax hospital exhibited the highest genotoxic effect, which may be explained by the presence in this effluent of some toxic micropolluants. There was a significant increase in genotoxicity, proportionally to the concentration of effluent. However, the vitotox assay did not show any significant genotoxicity on Salmonella typhimurium TA104 in the presence or absence of microsomal fraction S9. The ratio gentox/cytox was lower than the threshold 1.5. This study assessed the toxicological risk issued from Tunisian hospital wastewaters, which is potentially very harmful, and it has been pointed out that wastewater treatment requires special attention. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194971680641174, "perplexity": 4638.0068797797085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00732.warc.gz"} |
https://www.missingdata.nl/missing-data/missing-data-methods/imputation-methods/ | # Single imputation methods
Single imputation denotes that the missing value is replaced by a single value. In this method the full sample size is retained. However, the imputed values are then treated as if they were the real values that would have been observed had the data been complete. When we have missing data, this is never the case. We can never be completely certain about imputed values. Therefore this missing-data uncertainty should be incorporated, as is done in multiple imputation.
## Mean imputation
Mean imputation is a method in which the missing value on a certain variable is replaced by the mean of the available cases. This method maintains the sample size and is easy to use, but the variability in the data is reduced, so the standard deviations and the variance estimates tend to be underestimated. The magnitude of the covariances and correlation also decreases by restricting the variability and this method often causes biased estimates, irrespective of the underlying missing data mechanism (Enders, 2010; Eekhout et al, 2013). In the example below you can see the relation between x and y when the mean value is imputed for the missing values on y.
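A small simulated example (made-up numbers, shown only to illustrate the effect described above): after mean imputation the standard deviation of y shrinks and its correlation with x weakens.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(0, 1, n)
y = 0.6 * x + rng.normal(0, 0.8, n)       # complete data
missing = rng.random(n) < 0.4             # 40% of y missing (MCAR in this toy example)

y_imp = y.copy()
y_imp[missing] = y[~missing].mean()       # mean imputation

print(round(y.std(), 2), round(y_imp.std(), 2))                    # SD shrinks
print(round(np.corrcoef(x, y)[0, 1], 2),
      round(np.corrcoef(x, y_imp)[0, 1], 2))                       # correlation weakens
```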
For missings on multi-item questionnaires, mean imputation can be applied at the item level. One option is to impute the missing item scores with the item mean for each item. In that case the average of the respondents with observed scores for each item is computed and that average value is imputed for respondents with a missing score. Another option is to impute the person mean. In that method the average of the observed item scores for each respondent is computed and that average is imputed for the item scores that are missing for that respondent. This option is also called average of the available items. Both these methods result in biased analysis results, especially when missing data are not MCAR (Eekhout et al. 2013). Nevertheless, these methods are often advised in questionnaire manuals.
Another method, that combines item mean imputation and person mean imputation is two-way imputation. In this method the imputed value is calculated by adding the person mean to the item mean and subtract the overall mean from that score (van Ginkel et al. 2010).
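A sketch of the two-way rule on a tiny, invented item-score matrix: the imputed score is the person mean plus the item mean minus the overall mean, each computed from the observed entries.

```python
import numpy as np

scores = np.array([[4.0, 5.0, 3.0],
                   [2.0, np.nan, 1.0],
                   [5.0, 4.0, 4.0]])      # rows = respondents, columns = items

person_mean = np.nanmean(scores, axis=1)
item_mean = np.nanmean(scores, axis=0)
overall_mean = np.nanmean(scores)

imputed = scores.copy()
rows, cols = np.where(np.isnan(scores))
imputed[rows, cols] = person_mean[rows] + item_mean[cols] - overall_mean
print(imputed)                            # the missing item score becomes 1.5 + 4.5 - 3.5 = 2.5
```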
## Regression imputation
In single regression imputation the imputed value is predicted from a regression equation. For this method the information in the complete observations is used to predict the values of the missing observations. Regression imputation assumes that the imputed values fall directly on a regression line with a nonzero slope, so it implies a correlation of 1 between the predictors and the imputed values of the outcome variable. In contrast to the mean substitution method, regression imputation will overestimate the correlations; the variances and covariances, however, are still underestimated.
Stochastic regression imputation aims to reduce this bias by an extra step of augmenting each predicted score with a residual term. This residual term is normally distributed with a mean of zero and a variance equal to the residual variance from the regression of the outcome on the predictors. The error that is added to the predicted value from the regression equation is thus drawn from a normal distribution. This way the variability in the data is preserved and parameter estimates are unbiased with MAR data. However, the standard error tends to be underestimated, because the uncertainty about the imputed values is not included, which increases the risk of type I errors (Enders, 2010).
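The difference between the two flavours is easy to see in a toy simulation (illustrative numbers only): plain regression imputation places imputed points exactly on the fitted line and shrinks the variance, while the stochastic version adds a residual draw and roughly restores it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0, 1, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)
missing = rng.random(n) < 0.3

b1, b0 = np.polyfit(x[~missing], y[~missing], 1)          # regression fitted on complete cases
resid_sd = (y[~missing] - (b0 + b1 * x[~missing])).std()  # residual standard deviation

y_reg, y_sto = y.copy(), y.copy()
y_reg[missing] = b0 + b1 * x[missing]                                           # deterministic
y_sto[missing] = b0 + b1 * x[missing] + rng.normal(0, resid_sd, missing.sum())  # + residual draw

print(round(y.var(), 2), round(y_reg.var(), 2), round(y_sto.var(), 2))  # variance: full, shrunk, restored
```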
## Matching methods
Hot-deck imputation is a technique where non-respondents are matched to resembling respondents and the missing value is imputed with the score of that similar respondent (Roth, 1994). Two hot-deck approaches are the distance function approach and the pattern matching approach. The distance function approach, or nearest neighbor approach, imputes the missing value with the score of the case with the smallest squared distance statistic to the case with the missing value. The matching pattern method is more common, where the sample is stratified in separate homogenous groups. The imputed value for the missing case is randomly drawn from cases in the same group (Fox-Wasylyshyn & El-Masri, 2005). Hot-deck imputation replaces the missing data by realistic scores that preserve the variable distribution. However it underestimates the standard errors and the variability (Roth, 1994). Hot-deck imputation is especially common in survey research (Little & Rubin, 2002).
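A minimal distance-function (nearest-neighbour) hot deck, matching on a single auxiliary variable, might look like this (toy data again):

```python
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(20, 70, 200)
income = 1000 + 40 * age + rng.normal(0, 300, 200)
missing = rng.random(200) < 0.25

donor_age, donor_inc = age[~missing], income[~missing]
imputed = income.copy()
for i in np.where(missing)[0]:
    nearest = np.argmin(np.abs(donor_age - age[i]))   # closest observed case on the matching variable
    imputed[i] = donor_inc[nearest]                   # borrow that donor's observed value
print(round(income[~missing].std(), 1), round(imputed.std(), 1))  # realistic values, similar spread
```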
## Last observation carried forward
The last value carried forward method is specific to longitudinal designs. This technique imputes the missing value with the last observation of the individual. This method makes the assumption that the observation of the individual has not changed at all since the last measured observation, which is mostly unrealistic (Wood, White & Thompson, 2004). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571830630302429, "perplexity": 734.2844542177678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00345.warc.gz"} |
https://stanleyriiks.wordpress.com/tag/goblin-like/ | ## FORTRESS FRONTIER By Myke Cole – Reviewed
Posted in Reviews, Uncategorized on August 5, 2013 by stanleyriiks
A bit of context for those who haven’t read the first book yet: this is like the X-Men, but with magic instead of mutations. When people develop magical abilities, or come up “latent”, things develop swiftly, and they have to report to the Police, otherwise they will be arrested as “Selfers”. There are some prohibited types of magic too dangerous to be allowed in society, the government use these “probes” as part of a secret service in an alternative dimension called the Source, to fight the indigenous population of goblin-like creatures. This is where Oscar Britton is sent, for training and indoctrination, when he develops “probe” magic and accidentally kills his father. But things don’t work out quite as planned for Britton and the Shadow Coven…
The first book in the Shadow Ops series left the remaining renegade sorcerers of Shadow Coven surrounded by goblins after FOB (Forward Operating Base) Frontier was partially destroyed, the witch Scylla was freed, and a massive battle had taken place. If you’re expecting this book to pick up straight after that then you’ll be disappointed. For the most part this is the story of Colonel Alan Bookbinder, Pentagon administrator, who turns up latent, but his magical abilities fail to fully develop. Despite this, he is sent to Frontier, where he becomes the second in command. The timelines of this and the first book overlap, as Bookbinder arrives before Britton and the rest of Shadow Coven go rogue. But when that does happen we see the other side of the action, as the base is left devastated and with no contact or supplies from the home plane (Earth). With rapidly depleted stocks of ammo and food the base becomes desperate and the goblin attacks increase daily. Bookbinder and a small team head out into the wilderness to try to find an Indian base hundreds of kilometres away, their lives on the line, and they are the only hope of survival for those left in the partially destroyed base.
Britton and Shadow Coven do play a part in the story, we get an update about half-way through and then Britton and his team are involved towards the end of the book, tying everything nicely together and preparing the reader for the third book in the series.
This is an SF military action thriller with magic thrown in for good measure. It doesn’t have the new and shiny feel of the first book, though, and it lacks my favourite character Marty (a Dobby-like good goblin), so this feels a little like the second book in a series (the necessary part between the beginning and the climax [is this a trilogy?]): continuing the story, an integral part, but not really adding a great deal.
Those who enjoyed the first book, and that should be plenty, because it’s pretty bloody good, should come to this with an open-mind and they’ll enjoy this slightly different but linked second part. Those expecting the continued story of Britton and Shadow Coven may be a little disappointed by the new direction.
Good fun, but not as good as the first book. I still want to know what happens next, and expect at some point a full-scale war between the sorcerers and the military, and possibly civil war!
## CONTROL POINT By Myke Cole – Reviewed
Posted in Reviews on April 10, 2013 by stanleyriiks
Oscar Britton is an army officer, and when he and his team are called to deal with a prohibited latent, they have a hell of a time. A latent is a person who develops magical powers. Certain magical powers are prohibited as too dangerous. One of Britton’s men is half killed by fire demons, and two teenage latents are shot dead, a school is burned, and Oscar has an argument with a sorcerer.
A few hours later Oscar has a latent episode, finding himself on the other side of the law. Knowing he has a prohibited magical power (opening wormhole-like gates) he goes on the run.
What follows is actually even more exciting and action packed than the beginning. As Oscar is “recruited” as a contractor for the army, and must face the tough challenges of learning to control his power on the front-line of a war with goblin-like creatures.
This doesn’t really have a slew of original ideas, but it’s put together very well, creating that newness and excitement. The military and magic are juxtaposed, and Oscar and his team work together to discover their powers and use them for good, despite the military’s view of them as weapons.
The book is a cross between Harry Potter and Stripes, or Biloxi Blues. The unique mix of military and magic makes this book. There is a little too much concentration on Oscar’s struggle to deal with his new power and his manipulation by the military, but that serves its own purpose and works within the context of the story. A kind of coming-of-age tale, using all the best bits of a military story, but a little fantasy thrown in for good measure. You can’t help but love little Marty, the goblin. There is plenty of action to speed things along.
Intelligent, exciting, pulse-racing and action packed. Full-on magical military mayhem. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180609941482544, "perplexity": 1251.2819800982056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00144.warc.gz"} |
http://mathonline.wikidot.com/chain-rule-to-functions-of-several-variables-examples-1 | Applying The Chain Rule to Functions of Several Variables Examples 1
# Applying The Chain Rule to Functions of Several Variables Examples 1
We saw some examples of applying the various chain rules for functions of several variables on the Applying The Chain Rule to Functions of Several Variables page. We will now look at some more examples. Once again, here are the generic chain rules for reference:
If $z = f(x, y)$ has continuous first partial derivatives and $x = x(t)$ and $y = y(t)$ are differentiable then:
(1)
\begin{align} \quad \frac{dz}{dt} = \frac{\partial z}{\partial x} \frac{dx}{dt} + \frac{\partial z}{\partial y} \frac{dy}{dt} \end{align}
If $z = f(x, y)$ is a two variable real-valued function with continuous first partial derivatives, and $x = x(s, t)$ and $y = y(s, t)$ are functions of $s$ and $t$ then:
(2)
\begin{align} \quad \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x} \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \frac{\partial y}{\partial s} \end{align}
(3)
\begin{align} \quad \frac{\partial z}{\partial t} = \frac{\partial z}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial z}{\partial y} \frac{\partial y}{\partial t} \end{align}
## Example 1
Suppose that $w = f(x, y)$, $x = g(r, s)$, $y = h(r, t)$, $r = k(s, t)$, and $s = m(t)$. Compute $\frac{\partial w}{\partial t}$.
This problem has many layers with it. We start with the outermost layer by applying the Chain Rule Type 2:
(4)
\begin{align} \quad \frac{\partial w}{\partial t} = \frac{\partial w}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial w}{\partial y} \frac{\partial y}{\partial t} \\ \quad \frac{\partial w}{\partial t} = f_1(x, y) \frac{\partial x}{\partial t} + f_2(x, y) \frac{\partial y}{\partial t} \end{align}
Now we must compute both $\frac{\partial x}{\partial t}$ and $\frac{\partial y}{\partial t}$. Applying the Chain Rule again and we have that:
(5)
\begin{align} \quad \frac{\partial x}{\partial t} = \frac{\partial x}{\partial r} \frac{\partial r}{\partial t} + \frac{\partial x}{\partial s} \frac{\partial s}{\partial t} \\ \quad \frac{\partial x}{\partial t} = g_1(r, s) \frac{\partial r}{\partial t} + g_2(r, s) \frac{\partial s}{\partial t} \\ \quad \frac{\partial x}{\partial t} = g_1(r, s) \left ( \frac{\partial r}{\partial s} \frac{\partial s}{\partial t} + \frac{\partial r}{\partial t} \frac{\partial t}{\partial t} \right ) + g_2(r, s) m'(t) \\ \quad \frac{\partial x}{\partial t} = g_1(r, s) \left ( k_1(s, t) m'(t) + k_2(s, t) \cdot 1\right ) + g_2(r, s) m'(t) \\ \quad \frac{\partial x}{\partial t} = g_1(r, s) \left ( k_1(s, t) m'(t) + k_2(s, t)\right ) + g_2(r, s) m'(t) \end{align}
(6)
\begin{align} \quad \frac{\partial y}{\partial t} = \frac{\partial y}{\partial r} \frac{\partial r}{\partial t} + \frac{\partial y}{\partial t} \frac{\partial t}{\partial t} \\ \quad \frac{\partial y}{\partial t} = h_1(r, t) \frac{\partial r}{\partial t} + h_2(r, t) \cdot 1 \\ \quad \frac{\partial y}{\partial t} = h_1(r, t) \left ( \frac{\partial r}{\partial s} \frac{\partial s}{\partial t} + \frac{\partial r}{\partial t} \frac{\partial t}{\partial t} \right ) + h_2(r, t) \\ \quad \frac{\partial y}{\partial t} = h_1(r, t) \left ( k_1(s, t) m'(t) + k_2(s, t) \cdot 1 \right ) + h_2(r, t) \\ \quad \frac{\partial y}{\partial t} = h_1(r, t) \left ( k_1(s, t) m'(t) + k_2(s, t) \right ) + h_2(r, t) \\ \end{align}
Putting this all together and we get that:
(7)
\begin{align} \quad \frac{\partial w}{\partial t} = f_1(x, y) \left [ g_1(r, s) \left ( k_1(s, t) m'(t) + k_2(s, t)\right ) + g_2(r, s) m'(t) \right ] + f_2(x, y) \left [ h_1(r, t) \left ( k_1(s, t) m'(t) + k_2(s, t) \right ) + h_2(r, t) \right ] \end{align}
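As a quick sanity check of (7), here is a small SymPy sketch (my own, with arbitrary concrete choices standing in for $f$, $g$, $h$, $k$ and $m$; any other smooth choices should behave the same way). It differentiates the fully composed expression directly and subtracts the chain-rule formula, which should simplify to zero.

```python
import sympy as sp

t, u, v = sp.symbols('t u v')

# arbitrary smooth stand-ins for f, g, h, k, m (assumptions for this check only)
f = lambda x, y: x**2 * sp.sin(y)
g = lambda r, s: r + 2*s
h = lambda r, tau: r * tau
k = lambda s, tau: s * tau + 1
m = lambda tau: tau**2

s_ = m(t); r_ = k(s_, t); x_ = g(r_, s_); y_ = h(r_, t)
w_ = f(x_, y_)
direct = sp.diff(w_, t)                      # dw/dt computed directly

def partial(F, i, args):
    # i-th partial derivative of F, evaluated along the composition
    d = sp.diff(F(u, v), (u, v)[i])
    return d.subs({u: args[0], v: args[1]})

f1, f2 = partial(f, 0, (x_, y_)), partial(f, 1, (x_, y_))
g1, g2 = partial(g, 0, (r_, s_)), partial(g, 1, (r_, s_))
h1, h2 = partial(h, 0, (r_, t)),  partial(h, 1, (r_, t))
k1, k2 = partial(k, 0, (s_, t)),  partial(k, 1, (s_, t))
mp = sp.diff(m(t), t)

formula = f1*(g1*(k1*mp + k2) + g2*mp) + f2*(h1*(k1*mp + k2) + h2)
print(sp.simplify(direct - formula))         # prints 0
```

The same pattern (differentiate the composition directly, then compare with the formula) is an easy way to check Example 2 below as well.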
## Example 2
Compute the partial derivative $\frac{\partial}{\partial x} f(2x, 3y)$ and $\frac{\partial}{\partial x} f(2y, 3x)$.
Let's first compute $\frac{\partial}{\partial x} f(2x, 3y)$. If we let $u(x) = 2x$ and $v(y) = 3y$ then we have that $f(2x, 3y) = f(u(x), v(y))$. Applying the Chain Rule and we have that:
(8)
\begin{align} \quad \frac{\partial}{\partial x} f(2x, 3y) = \frac{\partial}{\partial x} f(u, v) = \frac{\partial f}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x} = 2f_1(u, v) + 0f_2(u, v) = 2f_1(u,v) = 2f_1(2x, 3y) \end{align}
Now let's compute $\frac{\partial}{\partial x} f(2y, 3x)$. If we let $w(y) = 2y$ and $z(x) = 3x$ then we have that $f(2y, 3x) = f(w(y), z(x))$. Applying the Chain Rule and we have that:
(9)
\begin{align} \quad \frac{\partial}{\partial x} f(2y, 3x) = \frac{\partial}{\partial x} f(w, z) = \frac{\partial f}{\partial w} \frac{\partial w}{\partial x} + \frac{\partial f}{\partial z} \frac{\partial z}{\partial x} =0 f_1(w, z) + 3f_2(w, z) = 3f_2(w, z) = 3f_2(2y, 3x) \end{align} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000077486038208, "perplexity": 593.2165001687272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320438.53/warc/CC-MAIN-20170625050430-20170625070430-00614.warc.gz"} |
https://www.proofwiki.org/wiki/Definition:Class_of_All_Cardinals | # Definition:Class of All Cardinals
## Definition
The class of all cardinals is the class consisting of all cardinals:
$\NN = \set {x \in \On: \exists y: x = \size y}$
where $\size y$ denotes the cardinal corresponding to the set $y$.
## Also see
• Results about the class of all cardinals can be found here. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9446742534637451, "perplexity": 1010.0300120204922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00360.warc.gz"} |
https://www.physicsforums.com/threads/jacobi-symbol.165338/ | # Jacobi Symbol
1. Apr 12, 2007
### nick727kcin
Jacobi Symbol - Binary
NOTE: if it's past 8:30 AM Eastern Time, don't worry about it. Thanks for the consideration
The Question:
Exercise 13.1. Develop a “binary” Jacobi symbol algorithm, that is, one
that uses only addition, subtractions, and “shift” operations, analogous to
the binary gcd algorithm in Exercise 4.1.
here's the algorithm that I came up with so far:
Note: it is right, except for the fact that there can't be any mod, division, multiplication, etc.
t := 1;
while (a > 0 or a < 0) do
.........while (a mod 2 = 0) do
................a := a/2;
................if (n mod 8 = 3) or (n mod 8 = 5) then t := -t;
.........if (a < n) then
................interchange(a,n);
................if (a mod 4 = 3) and (n mod 4 = 3) then t := -t;
.........a := (a-n)/2;
.........if (n mod 8 = 3) or (n mod 8 = 5) then t := -t;
if (n = 1) then return(t) else return(0).
Last edited: Apr 12, 2007
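One way to meet the "shifts only" requirement is to mirror the binary gcd: strip factors of 2 from a with right shifts, using the fact that (2|n) = -1 exactly when n is 3 or 5 (mod 8); swap a and n with the quadratic-reciprocity sign rule; and replace the division step by a subtract-then-shift, since both numbers are odd at that point. Below is a Python sketch of that idea (my own illustration, not the book's intended solution), sanity-checked against Euler's criterion for a small prime.

```python
def jacobi_binary(a, n):
    """Jacobi symbol (a|n) for odd n > 0 and a >= 0, using only comparisons,
    subtraction and shifts/bit-masks (no division or general mod)."""
    assert n > 0 and (n & 1) == 1
    t = 1
    while a != 0:
        while (a & 1) == 0:                    # strip factors of 2 from a
            a >>= 1
            if (n & 7) == 3 or (n & 7) == 5:   # (2|n) = -1 iff n = 3, 5 (mod 8)
                t = -t
        if a < n:                              # quadratic reciprocity, then swap
            a, n = n, a
            if (a & 3) == 3 and (n & 3) == 3:
                t = -t
        a = (a - n) >> 1                       # here a >= n and both are odd, so a-n is even
        if (n & 7) == 3 or (n & 7) == 5:       # account for that extra halving
            t = -t
    return t if n == 1 else 0

# sanity check against Euler's criterion for the prime 23
p = 23
for a in range(p):
    e = pow(a, (p - 1) // 2, p)
    assert jacobi_binary(a, p) == (0 if e == 0 else (1 if e == 1 else -1))
print("ok")
```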
Similar Discussions: Jacobi Symbol | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8942553400993347, "perplexity": 3705.6327543980883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00484.warc.gz"} |
https://www.reakkt.com/2014/06/let-wind-blow-or-forecasting-wind-power.html | ## Wednesday, June 11, 2014
### Let the wind blow or forecasting the wind power
I have mentioned recently, that the production of renewable energy sources is difficult to predict. Let's examine some of the challenges.
The difference between production and forecast can fluctuate wildly, sometimes exceeding +/-50% of the installed wind farm capacity.
Chart: differences between production and P50 forecast
(P50 forecast is the average expected energy production)
The distributions of differences between production and forecast have high kurtosis, and both tails are pretty long.
Chart: Distribution of differences between production and P50 forecast
(P50 forecast is the average expected energy production)
It seems the assumption of a normal distribution of production, customarily used in wind power forecasting, may not always be correct.
Chart: Assumed normal distribution of (long term) power production
(P50 forecast is the average expected energy production)
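One way to put a number on that claim (a minimal sketch, not the author's code): compute the sample excess kurtosis of the production-minus-P50 differences and run a normality test. The `errors` array below is synthetic heavy-tailed placeholder data standing in for those differences.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors = rng.standard_t(df=3, size=5000)   # placeholder for production - P50 differences

print("excess kurtosis:", stats.kurtosis(errors))               # roughly 0 for a normal sample
print("normality test p-value:", stats.normaltest(errors).pvalue)
```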
It shouldn't surprise then that production often exceeds both conservative P50 and aggressive P90 forecast levels.
Chart: Production (gray bars) vs. P05-P90 forecast band (red and green dotted lines)
On the other hand, production significantly lower than forecast is also not welcome. Since forecasts are the basis of the declared planned production levels on the Day-Ahead Market (DAM), all deviations require costly balancing.
The forecast spread, or the width of the forecast (difference between P90 and P05), which should reflect the forecast probability, does not always add any value.
Also, it doesn't help that in addition to the weather-related production forecast, one needs to deal with planned availability of a plant. The planned availability is connected among others with scheduled maintenance, but not everything always goes according to the plan.
As a result, a plant operator needs to deal with an additional random variable. It sometimes happens that the plant stops producing long before the planned availability goes to zero, and starts producing when the planned availability still equals zero.
Chart: Production (black line) vs. planned availability (grey area)
Weather is a factor that affects both production forecast and availability. While wind may be strong, suggesting high production, temperature may lead to operational problems that may not be fully reflected in planned availability. As a result, one may experience noticeable periodic disparity between forecast and production.
Chart: Average monthly difference between production and P50 forecast for different farms
Even with operational information, less than one year of data is too little to decide whether we are facing a seasonal effect, which should be adjusted for in the following years, or a one-time event.
To get better alignment between forecast and production the following directions seems promising:
• use energy storage
• develop better weather forecasting models
• include farm operational data it into the production forecast models
• utilize prices, and price forecasts, where adequate
For more about wind forecasting, see: https://pinboard.in/u:mjaniec/t:res/t:wind_forecasting/ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9123900532722473, "perplexity": 4443.48009705183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00315.warc.gz"}
http://math.stackexchange.com/questions/575266/how-to-check-if-the-series-sum-n-0-infty-sqrtn1-sqrtn-is-converge/575288 | # How to check if the series $\sum_{n=0}^{\infty} \sqrt{n+1}-\sqrt{n}$ is convergent
or divergent??
I tried a few tests, but I didn't succeed in determining whether the series is convergent or divergent...
$$\sum_{n=0}^{\infty} \sqrt{n+1}-\sqrt{n}$$
Thank you!
-
This problem is not well-defined. Take $n=0$. Are we allowing complex-valued summands? – user28877 Nov 20 '13 at 22:31
This is just a telescoping series. So what do you get as a result? Then just turn this into rigour. – Chris K Nov 20 '13 at 22:34
@YoavFridman, when you take the sum from $0$ to $N$, you get $\sqrt{N} - \sqrt{0} = \sqrt{N}$. Now we can always take $k$ sufficiently large such that $|\sqrt{N} - \sqrt{k}|>\varepsilon$ where we let $\varepsilon>0$. So, if you treat the series like a sequence, it is not Cauchy. So, by contrapositive, the series can't be convergent. – Chris K Nov 20 '13 at 22:41
It's telescoping. I don't see why anyone would approach the problem in a more complicated way. – Doc Nov 20 '13 at 22:49
A correction to my earlier comment... should be $\sqrt{N+1}$ and not $\sqrt{N}$ but the idea stays the same. – Chris K Nov 20 '13 at 22:53
Let $S_n$ be the sequence of partial sums: $$S_n = \sum_{k=0}^n \sqrt{k+1} - \sqrt{k}$$ It is easy to see that $S_0 = 1$ and by telescoping $$S_n = \sqrt{n+1}$$ Since convergence of a series is defined through convergence of the partial sums and since $S_n$ obviously diverges, the series diverges as well.
$$a_n=\sqrt{n+1}-\sqrt{n}=\frac{1}{ \sqrt{n+1}+\sqrt{n}}\sim_{n\to \infty} \frac{1}{2\sqrt{n}}=b_n$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9728900194168091, "perplexity": 256.9488125674888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736672328.14/warc/CC-MAIN-20151001215752-00036-ip-10-137-6-227.ec2.internal.warc.gz"} |
http://www.skyandtelescope.com/astronomy-news/globular-clusters-faint-galaxies/ | # Globular Clusters for Faint Galaxies
The origin of ultra-diffuse galaxies (UDGs) has posed a long-standing mystery for astronomers. New observations of several of these faint giants with the Hubble Space Telescope are now lending support to one theory.
This Hubble image of Dragonfly 44, an extremely faint galaxy, reveals that it is surrounded by dozens of compact objects that are likely globular clusters.
van Dokkum et al. 2017
### Faint-Galaxy Mystery
Hubble images of Dragonfly 44 (top) and DFX1 (bottom). The right panels show the data with greater contrast and extended objects masked.
van Dokkum et al. 2017
UDGs — large, extremely faint spheroidal objects — were first discovered in the Virgo galaxy cluster roughly three decades ago. Modern telescope capabilities have resulted in many more discoveries of similar faint galaxies in recent years, suggesting that they are a much more common phenomenon than we originally thought.
Despite the many observations, UDGs still pose a number of unanswered questions. Chief among them: what are UDGs? Why are these objects the size of normal galaxies, yet so dim? There are two primary models that explain UDGs:
1. UDGs were originally small galaxies, hence their low luminosity. Tidal interactions then puffed them up to the large size we observe today.
2. UDGs are effectively “failed” galaxies. They formed the same way as normal galaxies of their large size, but something truncated their star formation early, preventing them from gaining the brightness that we would expect for galaxies of their size.
Now a team of scientists led by Pieter van Dokkum (Yale University) has made some intriguing observations with Hubble that lend weight to one of these models.
### Globulars Galore
Globulars observed in 16 Coma-cluster UDGs by Hubble. The top right panel shows the galaxy identifications. The top left panel shows the derived number of globular clusters in each galaxy.
van Dokkum et al. 2017
Van Dokkum and collaborators imaged two UDGs with Hubble: Dragonfly 44 and DFX1, both located in the Coma galaxy cluster. These faint galaxies are both smooth and elongated, with no obvious irregular features, spiral arms, star-forming regions, or other indications of tidal interactions.
The most striking feature of these galaxies, however, is that they are surrounded by a large number of compact objects that appear to be globular clusters. From the observations, Van Dokkum and collaborators estimate that Dragonfly 44 and DFX1 have approximately 74 and 62 globulars, respectively — significantly more than the low numbers expected for galaxies of this luminosity.
Armed with this knowledge, the authors went back and looked at archival observations of 14 other UDGs also located in the Coma cluster. They found that these smaller and fainter galaxies don’t host quite as many globular clusters as Dragonfly 44 and DFX1, but more than half also show significant overdensities of globulars.
### Evidence of Failure
Main panel: relation between the number of globular clusters and total absolute magnitude for Coma UDGs (solid symbols) compared to normal galaxies (open symbols). Top panel: relation between effective radius and absolute magnitude. The UDGs are significantly larger and have more globular clusters than normal galaxies of the same luminosity.
van Dokkum et al. 2017
In general, UDGs appear to have more globular clusters than other galaxies of the same total luminosity, by a factor of nearly 7. These results are consistent with the scenario in which UDGs are failed galaxies: they likely have the halo mass to have formed a large number of globular clusters, but they were quenched before they formed a disk and bulge. Because star formation never got going in UDGs, they are now much dimmer than other galaxies of the same size.
The authors suggest that the next step is to obtain dynamical measurements of the UDGs to determine whether these faint galaxies really do have the halo mass suggested by their large numbers of globulars. Future observations will continue to help us pin down the origin of these dim giants.
### Citation
Pieter van Dokkum et al 2017 ApJL 844 L11. doi:10.3847/2041-8213/aa7ca2 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008069038391113, "perplexity": 2888.1853475290386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945493.69/warc/CC-MAIN-20180422041610-20180422061610-00310.warc.gz"} |
https://ashpublications.org/blood/article/12/2/189/27766/Reed-Sternberg-Cells-in-the-Peripheral-Blood | Abstract
A case of Hodgkin’s disease is presented, in which Reed-Sternberg cells were observed in the peripheral blood.
This content is only available as a PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305282831192017, "perplexity": 3945.884404147745}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00332.warc.gz"} |
http://math.stackexchange.com/questions/427200/method-for-solving-ode-with-power-series | # Method for solving ODE with power series
when trying to solve second order linear homogeneous variable coefficient ODEs using a power series method, there seem to be two different general forms cropping up in my notes. The first uses an ordinary point $$x_0$$ $$y = \sum_{m = 0}^{\infty}a_m(x-x_0)^m$$ The second uses a factor of $$x^r$$ $$y = x^r\sum_{m = 0}^{\infty}a_mx^m$$
How do I know which of these forms to use for my solution?
-
The second is always more general, i.e. if $r=0$ then you'll just recover the first ansatz. – Pedro Tamaroff Jun 22 '13 at 22:48
but if I have limited time (like in an exam) how can I tell if I can use the first one because that would save time? – Sam Jun 22 '13 at 23:15
The second solution can also be "centered" at $x_o$ by replacing $x$ with $x-x_o$ in the second formula. However, the convergence will only occur to the left or right of the singular point in considering the real interval of convergence. – James S. Cook Jun 22 '13 at 23:36
Your first solution is a power series. Your second solution is the Frobenius solution which allows for $r$ non-integer. There are 4 cases to consider for the second solution. The way in which the second solution is found differs slightly if:
1. you have distinct exponents which do not differ by an integer
2. you have distinct exponents which do differ by an integer
3. you have repeated exponents
4. you have complex exponents
The simplest example illustrating these is the Cauchy-Euler problem $ax^2y''+bxy'+cy=0$ which is the quintessential example of a second order ODE with a regular singular point.
To answer your question, if you seek a solution at an ordinary point use the power series solution. If you face a regular singular point then invoke the method of Frobenius.
-
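To make the two cases concrete, here is a small SymPy sketch (my own examples, not from the question). For an ordinary point it substitutes a truncated power series into the Airy equation $y'' - xy = 0$ and solves for the coefficients; for a regular singular point it computes the indicial roots of the Cauchy-Euler problem with $a=1$, $b=-1$, $c=1$ (my own choice), which lands in the repeated-exponent case 3 above.

```python
import sympy as sp

x, r = sp.symbols('x r')

# Ordinary point: power-series ansatz y = sum a_m x^m for the Airy equation y'' - x*y = 0
N = 8
a = sp.symbols(f'a0:{N}')
y_series = sum(a[m] * x**m for m in range(N))
residual = sp.expand(y_series.diff(x, 2) - x * y_series)
coeff_eqs = [sp.Eq(residual.coeff(x, m), 0) for m in range(N - 2)]
print(sp.solve(coeff_eqs, list(a[2:])))   # a2 = 0, a3 = a0/6, a4 = a1/12, a5 = 0, ...

# Regular singular point: indicial equation a*r*(r-1) + b*r + c = 0 for x^2 y'' - x y' + y = 0
print(sp.roots(r*(r - 1) - r + 1, r))     # {1: 2}: repeated exponent r = 1
```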
I know there is a different method for solving Euler-Cauchy equations, but if I didn't recognise a problem as being an Euler-Cauchy equation, could I use either the power series or frobenius method and still come up with the correct answer? – Sam Jun 23 '13 at 10:04
@Sam You look at $ar(r-1)+br+c=0$ for the Cauchy-Euler problem I mentioned in my post. This is the indicial equation from the Frobenius method. The wonderful thing about the Cauchy-Euler problem is that the rest of the Frobenius series vanishes. You just get the lowest order terms for the Frobenius solution. Sort of like solving $y^{n}(x)=0$ with a power series ansatz, it's a bit of an overkill since clearly an $n$-th order polynomial will do nicely. (well, just integrate $n$-fold times for that silly example) – James S. Cook Jun 23 '13 at 17:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8337176442146301, "perplexity": 219.87156184385458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830746.39/warc/CC-MAIN-20140820021350-00050-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/219101-drawing-vertical-line-through-circle-find-what-x-does-vertical-line-div.html | # Thread: Drawing vertical line through a circle to find for what x does that vertical line div
1. ## Drawing vertical line through a circle to find for what x does that vertical line div
Hi all, I would appreciate some help with the problem:
The problem states: "A building design consultant is going to make a 14 foot diameter window with a vertical cut through the glass so that 2/3 of the glass can be tinted. Where should this cut be made?"
As you can see in the above attachment, I was able to set up some sort of equation, but when I set that equation equal to 49*pi/3 and solved via wolfmath, I got about x = 6.83, which does not make sense, because the radius is 7 and that wouldn't give 1/3 if you draw the vertical line through it to divide the circle.
If you could help me out, I would really appreciate it.
Thank you,
2. ## Re: Drawing vertical line through a circle to find for what x does that vertical line
First note that the area of this circle would be \displaystyle \begin{align*} \pi \cdot 7^2 = 49\,\pi \textrm{ units}\, ^2 \end{align*}
If we place the circle on a set of axes centred at the origin, it has equation \displaystyle \begin{align*} x^2 + y^2 = 7^2 \end{align*} then draw in the vertical line \displaystyle \begin{align*} x = a \end{align*} somewhere to the right of the y-axis, we wish to know what value of x this is so that it splits the circle into areas in the ratio 2:1. So that means the area bounded between this line and the circle on the right will be \displaystyle \begin{align*} \frac{49 \, \pi}{3} \,\textrm{units}\,^2 \end{align*}, and since this is evenly distributed above and below the x-axis, the area above the x-axis is \displaystyle \begin{align*} \frac{49 \, \pi }{6} \, \textrm{units} \, ^2 \end{align*}.
So we can set up an integral for the area bounded by the circle, the x-axis and the vertical line \displaystyle \begin{align*} x = a \end{align*} as \displaystyle \begin{align*} A = \int_a^7{ \sqrt{ 49 - x^2 } \, dx } \end{align*}.
Equating this to the known area we have \displaystyle \begin{align*} \int_a^7{ \sqrt{ 49 - x^2 } \, dx } &= \frac{ 49 \, \pi }{ 6} \end{align*}.
In order to evaluate this integral and solve for the value of a, we need to make a trigonometric substitution \displaystyle \begin{align*} x = 7\sin{(\theta)} \implies dx = 7\cos{(\theta)}\,d\theta \end{align*} and note that when \displaystyle \begin{align*} x = a , \theta = \arcsin{ \left( \frac{a}{7} \right) } \end{align*} and when \displaystyle \begin{align*} x = 7 , \theta = \frac{\pi}{2} \end{align*}. Substituting gives
\displaystyle \begin{align*} \int_{\arcsin{ \left( \frac{a}{7} \right) } } ^{ \frac{\pi}{2} } { \sqrt{ 49 - \left[ 7\sin{(\theta)} \right] ^2 } \cdot 7\cos{(\theta)}\,d\theta } &= \frac{49\,\pi}{6} \\ 7 \int_{\arcsin{ \left( \frac{a}{7} \right) } }^{\frac{\pi}{2}} { \sqrt{ 49 \left[ 1 - \sin^2{ ( \theta ) } \right] } \cos{ ( \theta) } \, d\theta } &= \frac{49 \, \pi }{6} \\ 7 \int_{ \arcsin{ \left( \frac{a}{7} \right) } }^{ \frac{ \pi }{ 2 } }{ 7\cos^2{(\theta)}\,d\theta } &= \frac{ 49 \, \pi}{6} \\ \frac{ 49 }{2} \int_{ \arcsin{ \left( \frac{a}{7} \right) } }^{ \frac{ \pi }{2}}{ 1 + \cos{ (2\theta)} \,d\theta } &= \frac{ 49 \, \pi }{6} \\ \int_{\arcsin{\left( \frac{a}{7} \right) }}^{\frac{\pi}{2}}{1 + \cos{(2\theta)}\,d\theta} &= \frac{\pi}{3} \\ \left[ \theta + \frac{1}{2}\sin{(2\theta)} \right] _{\arcsin{\left( \frac{a}{7} \right) } }^{\frac{\pi}{2}} &= \frac{\pi}{3} \end{align*}
\displaystyle \begin{align*} \left[ \frac{\pi}{2} + \frac{1}{2} \sin{ \left( 2 \cdot \frac{\pi}{2} \right) } \right] - \left\{ \arcsin{ \left( \frac{a}{7} \right) } + \frac{1}{2} \sin{ \left[ 2\arcsin{ \left( \frac{a}{7} \right) } \right] } \right\} &= \frac{\pi}{3} \\ \frac{\pi}{2} - \arcsin{ \left( \frac{a}{7} \right) } - \sin{ \left[ \arcsin{ \left( \frac{a}{7} \right) } \right] } \,\sqrt{ 1 - \sin^2{ \left[ \arcsin{ \left( \frac{a}{7} \right) } \right] } } &= \frac{\pi}{3} \\ -\arcsin{ \left( \frac{a}{7} \right) } - \frac{a}{7} \, \sqrt{ 1 - \left( \frac{a}{7} \right) ^2 } &= \frac{\pi}{3} - \frac{\pi}{2} \\ -\arcsin{ \left( \frac{a}{7} \right) } - \frac{a}{7} \, \sqrt{ \frac{49 - a^2}{49} } &= -\frac{\pi}{6} \\ \arcsin{ \left( \frac{a}{7} \right) } + \frac{ a \, \sqrt{ 49 - a^2 }}{49} &= \frac{\pi}{6} \end{align*}
Unfortunately there is no way to evaluate the value of a here exactly, but you should be able to get a numerical answer using a CAS
3. ## Re: Drawing vertical line through a circle to find for what x does that vertical line
Brilliant! Thank you so much.
4. ## Re: Drawing vertical line through a circle to find for what x does that vertical line
If the ends of a chord drawn in a circle radius $r$ subtend an angle $\theta$ at the centre of the circle, then the area 'cut off' by the chord is $r^{2}(\theta - \sin \theta)/2.$
That means we need $r^{2}(\theta - \sin \theta)/2=\pi r^{2}/3,$ or, $\theta - \sin \theta=2\pi /3.$
That requires a numerical solution, which turns out to be $\theta = 2.60533.$
For a circle of radius $7,$ it follows that the centre of the chord should be at a distance of $1.8545$ (approx) from the centre of the circle.
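For the numerical step, here is a short sketch (my own, using SciPy): solve the equation from post #2 for $a$ with a bracketing root-finder, and cross-check it against the chord-distance formula in the post above.

```python
import numpy as np
from scipy.optimize import brentq

# post #2: arcsin(a/7) + a*sqrt(49 - a^2)/49 = pi/6
f = lambda a: np.arcsin(a / 7) + a * np.sqrt(49 - a**2) / 49 - np.pi / 6
print(brentq(f, 0, 7))                       # ~1.8545 feet from the centre

# post #4: theta - sin(theta) = 2*pi/3, distance of the chord = r*cos(theta/2)
theta = brentq(lambda t: t - np.sin(t) - 2 * np.pi / 3, 0, np.pi)
print(7 * np.cos(theta / 2))                 # same value
```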
5. ## Re: Drawing vertical line through a circle to find for what x does that vertical line
Very well done, in fact I was stuck at the same stage. I used the formula $\int \sqrt{a^2 - x^2}\,dx = \frac{x}{2}\sqrt{a^2 - x^2} + \frac{a^2}{2}\sin^{-1}\frac{x}{a} + c$ directly; the issue remains how to get a??
6. ## Re: Drawing vertical line through a circle to find for what x does that vertical line
You need to use a numerical method and/or technology such as a CAS. There is no way to isolate a exactly when it's both inside and outside of a transcendental function (unless it's a very special case where a solution sticks out at you). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999996542930603, "perplexity": 859.8044263614155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425254.88/warc/CC-MAIN-20170725142515-20170725162515-00005.warc.gz"} |
https://quant.stackexchange.com/questions/50842/risk-neutral-price-of-h-ex-t1x-t3 | # Risk-neutral price of $H=e^{X_T^1+X_T^3}$
Let $$B=(B_t^1,B_t^2,B_t^3)$$ be an $$\mathbb R^3$$-valued Brownian motion. Let $$r_t$$ (the risk-free rate) be bounded and deterministic. Consider the DISCOUNTED market $$d\overline X_t^1=\frac52dt+2dB_t^1-dB_t^2-dB_t^3$$ $$d\overline X_t^2=7dt+2dB_t^1+2dB_t^2-10dB_t^3$$ $$d\overline X_t^3=\frac72dt+4dB_t^1-3dB_t^2+dB_t^3$$ I have already found that the market is arbitrage-free.
I would like to find the risk-neutral price of the following claim: $$H=e^{X_T^1+ X_T^3}$$ (note: here the $$X_T^1,X_T^3$$ are not discounted), but I'm stuck. Any help please?
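One preliminary numerical check that may help before any pricing attempt (my own sketch, with the volatility matrix and drift vector read off the discounted dynamics above): verify that the market-price-of-risk equations $\sigma\theta=\mu$ are solvable, which is essentially what the no-arbitrage finding amounts to. Note that $\sigma$ comes out singular, so $\theta$ is not unique, which suggests the market is incomplete; that is worth keeping in mind when pricing $H$.

```python
import numpy as np

sigma = np.array([[2., -1.,  -1.],      # diffusion coefficients of the discounted X^1
                  [2.,  2., -10.],      # ... of X^2
                  [4., -3.,   1.]])     # ... of X^3
mu = np.array([2.5, 7.0, 3.5])          # discounted drifts

theta, _, rank, _ = np.linalg.lstsq(sigma, mu, rcond=None)
print("rank(sigma) =", rank)                 # 2, so sigma is singular
print("one market price of risk:", theta)
print("residual:", sigma @ theta - mu)       # ~0, so the system is consistent
```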
• Are there 3 different $X_t$ processes? In your notation it seems the same process. – Daneel Olivaw Jan 26 at 0:30
• Also what are the $B$s at the end of each equation? – Daneel Olivaw Jan 26 at 0:32
• @DaneelOlivaw yes sorry, now I edit it – Buddy_ Jan 26 at 8:02
• @DaneelOlivaw don't you have any suggestion? – Buddy_ Jan 26 at 15:57
• why not to use montecarlo to compute the price?? – Valometrics.com Jan 26 at 20:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164687395095825, "perplexity": 1276.0563063302834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00598.warc.gz"} |
http://cms.math.ca/cjm/kw/polynomial%20return%20times | # Search results
Search: All articles in the CJM digital archive with keyword polynomial return times
Results 1 - 1 of 1
1. CJM 2012 (vol 65 pp. 171)
Lyall, Neil; Magyar, Ákos
Optimal Polynomial Recurrence
Let $P\in\mathbb Z[n]$ with $P(0)=0$ and $\varepsilon\gt 0$. We show, using Fourier analytic techniques, that if $N\geq \exp\exp(C\varepsilon^{-1}\log\varepsilon^{-1})$ and $A\subseteq\{1,\dots,N\}$, then there must exist $n\in\mathbb N$ such that $\frac{|A\cap (A+P(n))|}{N}\gt \left(\frac{|A|}{N}\right)^2-\varepsilon.$ In addition to this we also show, using the same Fourier analytic methods, that if $A\subseteq\mathbb N$, then the set of $\varepsilon$-optimal return times $R(A,P,\varepsilon)=\left\{n\in \mathbb N \,:\,\delta(A\cap(A+P(n)))\gt \delta(A)^2-\varepsilon\right\}$ is syndetic for every $\varepsilon\gt 0$. Moreover, we show that $R(A,P,\varepsilon)$ is dense in every sufficiently long interval, in particular we show that there exists an $L=L(\varepsilon,P,A)$ such that $\left|R(A,P,\varepsilon)\cap I\right| \geq c(\varepsilon,P)|I|$ for all intervals $I$ of natural numbers with $|I|\geq L$ and $c(\varepsilon,P)=\exp\exp(-C\,\varepsilon^{-1}\log\varepsilon^{-1})$.
Keywords: Sarkozy, syndetic, polynomial return times. Category: 11B30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314067125320435, "perplexity": 705.17773629847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00147-ip-10-137-6-227.ec2.internal.warc.gz"}
http://link.springer.com/article/10.1007%2Fs10539-004-7044-0 | , Volume 20, Issue 4, pp 697-713
Parsimony and the Fisher–Wright debate
Abstract
In the past five years, there have been a series of papers in the journal Evolution debating the relative significance of two theories of evolution, a neo-Fisherian and a neo-Wrightian theory, where the neo-Fisherians make explicit appeal to parsimony. My aim in this paper is to determine how we can make sense of such an appeal. One interpretation of parsimony takes it that a theory that contains fewer entities or processes, (however we demarcate these) is more parsimonious. On the account that I defend here, parsimony is a ‘local’ virtue. Scientists’ appeals to parsimony are not necessarily an appeal to a theory’s simplicity in the sense of it’s positing fewer mechanisms. Rather, parsimony may be proxy for greater probability or likelihood. I argue that the neo-Fisherians appeal is best understood on this interpretation. And indeed, if we interpret parsimony as either prior probability or likelihood, then we can make better sense of Coyne et al. argument that Wright’s three phase process operates relatively infrequently. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052988052368164, "perplexity": 2470.7519775684773}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/7142-definition-limit.html | # Thread: Definition of Limit
1. ## Definition of Limit
There's a similar post on this, but I can't understand it and it seems flawed. So I'll try this one.
How do you prove that the lim (as x approaches a) of x^2=a^2?
2. Are you familiar with the delta-epsilon defintion of a limit? I don't want to go into it if it's not what you're looking for.
3. Yes I am.
4. Here's a good site that proves $\lim_{x \to 1}x^2+3=4$ It uses the same method so unless you want I won't type it all out.
http://www.math.ucdavis.edu/~kouba/C...l#SOLUTION%204
5. Thank you. Very helpful.
6. Ok. So the only thing I don't get now is how it works for something like the example I had, where the limit equals a variable (and the number the limit is approaching is also a variable). I'm having trouble making the connection from the problem I had to ones with real numbers.
How do you show abs(x^2-a^2) is less than epsilon? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597362637519836, "perplexity": 372.036339835185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105304.35/warc/CC-MAIN-20170819051034-20170819071034-00435.warc.gz"} |
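A standard way to finish (a sketch of the usual argument, not taken from the thread): factor the difference and pre-bound the $|x+a|$ factor by also requiring $\delta \le 1$:

$|x^2-a^2| = |x-a|\,|x+a| \le |x-a|\left(|x-a|+2|a|\right) < \delta\left(1+2|a|\right) \le \varepsilon \quad \text{whenever } |x-a| < \delta = \min\left(1,\ \frac{\varepsilon}{1+2|a|}\right).$

With that choice of $\delta$ the same three-step template as in the linked numerical example goes through with $a$ left as a variable.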
https://www.physicsforums.com/search/849561/ | # Search results
1. ### Irrational Numbers
H - Many thanks for a really helpful post! Maybe what I'm exploring is to do with the boundary between mathematical formalism and realism. I'm now a little more clear about one or two things and I'll shut up about this now. Btw - re the primes - I'll stop bothering you about this also. I've...
2. ### Irrational Numbers
Hurkyl? Anybody? I'm getting a little bit paranoid at the lack of response. Was that not an appropriate question here? I suppose it's not exactly a mathematical question. Or is it? It wasn't a trap anyway. I was trying to understand how mathematicians see these issues, exactly where they feel...
3. ### Irrational Numbers
Thanks. (I was careful to add 'in the everyday sense' when I used these words.) That's how I imagine points are usually defined, as the end point of a never ending process. I assume that they simply have to be defined in this sort of way. It's the issues this raises that interest me. Very...
4. ### Irrational Numbers
Hurkyl - I think I can accept what you say (and I do) without it altering my general point, which comes exactly from trying to understand what parts of the object really do correspond to the math and what parts are simply errors of approximation. For most objects there may be no problem being...
5. ### Irrational Numbers
Oh yes. I see. Thanks. In that case it's a good point. But I wouldn't agree.
6. ### Irrational Numbers
Differently to what? Damn silly question is the answer most people would give.
7. ### Irrational Numbers
Yes. Is it not a surreal fantasy about infinitely thin knives that produces a useable definition for infinitely 'thin' numbers? Whether such numbers exist (or whether numbers can be coherently defined in this way), I was suggesting, can be determined from examining the definition. I suppose...
8. ### Irrational Numbers
For me the question is not whether these numbers exist but what they actually are. Whether they exist would seem to depend on how we define them.
9. ### Is it possible to simplify the RH problem?
Okay CRG, I've decided to book some tuition in order to get to grips with the issues and will stop bothering you. I need to take a few steps back before trying to go forward again. Many thanks for your patience. Much appreciated. Regards Pete
10. ### Is it possible to simplify the RH problem?
Thanks - even if it's all hieroglyphics to me. I realise it's a struggle to talk about this with a mathematical duffer. I was wondering whether proving the zeros behave in a certain way is equivalent to proving that the relevant inputs have certain properties. But even if this question is...
11. ### Is it possible to simplify the RH problem?
Yes. Simplifying problems is a hobby. It works for the TPC, Russell's paradox and many other problems, (and it kept my business alive through many a crisis). I was wondering if it would work for RH. Seems highly unlikely at this point. CRG - For you the point about inputs and outputs may not...
12. ### Is it possible to simplify the RH problem?
I like to think I could understand a lot of the maths, yes, given time, but I know I could never understand all that would be required for this problem. I'm in complete awe of anyone who can understand it. I suppose I was asking if the zeta function is a map between inputs and outputs, such...
13. ### Is it possible to simplify the RH problem?
Thanks. I realise Goedel diagonalization is standard stuff. Couldn't understand the equations, which are also probably standard stuff. The last point seems slightly off-track since I don't want to prove that zeta has certain properties. My thought was that zeta merely reveals properties...
14. ### Irrational Numbers
Forgetting the OP then, I'd like to ask something about this. I see that a region may be well-defined, so that being a region would not necessarily entail that a number is ill-defined. (Is this what you meant?) But... couldn't we say it is ill-defined when we forget that it's is a region and...
15. ### Irrational Numbers
I can't follow most of this, but is there not a sense in which all numbers are ill-defined in the sense that they represent a region on the number line that can never be reduced to a point? In this way could the OP's question be something to do with the relationship between a continuum and a...
16. ### Is it possible to simplify the RH problem?
Hello everybody. It's my first post and I'm not a mathematician so please bear with me. I'll try to make it vaguely interesting. I'm fascinated by the problem of deciding the Riemann Hypothesis. The trouble is, I'm not clever enough to understand it. The zeta function may as well be martian... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543941974639893, "perplexity": 631.4785186982219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00367.warc.gz"} |
http://scitation.aip.org/content/aip/journal/jcp/138/5/10.1063/1.4775807 | Reaction coordinates, one-dimensional Smoluchowski equations, and a test for dynamical self-consistency
DOI: 10.1063/1.4775807
Affiliations:
1 Department of Chemical Engineering, University of California, Santa Barbara, California 93106, USA
2 Department of Chemistry and Biochemistry, University of California, Santa Barbara, California 93106, USA
3 van ‘t Hoff Institute for Molecular Sciences, University of Amsterdam, PO Box 94157, 1090 GD Amsterdam, The Netherlands
J. Chem. Phys. 138, 054106 (2013)
## Figures
FIG. 1.
Schematic depicting a projection of short trajectory swarm data (gray) from an initial point x 0 (black) to specific coordinates q 1 and q 2 at later times.
FIG. 2.
For a reaction coordinate whose dynamics follow a one-dimensional Smoluchowski equation, swarms of trajectories from different individual configurations on each isosurface will drift and diffuse similarly. Therefore, for each isosurface, the projection of the individual swarms should each resemble the combined projection of all swarms (depicted to the right of the free energy surface). In the case depicted above, the individual swarms behave differently from each other, and therefore differently from their combined projection.
FIG. 3.
If the dynamics of q follow a one-dimensional Smoluchowski equation, dynamical self-consistency should apply at all isosurfaces of q. (Left) For a poor reaction coordinate q(x) the value of q alone is not a good predictor of drift or diffusion from the configuration x. (Right) If q shows dynamical self-consistency for all isosurfaces of q from reactant A to product B, then q is an accurate reaction coordinate.
FIG. 4.
The solid curves are contours of the actual free energy landscape βF(q 1, q 2) with a saddle point at the round dot. The coordinate q 1 has been used as the initial coordinate, i.e., λ(x) = q 1(x). The dotted contours are curves of constant Λ. Note that Λ is peaked where the original landscape had a saddle point.
FIG. 5.
The free energy landscape for a model of nucleation where the nucleus size can change along either fast (n F ) and slow (n S ) mobility directions.
FIG. 6.
The free energy as a function of the initial coordinate λ = n F + n S .
FIG. 7.
Histograms of endpoints of 1000 trajectories 2.0τ after initiation. Each figure shows nine swarms initiated at nine different points on the free energy landscape. In (a) the anisotropy is s = 1.0, and in (b) the anisotropy is s = 0.1. Differences in the way the swarms drift are difficult to visually discern, but the dynamical self-consistency test can detect differences.
FIG. 8.
The trial reaction coordinate q can be represented by the angle θ between the direction of progress along q and the n F -axis.
FIG. 9.
exp[−Λ] weighted Kullback-Leibler divergences for different trial reaction coordinates (represented by θ) and for different values of the diffusion anisotropy, s = 1.0, 0.3, 0.1, 0.03.
FIG. 10.
Illustrating three mechanistic regimes that prevail for different degrees of diffusion anisotropy. When s ≈ 1, the diffusion tensor is isotropic and most pathways follow the minimum free energy path. When s < 1, but not extremely small, a two step nucleation mechanism prevails with initial motion along the slow coordinate before escape in the n F -direction. Finally, when s is extremely small the Berezhkovskii-Zitserman (BZ) regime prevails. In the BZ-regime, trajectories can escape in the n F -direction with the n S degree of freedom frozen.
FIG. 11.
Narrow tube type free energy landscape as a function of fast (n F ) and slow (n S ) coordinates. The saddle point is at (n F , n S ) = (20, 20).
FIG. 12.
Projections of the free energy onto different initial coordinates. (a) The initial coordinate was λ = n S . (b) The initial coordinate was λ = n F .
FIG. 13.
⟨ΔKL[q]⟩Λ for different coordinates (represented by the angle θ as shown in Figure ). (a) The initial coordinate was λ = n S . (b) The initial coordinate was λ = n F . In both cases the optimal coordinate (the minimum) rotates toward n S as the diffusion tensor becomes more anisotropic.
FIG. 14.
Contour plots of (a) the EVB potential V(x, y) with the minimum energy path between the reactant and product minima, (b) the potential Λ(x, y) constructed with λ chosen as the end-to-end direction along the minimum energy path, and (c) the potential Λ(x, y) constructed with the ideal energy gap coordinate for λ. Contour spacings are 5kBT in all plots.
FIG. 15.
⟨ΔKL[q]⟩Λ for different linear trial coordinates q(x, y) represented by the angle θ. The initial coordinate λ(x, y) gives a hysteretic free energy Fλ(λ) if sampled imperfectly. Coordinates similar to λ clearly do not have dynamical self-consistency, but other coordinates are not clearly distinguished in ⟨ΔKL[q]⟩Λ.
FIG. 16.
⟨ΔKL[q]⟩Λ for different linear trial coordinates q(x, y) represented by the angle θ. The initial coordinate λ(x, y) is the vertical energy gap between diabatic states of the EVB model.
## Tables
Table I.
Comparison between reaction coordinates from the dynamical self-consistency test and from KLBS theory. Coordinates identified by dynamical self-consistency are summarized for two different initial coordinates. For narrow tube potential energy landscapes the dynamical self-consistency test can correctly identify accurate reaction coordinates even for an inaccurate initial coordinate λ.
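To make the kind of comparison described in Figs. 2, 7 and 9 concrete, here is a rough sketch (my own, on synthetic placeholder data, not the authors' code): histogram the projected endpoints of each individual swarm, histogram the pooled endpoints, and score each swarm with a Kullback-Leibler divergence; a swarm whose drift differs from the rest shows up as a large divergence.

```python
import numpy as np

rng = np.random.default_rng(1)
# three synthetic "swarms" of projected endpoints; the third drifts differently
swarms = [rng.normal(loc=mu, scale=1.0, size=1000) for mu in (0.0, 0.1, 0.8)]
pooled = np.concatenate(swarms)

bins = np.linspace(pooled.min(), pooled.max(), 40)

def kl_divergence(samples, reference):
    # discrete KL divergence between normalized endpoint histograms
    p, _ = np.histogram(samples, bins=bins)
    q, _ = np.histogram(reference, bins=bins)
    p = (p + 1e-12) / (p.sum() + 1e-12 * len(p))
    q = (q + 1e-12) / (q.sum() + 1e-12 * len(q))
    return float(np.sum(p * np.log(p / q)))

for i, s in enumerate(swarms):
    print(f"swarm {i}: KL vs pooled = {kl_divergence(s, pooled):.4f}")
```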
/content/aip/journal/jcp/138/5/10.1063/1.4775807
This is a required field | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728695511817932, "perplexity": 2791.8525311090348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/3755/math-input-panel-in-windows-7-and-latex-writing | # Math Input Panel in Windows 7 and LaTeX writing
I just found out about the Math Input Panel in Windows 7, and now there is commercial software using it for LaTeX writing. For example Inlage. If you have a look at their first demo video, you know what I am talking about: http://www.inlage.com/videos
I had a try with the Math Input Panel myself, and it seems you can basically write all math stuff: integration, super(sub)scripts, tensors, arrows (even with labels over them!), and matrices! The only big problem is there is NO commutative diagram. And it probably has trouble recognizing some math fonts, like \mathfrak or \mathcal. But all in all, it really recognizes handwriting pretty well. I don't have a writing pad, so I just tried writing with a mouse.
I know there are people taking math notes using a tablet PC. Won't this tool drastically improve the quality of our note-taking? It basically changes all handwriting into TeX files!
Maybe I am too late on this; are there more mature products for such purposes? I think handwriting math could actually be slower than typing LaTeX code. But one good reason for handwriting is that sometimes I just don't like to turn math writing into code writing (or something like programming). I would like to hear your comments. Thank you!
• the main question is to see if anyone here has this kind of experience, namely, handwriting TeXing... – bas Oct 4 '10 at 6:56
• The math panel is really cool, like all of Microsoft's math stuff. However it is still quite new, and I don't know anybody that uses it right now. Concerning speed, I think some people are faster with linear input, some are faster with handwriting recognition. See e.g. this post by Murray: blogs.msdn.com/b/murrays/archive/2009/05/07/… – Philipp Oct 4 '10 at 8:32
I'm the developer of Inlage and I think the Windows 7 MIP is a great tool, but it can't handle many special things. Still, the possibilities are good for doing a lot of math stuff. To test the tool I tried to write down a 2-hour lecture on relativistic quantum mechanics with a graphic tablet. It was possible, but I had some problems, so I just replaced some symbols it couldn't handle. But naturally it wasn't faster than handwriting...
But I think another nice way to use the MIP is if you don't know the LaTeX command for a symbol: you can easily write it down (only the symbol, with the mouse) in the MIP and get the command.
TexTablet performs the main feature of the program mentioned above: translates handwritten math into LaTeX. It's based on the Windows 7 Math Input. You hand-write the formulas on the Math Input Panel, but the results get translated into LaTeX.
Also, it's free of cost.
• I should add that I really like this product and yet I never use it. I almost always just type the LaTeX. I have never stopped typing out a LaTeX project to write out a particularly complicated formula using this product instead. – Henry B. Oct 26 '10 at 15:30
• So then you don't find it useful, do you? – Hendrik Vogt Oct 27 '10 at 8:17
• @Hendrik. It seems useful in theory, but rarely (for me) in practice. I modified my answer. – Henry B. Oct 27 '10 at 9:38
Writing notes on a tablet during a math class is likely to be a real loser unless you have some disability that makes using a pencil especially difficult compared to using a stylus. I don't know what that disability might be.
Since MIP or other tablet-based math input is error prone, and even a 10% error rate would be low, you will, in the midst of your math class be spending 50% or more of your time and a corresponding percentage of your mental capacity grappling with the error correction mechanism of MIP (or whatever).
As a (now retired) math and computer science professor, my experience with students who typeset their notes (and homework) when not at all required, is that they rarely if ever learn the material better.
If you are not using MIP, but simply using a tablet as a simulation of paper (no recognition) then I suppose there is this tradeoff:
pro: you are maybe saving some trees by not using paper, and also saving graphite. You could send your notes digitally to some archive or some friend with a computer. You could, maybe later, try to scan and recognize the math, without involving scanning or photographing paper.
con: you are writing on a piece of glass, which is uncomfortable, and you can't look back at a previous page without invoking some command.
alternative: take a photo of the blackboard in your class. Or if you are already taking a class via computer, take a screen grab and store it.
Sadly, the neat technology of handwriting recognition of mathematics is not nearly as useful as it seems at first. Peculiarly, I have found that speaking mathematics is not as error prone, and has the distinct advantage of not requiring hands. Richard Fateman http://www.cs.berkeley.edu/~fateman
• Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. It may be a personal opinion of mine (and I certainly don't want to start an argument), but maybe the first sentence could be rephrased to appear more “politically correct”... – Pier Paolo Jan 11 '15 at 18:48
I'm not totally sure I understand your question. I've never tried hand writing TeX; that seems odd to me.
I have taken notes in LaTeX before. My general strategy was to invent macros as needed while taking notes. The first time I needed a "set x to be a uniformly random element of set S" macro, I just wrote \rgets. At the end of the class, my notes didn't compile, but I'd just go through and define the macros I had invented during the note taking.
I never tried it with a math class like algebraic geometry (I just used pencil and paper for math classes), but that strategy worked quite well for an advanced crypto class I took in grad school.
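To make that macro workflow concrete, here is a minimal sketch of what it can look like; \rgets is the macro name from the answer above, while the particular definition added afterwards is just one possible choice of mine, not the author's:

% while taking notes, just use the not-yet-defined macro:
%   ... let $x \rgets S$ be a uniformly random element of $S$ ...

% after class, add definitions to the preamble until the file compiles:
\usepackage{amsmath}
\newcommand{\rgets}{\mathrel{\xleftarrow{\$}}}

The notes simply do not compile until the missing definitions are filled in, which is exactly the trade-off described in the answer.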
This is a really old Q/A, but I thought I'd mention that MathPaper, not to be confused with a commercial product by the same name, is the best I have found at recognizing handwritten mathematics. It has a feature whereby you can, subsequent to recognition (which happens in real-time), copy to the clipboard the equivalent LaTeX code. I use it on a tablet PC. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129804134368896, "perplexity": 1057.1555513955325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00339.warc.gz"} |
http://math.stackexchange.com/questions/198830/proving-that-a-sequence-is-decreasing | # Proving that a sequence is decreasing
How could I prove the following statement without using induction? I've been staring at this for the better part of an hour. (To be fair, I'm not very good at proof writing) Thanks in advance!
Define a sequence $a_n, n \ge 0,$ inductively by $a_0 = 2,$ and for all $n \ge 0, a_{n+1} = \sqrt{a_n + 1}.$
Using the fact that the polynomial $x^2 - x - 1 < 0$ if and only if $\frac{1-\sqrt{5}}{2} < x < \frac{1+\sqrt{5}}{2}$, prove that for every $n \ge 0, a_n > a_{n+1}.$
Why don't you want to use induction? Can you see how to do it using induction? – Ben Millwood Sep 19 '12 at 1:07
The hint will actually be used in the induction step. – Brian M. Scott Sep 19 '12 at 1:11
Really? Well, thank you for clarifying that before I set out to solve this! I actually wasn't aware that we had to use it here at all. – user41419 Sep 19 '12 at 1:12
Since you’re dealing with non-negative numbers, $a_{n+1}>a_n$ if and only if $a_{n+1}^2>a_n^2$. But $a_{n+1}^2=a_n+1$, so $a_{n+1}>a_n$ if and only if $a_n+1>a_n^2$, i.e., if and only if $a_n^2-a_n-1<0$. In other words, your sequence increases from $a_n$ to $a_{n+1}$, which you don’t want, if and only if $a_n^2-a_n-1<0$. The last line of the problem reminds you of just when this is true. Can you now show that it’s never true?
You will actually be using induction: you’ll be showing that if $a_{n+1}\not> a_n$, then $a_{n+2}\not>a_{n+1}$.
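As a quick numerical sanity check (mine, not part of either answer): $a_0=2$, $a_1=\sqrt{3}\approx 1.732$, $a_2\approx 1.653$, $a_3\approx 1.629$, so the sequence does appear to decrease toward $\frac{1+\sqrt 5}{2}\approx 1.618$ without ever dropping below it — which is exactly what the induction below establishes.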
Since $a_n>0$ for all $n$, it is enough to show that $a_n^2>a_{n+1}^2$. But this is equivalent to $a_{n}^2-a_{n+1}^2>0$, and we see that $a_{n}^2-a_{n+1}^2=a_n^2-a_n-1$, so we need only show that $a_n>\frac{1+\sqrt 5}{2}$. This is easy to show using induction, since $a_0>\frac{1+\sqrt 5}{2}$ and if $a_n>\frac{1+\sqrt 5}{2}$ then $$a_{n+1}>\sqrt{\frac{1+\sqrt 5}{2}+1}=\frac{1+\sqrt 5}{2}$$ so by induction it is true for all $n$. Induction is definitely the right technique to use, since your sequence is defined inductively. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9497622847557068, "perplexity": 76.40352966996814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645322940.71/warc/CC-MAIN-20150827031522-00092-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://brilliant.org/discussions/thread/extremely-weird-integration/ | # Extremely Weird Integration.
(y^2 - 1)^2 = y^4 - 2y^2 + 1. Aren't both expressions the same? Why, after integration, do the answers become different?
Can anyone explain it?
Note by 柯 南
2 years, 7 months ago
The integration of $$(y^2-1)^2$$ is incorrect. You seem to have used something similar to the chain rule of differentiation, which is not applicable to integration. The correct way to do it would be to expand and continue as you did on the left hand side.
- 2 years, 7 months ago
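To spell out the correct computation (my own working, expanding as the answer above suggests):
$$\int (y^2-1)^2\,dy=\int (y^4-2y^2+1)\,dy=\frac{y^5}{5}-\frac{2y^3}{3}+y+C.$$
The shortcut of dividing by the derivative of the inside, $$\frac{(y^2-1)^3}{3\cdot 2y},$$ is not an antiderivative: differentiating it with the quotient rule gives $$\frac{(y^2-1)^2(5y^2+1)}{6y^2},$$ which is not $$(y^2-1)^2$$. That trick only works when the derivative of the inside is a constant.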
(I'm a student in Malaysia) but the formula is in the textbook? Are there ways to do that without expanding?
- 2 years, 7 months ago
Could you please post a picture of the formula in your textbook? And no, there's no quicker way. However, if you write the integral as the limit of an infinite summation, you will obtain the correct answer but after a lot of tedious calculation.
- 2 years, 7 months ago
Sorry, not a textbook, it's a reference book. Maybe they wrote it wrong.
Thank you so much :D , I wondered about it for so many days
- 2 years, 7 months ago
- 2 years, 7 months ago
The integration of (y^2 - 1)^2 is incorrect; therefore you got a different result.
- 2 years, 7 months ago
You cannot divide by 2y; that only works when the derivative of the inside is a constant. In your case it was 2y, which is not a constant. Hope that helps
- 2 years, 7 months ago
Thank you. That was a clear explanation !
(btw, why is 2y not a constant? Is it because y^2 can be ±?
- 2 years, 7 months ago
I checked both so many times and can't find anything wrong with it.
- 2 years, 7 months ago
You have performed the integral of $$(y^2-1)^2$$ incorrectly.
- 2 years, 7 months ago
What are the correct steps?
- 2 years, 7 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980560541152954, "perplexity": 2652.1102698141945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945793.18/warc/CC-MAIN-20180423050940-20180423070940-00222.warc.gz"} |
https://mathtuition88.com/2014/04/26/fibonacci-numbers-and-the-mysterious-golden-ratio/ | ## What are Fibonacci Numbers?
Fibonacci Numbers, named after Leonardo Fibonacci, is a sequence of numbers:
$F_0=0, F_1=1, F_2=1, F_3=2, F_4=3, F_5=5$,
with a recurrence relation $F_n=F_{n-1}+F_{n-2}$.
Fibonacci
## Relation to Golden Ratio
Fibonacci Numbers are linked to the mysterious Golden Ratio, $\displaystyle \phi=\frac{1+\sqrt{5}}{2}\approx 1.61803$
In fact, the ratio of successive Fibonacci numbers converges to the Golden Ratio! The first person to observe this was Johannes Kepler.
How do we prove it?
Recall the recurrence relation: $F_n=F_{n-1}+F_{n-2}$
Dividing throughout by $F_{n-1}$, we get $\displaystyle \frac{F_n}{F_{n-1}}=1+\frac{F_{n-2}}{F_{n-1}}$
(We will first assume $\displaystyle\lim_{n\to\infty}\frac{F_n}{F_{n-1}}$ exists for the time being, and prove it later)
Taking limits, we get $\displaystyle\lim_{n\to\infty}\frac{F_n}{F_{n-1}}=1+\lim_{n\to\infty}\frac{F_{n-2}}{F_{n-1}}$.
Denoting $\displaystyle\lim_{n\to\infty}\frac{F_n}{F_{n-1}}$ as $\phi$, we get:
$\displaystyle \phi=1+\frac{1}{\phi}$
Multiplying by $\phi$, we get $\phi^2=\phi +1$
$\phi^2-\phi-1=0$
This is a quadratic equation, solving using the quadratic equation, we get:
$\displaystyle \phi=\frac{1\pm\sqrt{1^2-4(1)(-1)}}{2}=\frac{1\pm\sqrt{5}}{2}$
Since $\phi$ is clearly positive, we have $\displaystyle \phi=\frac{1+\sqrt{5}}{2}$ which is the Golden Ratio!
For a complete proof, we actually need to prove that $\displaystyle\frac{F_n}{F_{n-1}}$ converges. This is a bit tricky and requires some algebra.
Interested readers can refer to the excellent website at: http://pages.pacificcoast.net/~cazelais/222/fib-limit.pdf
for more details.
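One standard route (a sketch of the idea, not the argument in the linked notes): by Binet's formula, $\displaystyle F_n=\frac{\phi^n-\psi^n}{\sqrt{5}}$ with $\displaystyle \psi=\frac{1-\sqrt{5}}{2}$. Since $|\psi|<1$, the $\psi^n$ terms vanish as $n$ grows, so $\displaystyle \frac{F_n}{F_{n-1}}=\frac{\phi^n-\psi^n}{\phi^{n-1}-\psi^{n-1}}\to\phi$, which gives the convergence assumed above.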
Interesting video on Fibonacci numbers!
Fibonacci numbers and the Golden Ratio can also be used for trading stocks.
http://mathtuition88.com
This entry was posted in fibonacci, fibonacci trading, number theory. Bookmark the permalink.
### 3 Responses to Fibonacci Numbers and the Mysterious Golden Ratio
1. Reblogged this on Math Education Concepts and commented:
Fibonacci Numbers… Simply Inspiring!
Like | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587193727493286, "perplexity": 1053.4758772899968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645362.1/warc/CC-MAIN-20180317214032-20180317234032-00779.warc.gz"} |
https://dsp.stackexchange.com/questions/16968/frequency-spectrum-of-a-sinc-function | # Frequency spectrum of a sinc function
I am doing one example from my book as a preparation for exam.
The assignment is:
It is given that: $$\mathbb{rect}(t)=pf(t) \leftrightarrow PF(f)=2AT_0 \cdot \mathbb{sinc}(2\pi fT_0)$$ you need to calculate frequency spectrum of: $$pf(t)=\mathbb{sinc}(\omega t)$$
Truthfully, I have no idea where to start. I presume that the solution is the absolute value of a sinc function, because I read it in the solutions, but the solution only contained a diagram.
I tried to solve it directly by applying the Fourier transform to the sinc function, but I got a very messy equation at the end.
So, my question is: how can I solve this assignment?
Thank you very much!!!
EDIT: I found this pdf, on the 7(162) side it explains what I want to do, but this is only with pictures. I want to understand it. http://ultrasound.ee.ntu.edu.tw/classnotes/ckt2/Chapter12.pdf | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185488224029541, "perplexity": 306.55927188894526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00426.warc.gz"} |
https://reference.globalspec.com/standard/3879629/iec-60027-3-2002-letter-symbols-to-be-used-in-electrical-technology-part-3-logarithmic-and-related-quantities-and-their-units | IEC 60027-3:2002Letter symbols to be used in electrical technology - Part 3: Logarithmic and related quantities, and their units
published
Organization: IEC - International Electrotechnical Commission Publication Date: 19 July 2002 Status: published Page Count: 25 ICS Code (Character symbols): 01.075 ICS Code (Quantities and units): 01.060
Abstracts
abstract
Applies to logarithmic quantities and units. Quantities that can be expressed as the logarithm of a dimensionless quantity, such as the ratio of two physical quantities of the same kind, can be regarded and treated in different ways. In many cases, differences do not affect practical treatment.
Document History
IEC 60027-3:2002 - Letter symbols to be used in electrical technology - Part 3: Logarithmic and related quantities, and their units
July 19, 2002 - IEC - International Electrotechnical Commission
Applies to logarithmic quantities and units. Quantities that can be expressed as the logarithm of a dimensionless quantity, such as the ratio of two physical quantities of the same kind, can be regarded and treated in different ways. In many cases, differences do not affect practical treatment.
IEC 60027-3:1989/AMD1:2000 - Amendment 1 - Letter symbols to be used in electrical technology - Part 3: Logarithmic quantities and units
March 10, 2000 - IEC - International Electrotechnical Commission
A description is not available for this item.
IEC 60027-3:1989 - Letter symbols to be used in electrical technology - Part 3: Logarithmic quantities and units
December 15, 1989 - IEC - International Electrotechnical Commission
Applies to logarithmic quantities and units. Quantities that can be expressed as the logarithm of a dimensionless quantity, such as the ratio of two physical quantities of the same kind, can be regarded and treated in different ways. In many cases, differences do not affect practical treatment.
IEC 60027-3:1974 - Letter symbols to be used in electrical technology - Part 3: Logarithmic quantities and units
January 1, 1974 - IEC - International Electrotechnical Commission
A description is not available for this item. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059136509895325, "perplexity": 1388.0324590378405}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592523.89/warc/CC-MAIN-20180721110117-20180721130117-00530.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/202260-probability-theory-expected-vales-exponential-distribution.html | # Thread: probability theory-expected vales for exponential distribution
1. ## probability theory-expected vales for exponential distribution
Here's a problem that I got while self-studying probability theory with the recorded Harvard course Stat 110.
A post office has 2 clerks. Alice enters the post office while 2 other customers,
Bob and Claire, are being served by the 2 clerks. She is next in line. Assume
that the time a clerk spends serving a customer has the Exponential(lambda) distribution.
(b) What is the expected total time that Alice needs to spend at the post
office?
the answer gives that the expected waiting time (waiting in line) is 1/(2lambda) and the expected time being served is 1/lambda, so the total time is 3/(2lambda)
I don't understand why the expected waiting time(waiting in line) is 1/(2lambda)
the solution says that the minimum of two independent
Exponentials is Exponential with rate parameter the sum of the two
individual rate parameters.
where does the rationale of the statement above come from.......
thanks..
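For reference, the short calculation behind those numbers (my own working, not part of the thread): if A and B are the two independent Exponential(lambda) service times in progress when Alice arrives, then P(min(A,B) > x) = P(A > x)P(B > x) = e^(-lambda*x) * e^(-lambda*x) = e^(-2*lambda*x), so min(A,B) is Exponential(2*lambda) and the expected wait in line is 1/(2*lambda). By the memoryless property Alice's own service then takes 1/lambda on average, giving 1/(2*lambda) + 1/lambda = 3/(2*lambda) in total.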
2. ## Re: probability theory-expected vales for exponential distribution
Hey pyromania.
What you're looking at comes from an area known as order statistics. If you want to prove the result yourself, calculate Min(A,B).
To start you off consider P[Min(A,B)] which is given by P(A > x and B > x) = P(C < x) = P(A > x)P(B > x) [independence] which is equal to [1 - P(A < x)][1 - P(B < x)] and P(A < x) is the CDF for A(x) and the P(B < x) is the CDF of B(x). Now differentiate both sides to get the PDF of C and compare the two. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9145944714546204, "perplexity": 1619.269449081157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189802.18/warc/CC-MAIN-20170322212949-00639-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/173113/relationship-between-prime-factorizations-of-n-and-n1 | # Relationship between prime factorizations of $n$ and $n+1$?
Are there any theorems that give us any information about the prime factorization of some integer $n+1$, if we already know the factorization of $n$?
Recalling Euclid's famous proof for the infinity of the set of prime numbers, I guess we know that if $n = p_1 p_2 p_3$, then $n+1$ cannot have $p_1$, $p_2$, or $p_3$ as factors. But is there any way we could use the information about $n$'s factorization to determine something more precise about the factorization of $n+1$?
Seems unlikely, given the extreme cases of Fermat and Mersenne primes. – user17794 Jul 20 '12 at 4:50
In general I think not (though the experts will confirm this), but if $n$ has two factors (not necessarily prime) which differ by 2 so that $n=r(r+2)$ then $n+1=(r+1)^2$. – Mark Bennet Jul 20 '12 at 4:53
$n$ and $n+1$ are relatively prime, so no prime factor of one can be a prime factor of the other. But beyond that, there is no regularity; if $n$ is prime and different from $2$, then $n+1$ is certainly not prime; and conversely for $n+1\neq 3$. There is no known regularity in the functions $\omega(n)$, $\Omega(n)$. – Arturo Magidin Jul 20 '12 at 5:18
Currently very little is known about this problem and it appears intractable by known methods, though it is of great interest. More generally, additive number theory takes up the challenge of studying the additive structure of prime numbers, which is bound to be difficult due to their inherent multiplicative nature.
Some problems that would greatly benefit from knowing how addition effects prime factorizations include: The Twin Prime Conjecture and The Collatz Conjecture.
As I wrote when this question was raised at MathOverflow, if knowing the factorization of $n$ told you much about the factorization of $n+1$, the Fermat numbers $2^{2^n}+1$ would be easy --- but, they aren't.
While the factorisation of $N$ might not help much with the factorisation of $N+1$, in special circumstances it can help with determining the primality of $N+1$.
Famous examples of this include Pépin's Test for primality of numbers of the form $2^{2^n}+1$, and Proth's Theorem for primality of numbers of the form $k \times 2^n+1$ where $k<2^n$ (Proth primes feature on the top 20 known primes).
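As a concrete illustration (my own sketch, not part of the answer), Pépin's test takes only a few lines of code; it decides primality of the Fermat number F_n = 2^(2^n) + 1 with a single modular exponentiation:

def pepin_is_prime(n):
    # Pepin's test (valid for n >= 1): F_n = 2**(2**n) + 1 is prime
    # exactly when 3**((F_n - 1)//2) is congruent to -1 modulo F_n.
    F = 2**(2**n) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

print([n for n in range(1, 8) if pepin_is_prime(n)])  # only F_1, ..., F_4 pass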
There are more general primality tests for $N+1$ based on (partial) knowledge of the factorisation of $N$, but they tend to be less elegant. For example, this was snipped from "Factorizations of $b^n \pm 1$, b = 2, 3, 5, 6, 7, 10, 11, 12 Up to High Powers" by Brillhart, Lehmer, Selfridge, Tuckerman, and Wagstaff, Jr.:
Theorem 11. Let $N-1=FR$, where $F$ is completely factored and $(F,R)=1$. Suppose there exists an $a$ for which $a^{N-1} \equiv 1 \pmod N$ and $(a^{(N-1)/q}-1,N)=1$ for each prime factor $q$ of $F$. Let $R=rF+s$, $1 \leq s < F$, and suppose $N<2F^3+2F$, $F>2$. If $r$ is odd, or if $r$ is even and $s^2-4r=t^2$, then $N$ is prime. Otherwise, $s^2-4r=t^2$ and $$N = [\frac{1}{2}(s-t)F+1][\frac{1}{2}(s+t)F+1].$$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9299774169921875, "perplexity": 207.70171840469098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166222.10/warc/CC-MAIN-20160205193926-00127-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-5-section-5-5-dividing-polynomials-exercise-set-page-385/17 | ## Introductory Algebra for College Students (7th Edition)
$100$
RECALL: The zero-exponent rule states that for any non-zero number $a$, $a^0=1$ Use the zero-exponent rule to obtain: $=100(1) \\=100$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199047446250916, "perplexity": 2596.358993898384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741324.15/warc/CC-MAIN-20181113153141-20181113175141-00527.warc.gz"} |
https://ejde.math.txstate.edu/Volumes/2010/153/abstr.html | Electron. J. Diff. Equ., Vol. 2010(2010), No. 153, pp. 1-7.
### Regularity for 3D Navier-Stokes equations in terms of two components of the vorticity Sadek Gala
Abstract:
We establish regularity conditions for the 3D Navier-Stokes equation via two components of the vorticity vector. It is known that if a Leray-Hopf weak solution satisfies
where form the two components of the vorticity, , then becomes the classical solution on (see [5]). We prove the regularity of Leray-Hopf weak solution under each of the following two (weaker) conditions:
where is the Morrey-Campanato space. Since is a proper subspace of , our regularity criterion improves the results in Chae-Choe [5].
Submitted May 20, 2010. Published October 28, 2010.
Math Subject Classifications: 35Q35, 76C99.
Key Words: Navier-Stokes equations; regularity conditions; Morrey-Campanato spaces.
Show me the PDF file (219 KB), TEX file, and other files for this article.
Sadek Gala Department of Mathematics, University of Mostaganem Box 227, Mostaganem 27000, Algeria email: [email protected]
Return to the EJDE web page | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428481459617615, "perplexity": 2065.6368118887776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668585.12/warc/CC-MAIN-20191115042541-20191115070541-00029.warc.gz"} |
http://tex.stackexchange.com/questions/9510/changing-side-of-line-numbering-in-two-columns-documents?answertab=votes | # Changing side of line numbering in two columns documents
I'm trying to type up a report and I have a small problem with the twocolumn option and the line numbering in listings.
The document is on two columns :
\documentclass[8pt,[...],a4paper,twocolumn]{article}
The listings can end up either on the left or the right column. The problem is: if I put
\lstset{numbers=left,frame=tb,[...]}
the space between the columns is not sufficient when the listing is in the right column, and the line numbers are written over the text of the first column.
Is there any way to ask listings to put the line numbers "outside"?
\documentclass[a4paper,twocolumn]{article}
\usepackage{listings}
\lstset{numbers=left,frame=tb,numbersep=1em,xleftmargin=2em,
basicstyle=\ttfamily\small}
\parindent=0pt
\begin{document}
\rule{\linewidth}{1pt}
\begin{lstlisting}
\def\showDiff#1#2{}%
\end{lstlisting}
\newpage
\rule{\linewidth}{1pt}
\begin{lstlisting}
\def\showDiff#1#2{}%
\end{lstlisting}
\end{document}
Could you explain what your code is doing? – Seamus Jan 23 '11 at 12:18
It produces two listings, one in each column, with the numbers on the left. But instead of increasing the space between the columns, it adds a margin for the numbers, so the listing is not as wide as before, without making the columns narrower. Not quite the solution I was thinking about, but it produces a good result. – Thomas Schwery Jan 23 '11 at 17:24
Even though it doesn't exactly answer the question, it's the solution i'm using now ... – Thomas Schwery Jan 29 '11 at 21:53
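If the goal really is numbers on the outside, one manual variation on the example above (my own sketch, not from the answers) is to switch the listings you know will land in the right-hand column to numbers=right; listings accepts per-environment options, but you do have to decide by hand which column each listing falls in:

\documentclass[a4paper,twocolumn]{article}
\usepackage{listings}
\lstset{frame=tb,numbersep=1em,basicstyle=\ttfamily\small}
\begin{document}
% listing expected in the left column: numbers on the left
\begin{lstlisting}[numbers=left,xleftmargin=2em]
\def\showDiff#1#2{}%
\end{lstlisting}
\newpage
% listing expected in the right column: numbers on the right
\begin{lstlisting}[numbers=right,xrightmargin=2em]
\def\showDiff#1#2{}%
\end{lstlisting}
\end{document}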
Even if it were possible, for code listings I would find line numbers on the right hand side unclear and confusing. I suggest increasing the space between columns in your document:
\setlength{\columnsep}{25pt}
EDIT: Fixed embarrassing typo in the code sample.
That's what i'm doing for now, but i would like to know if there was a way to change automatically the side of the numbering depending on the column the listing is in. – Thomas Schwery Jan 23 '11 at 13:00
the numbers are always on the left. See my example for setting it up correctly. – Herbert Jan 23 '11 at 15:02
You can try this:
\documentclass[a4paper,twocolumn]{article}
\usepackage[switch]{lineno}
\begin{document}
\linenumbers
Some body text is needed here, otherwise there are no lines to number.
\end{document}
But that only gives you line numbering, one of many features of the listings package. – Chris H Jan 30 '14 at 13:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9511162042617798, "perplexity": 870.4303805348592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095423.29/warc/CC-MAIN-20150627031815-00025-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://greenleafngo.org/renewable-energy/ | Renewable Energy
Green Leaf, with its qualified and professional team of volunteers, will identify areas where renewable energy could be implemented at various stages. Green Leaf Trust welcomes grass-roots innovations in renewable energy, such as rooftop solar and rainwater harvesting, along with their R&D, startups and technologies, and helps them build a sustainable future.
Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat. Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services. Renewable energy resources exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency is resulting in significant energy security, climate change mitigation, and economic benefits. In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.
The government of India, through the Ministry of New and Renewable Energy (MNRE) has been activity involved in implementing various schemes for power generation through renewable sources like solar, wind, hydro and waste to power. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8831910490989685, "perplexity": 2401.264207514656}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00229.warc.gz"} |
http://mathoverflow.net/revisions/50005/list | MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4).
Update. Here is an explicit example (which also preserves the $\mathbb Q$-span of the basis). Let $V=\ell_2$ over $\mathbb R$ and $(e_j)$ the standard basis. Define $T:V\to V$ by
$$\begin{aligned} (Tx)_1 &= x_1+3x_2+x_3 \\ (Tx)_2 &= 3x_1+10x_2+6x_3+x_4 \\ (Tx)_n &= x_{n-2}+6x_{n-1}+11x_n+6x_{n+1}+x_{n+2}, \qquad n>2 . \end{aligned}$$
Let $\alpha$ be the root of $p(x)=x^2+3x+1$ such that $|\alpha|<1$. Then the vector $v\in\ell_2$ defined by $v_n=\alpha^n$ satisfies $Tv=0$, so it is an eigenvector for $\lambda=0$. On the other hand, no nonzero rational vector $w$ satisfies $Tw=0$ because the relation $Tw=0$ implies the recurrence relation $w_{n+4}=-6w_{n+3}-11w_{n+2}-6w_{n+1}-w_n$, which does not have rational solutions tending to 0.
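(A quick check of why $Tv=0$, added here for reference and not part of the original revisions: the coefficient pattern is the square of $p$, since $(x^2+3x+1)^2=x^4+6x^3+11x^2+6x+1$, so for $n>2$ one gets $(Tv)_n=\alpha^{n-2}(\alpha^2+3\alpha+1)^2=0$, while $(Tv)_1=\alpha(\alpha^2+3\alpha+1)=0$ and $(Tv)_2=\alpha(\alpha+3)(\alpha^2+3\alpha+1)=0$.)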
2 minor clarification
Let $V=\ell_2$ over $\mathbb R$ and $(e_j)$ the standard basis, so that $\langle x,e_j\rangle=x_j$ for every $x\in\ell_2$. Let $v\in V$ be a unit vector with irrational ratios of coordinates (for definiteness, take a geometric progression like $(c\pi^{-n})_{n\in\mathbb N}$). Then there exist a continuous self-adjoint $T:V\to V$ with rational coordinates such that $\ker T=\langle v\rangle$. So we have $\lambda=0$ but no $\lambda$-eigenvector is rational.
I represent $T$ by its (infinite) matrix $(t_{ij})_{i,j\in\mathbb N}$, $t_{ij}=\langle Te_i,e_j\rangle$. This matrix should be symmetric, and continuity of the resulting operator should be taken care of.
Begin with a self-adjoint $A$ such that $\ker A=\langle v\rangle$ with irrational components $a_{ij}$. For example, let $A$ be the projection to the orthogonal complement of $v$, so $a_{ij}=\delta_{ij}-v_iv_j$. I am going to approximate the matrix $(a_{ij})$ by a rational matrix $(t_{ij})$ such that the kernel stays the same.
Since $Av=0$, each row of our matrix is orthogonal to $v$. Approximate the first row $a_i=(a_{1i})$ by a rational vector $t_1=(t_{1i})$ such that $\langle t_1,v\rangle=0$ and $|t_{1i}-a_{1i}|$ is bounded by a rapidly decaying geometric progression (see below for details). Replace the first row and the first column of the matrix by this approximation. Then adjust the diagonal elements $a_{ii}$ so that the rows remain orthogonal to $v$, namely change $a_{ii}$ to $a_{ii}-(a_{1i}-t_{1i})v_1v_i^{-1}$. Recall that $v_i$ is a not-so-fast decaying geometric progression, so this adjustment is a small change.
Now remove the first row and the first column from the matrix, and the first component from $v$. Apply the same procedure to the truncated data: namely approximate the first remaining row and the first remaining column by a rational vector whose scalar product with $v$ is the same as before, then adjust the diagonal elements so that the scalar products of all rows with $v$ are preserved. Repeat ad infinitum. Note that every element of the original matrix is changed only finitely many times, and $v$ belongs to the kernel of the matrix after each step.
The approximations are controlled and the adjustments are bounded in terms of approximations, so the sum $\sum (a_{ij}-t_{ij})^2$ can be made arbitrarily small. This implies that $T$ is continuous and $\|T-A\|$ is small (say, less than 1/2), so we have $Tv=0$ and $\|Tw\|\ge\frac12\|w\|$ for all $w$ orthogonal to $w$. Therefore $\ker T=\langle v\rangle$. However $v$ cannot be rescaled to a rational vector.
It remains to show that every vector $w\in\ell_2$ can be approximated by a rational vector $w'$ such that $\langle w',v\rangle=\langle w,v\rangle$ and $w'_i-w_i$ is bounded by a small, fast decaying progression. Let $w'_1=w_1+\varepsilon_1$ be a rational approximation of $w_1$, then let $w'_2=w_2-\varepsilon_1v_1v_2^{-1}+\varepsilon_2$ be a rational approximation of $w_2-\varepsilon_1v_1v_2^{-1}$, then let $w'_3=w_3-\varepsilon_2v_2v_3^{-1}+\varepsilon_3$ be a rational approximation of $w_3-\varepsilon_2v_2v_3^{-1}$, and so on. The $\varepsilon_i$ at each step can be chosen arbitrarily small, and the resulting vector $w'$ satisfies $\langle w'-w,v\rangle=0$.
1
Let $V=\ell_2$ over $\mathbb R$ and $(e_j)$ the standard basis, so that $\langle x,e_j\rangle=x_j$ for every $x\in\ell_2$. Let $v\in V$ be a unit vector with irrational ratios of coordinates (for definiteness, take a geometric progression like $(c\pi^{-n})_{n\in\mathbb N}$). Then there exist a continuous self-adjoint $T:V\to V$ with rational coordinates such that $\ker T=\langle v\rangle$.
I represent $T$ by its (infinite) matrix $(t_{ij})_{i,j\in\mathbb N}$, $t_{ij}=\langle Te_i,e_j\rangle$. This matrix should be symmetric, and continuity of the resulting operator should be taken care of.
Begin with a self-adjoint $A$ such that $\ker A=\langle v\rangle$ with irrational components $a_{ij}$. For example, let $A$ be the projection to the orthogonal complement of $v$, so $a_{ij}=\delta_{ij}-v_iv_j$. I am going to approximate the matrix $(a_{ij})$ by a rational matrix $(t_{ij})$ such that the kernel stays the same.
Since $Av=0$, each row of our matrix is orthogonal to $v$. Approximate the first row $a_i=(a_{1i})$ by a rational vector $t_1=(t_{1i})$ such that $\langle t_1,v\rangle=0$ and $|t_{1i}-a_{1i}|$ is bounded by a rapidly decaying geometric progression (see below for details). Replace the first row and the first column of the matrix by this approximation. Then adjust the diagonal elements $a_{ii}$ so that the rows remain orthogonal to $v$, namely change $a_{ii}$ to $a_{ii}-(a_{1i}-t_{1i})v_1v_i^{-1}$. Recall that $v_i$ is a not-so-fast decaying geometric progression, so this adjustment is a small change.
Now remove the first row and the first column from the matrix, and the first component from $v$. Apply the same procedure to the truncated data: namely approximate the first remaining row and the first remaining column by a rational vector whose scalar product with $v$ is the same as before, then adjust the diagonal elements so that the scalar products of all rows with $v$ are preserved. Repeat ad infinitum. Note that every element of the original matrix is changed only finitely many times, and $v$ belongs to the kernel of the matrix after each step.
The approximations are controlled and the adjustments are bounded in terms of approximations, so the sum $\sum (a_{ij}-t_{ij})^2$ can be made arbitrarily small. This implies that $T$ is continuous and $\|T-A\|$ is small (say, less than 1/2), so we have $Tv=0$ and $\|Tw\|\ge\frac12\|w\|$ for all $w$ orthogonal to $w$. Therefore $\ker T=\langle v\rangle$. However $v$ cannot be rescaled to a rational vector.
It remains to show that every vector $w\in\ell_2$ can be approximated by a rational vector $w'$ such that $\langle w',v\rangle=\langle w,v\rangle$ and $w'_i-w_i$ is bounded by a small, fast decaying progression. Let $w'_1=w_1+\varepsilon_1$ be a rational approximation of $w_1$, then let $w'_2=w_2-\varepsilon_1v_1v_2^{-1}+\varepsilon_2$ be a rational approximation of $w_2-\varepsilon_1v_1v_2^{-1}$, then let $w'_3=w_3-\varepsilon_2v_2v_3^{-1}+\varepsilon_3$ be a rational approximation of $w_3-\varepsilon_2v_2v_3^{-1}$, and so on. The $\varepsilon_i$ at each step can be chosen arbitrarily small, and the resulting vector $w'$ satisfies $\langle w'-w,v\rangle=0$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9920818209648132, "perplexity": 85.92091181083427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708711794/warc/CC-MAIN-20130516125151-00023-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/level-curves.184021/ | # Homework Help: Level curves
1. Sep 11, 2007
### cscott
1. The problem statement, all variables and given/known data
I need to sketch level curves of $T(x, y) = 50(1 + x^2 + 3y^2)^{-1}$ and $V(x, y) = \sqrt{1 - 9x^2 -4y^2}$
3. The attempt at a solution
Is it correct that they are ellipses?
ie $$1 = \frac{9}{1 - c^2} x^2 + \frac{4}{1 - c^2}y^2$$ for V(x, y) = c = constant

I feel so rusty going back to school :s

Last edited: Sep 11, 2007

2. Sep 11, 2007

### HallsofIvy

That's for V? Setting $V(x,y)= \sqrt{1- 9x^2- 4y^2}= c$, then $1- 9x^2- 4y^2= c^2$, $9x^2+ 4y^2= 1-c^2$, $$\frac{9}{1- c^2}x^2+ \frac{4}{1-c^2}y^2= 1$$
just as you say. Yes, that's an ellipse. It might be easier to recognise if you wrote it
$$\frac{x^2}{\left(\frac{\sqrt{1-c^2}}{3}\right)^2}+ \frac{y^2}{\left(\frac{\sqrt{1-c^2}}{2}\right)^2}= 1$$
an ellipse with center at (0,0) and semi-axes of length
$$\frac{\sqrt{1-c^2}}{3}$$
and
$$\frac{\sqrt{1-c^2}}{2}$$
Similarly, $T(x,y)= 50(1+ x^2+ 3y^2)^{-1}= c$ gives $c(1+ x^2+ 3y^2)= 50$ so $1+ x^2+ 3y^2= 50/c$, $x^2+ 3y^2= (50/c- 1)$. Now divide both sides by 50/c- 1:
$$\frac{x^2}{50/c-1}+ \frac{y^2}{\frac{50/c-1}{3}}= 1$$
again, an ellipse with center at (0,0), semi-axes of length
$$\sqrt{50/c- 1}$$
and
$$\sqrt{\frac{50/c- 1}{3}}$$
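A quick way to double-check these sketches numerically (my own sketch, not from the thread; the contour levels are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.5, 1.5, 400)
y = np.linspace(-1.5, 1.5, 400)
X, Y = np.meshgrid(x, y)

T = 50.0 / (1.0 + X**2 + 3.0 * Y**2)
V = np.sqrt(np.clip(1.0 - 9.0 * X**2 - 4.0 * Y**2, 0.0, None))

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].contour(X, Y, T, levels=[10, 20, 30, 40])      # ellipses, longer in the x-direction
axes[0].set_title("T(x, y) = 50/(1 + x^2 + 3y^2)")
axes[1].contour(X, Y, V, levels=[0.2, 0.4, 0.6, 0.8])  # ellipses, longer in the y-direction
axes[1].set_title("V(x, y) = sqrt(1 - 9x^2 - 4y^2)")
plt.show()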
3. Sep 11, 2007
### cscott
That does make it easier to understand. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8801549077033997, "perplexity": 2591.0996713576415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827769.75/warc/CC-MAIN-20181216143418-20181216165418-00060.warc.gz"} |
https://www.physicsforums.com/threads/second-order-geodesic-equation.725793/ | # Second order geodesic equation.
1. Nov 30, 2013
### ozone
Hello all,
I have a geodesic equation from extremizing the action which is second order. I am curious as to what the significance of having two independent geodesic equations is. Also, I was wondering what the best way to deal with this is.
2. Nov 30, 2013
### WannabeNewton
What do you mean by two independent geodesic equations? Are you working in a 2 dimensional space with coordinates $\{x^1,x^2\}$ and have two geodesic equations, one for each coordinate?
As an aside, usually the easiest way to deal with the geodesic equation is to not deal with it at all. What I mean by this is that if your space has obvious symmetries then just use first integrals of conserved quantities. It's the same thing as using conservation of energy instead of Newton's 2nd law for classical mechanics problems.
3. Nov 30, 2013
### ozone
Sorry I should have been more clear, I believe that I have two independent solutions to the geodesic equation for a single direction, but perhaps I am misinterpreting the result. The equation is written as
$\ddot{x}(\tau)^a = A_{ab}(\tau) x(\tau)^b$
Here a, b are the two directions orthogonal to the wavefront of a pp-wave. Luckily I am in a system where $A_{ab}$ is an orthogonal matrix. This was derived using some symmetries and conservation laws of the Lagrangian. However I am having trouble interpreting what it means to have a second order geodesic equation (when we write down our geodesic equation in terms of Christoffel symbols it is always first order).
4. Nov 30, 2013
### WannabeNewton
Perhaps we're using different definitions of the geodesic equation but AFAIK it is always second order in $\tau$: $\ddot{x}^{\mu} = -\Gamma ^{\mu}_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}$.
5. Nov 30, 2013
### ozone
True, but the equation above should only have one independent solution as best I can ascertain. Suppose for simplicity we had a diagonal connection coefficient which is valued at 1 in some $\bar{x}$ direction. I don't see how that is different from writing $\dot{x}= x^2$ (by substituting $x = \dot{\bar{x}}$); this is only a first order equation... or am I missing something blatant?
6. Dec 2, 2013
### Bill_K
This is one of the standard tricks/methods for solving differential equations. Define p = dx/dt and hope that you can write a DE containing p alone. If so, it will be first order and you can solve it to get p(t). There will be one constant of integration.
But then you still have to solve p = dx/dt to get x(t), and this will produce a second constant of integration. It's a second order DE, you haven't changed that, all you have done is to solve it in stages.
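(For a concrete illustration of the two-stage procedure — my own example, not from the thread — take $\ddot{x}=x$: with $p=\dot{x}$ one has $\ddot{x}=p\,dp/dx=x$, so $p^2=x^2+C_1$; then $dt=dx/\sqrt{x^2+C_1}$ integrates to give $t$ as a function of $x$ with a second constant $C_2$. The two constants are exactly the initial position and velocity mentioned below.)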
7. Dec 2, 2013
### ozone
Fair enough, I agree with what you are saying. My main question then is what do we do with these constants of integration? May we just arbitrarily set them equal to one?
8. Dec 2, 2013
### Bill_K
E.g. they can be used to specify the two initial conditions for the geodesic: the initial position and velocity.
Similar Discussions: Second order geodesic equation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059081077575684, "perplexity": 335.6072949575459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00608.warc.gz"} |
http://www.computer.org/csdl/proceedings/hicss/2003/1874/09/187490301b-abs.html | Subscribe
Big Island, HI, USA
Jan. 6, 2003 to Jan. 9, 2003
ISBN: 0-7695-1874-5
pp: 301b
Qun Li , Dartmouth College
Javed Aslam , Dartmouth College
Daniela Rus , Dartmouth College
ABSTRACT
This paper discusses several distributed power-aware routing protocols in wireless ad-hoc networks (especially sensor networks). We seek to optimize the lifetime of the network. We have developed three distributed power-aware algorithms and analyzed their efficiency in terms of the number of message broadcasts and the overall network lifetime, modeled as the time to the first message that cannot be sent. These are: (1) a distributed min-power algorithm (modeled on a distributed version of Dijkstra's algorithm), (2) a distributed max-min algorithm, and (3) the distributed version of our centralized online max - zP_min algorithm presented in [12]. The first two algorithms are used to define the third, although they are very interesting and useful on their own for applications where the optimization criterion is the minimum power, respectively the maximum residual power. The distributed max - zP_min algorithm optimizes the overall lifetime of the network by avoiding nodes of low power, while not using too much total power.
CITATION
Qun Li, Javed Aslam, Daniela Rus, "Distributed Energy-conserving Routing Protocols", HICSS, 2003, 36th Hawaii International Conference on Systems Sciences, 36th Hawaii International Conference on Systems Sciences 2003, pp. 301b, doi:10.1109/HICSS.2003.1174850
Marketing Automation Platform | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9382014870643616, "perplexity": 4104.504323192457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659449.65/warc/CC-MAIN-20150417045739-00229-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.worldweatherattribution.org/trends-in-weather-extremes-february-2018/ | # Trends in weather extremes
#### 28 February, 2018
##### News
Twenty years ago, the trend in annual mean global mean temperature became detectable. Ten years ago, robust regional seasonal mean temperature trends similarly started to emerge. Nowadays, we can see trends even in weather extremes.
Studying these trends is an essential step in the extreme event attribution procedure we use at World Weather Attribution. Over the years, we have collected a fair number of results in these analyses and other articles. In this article, I take a step back and consider global long-term meteorological station data for hot, cold, and wet extremes, and share some thoughts on tropical cyclones and droughts. I make no claims for completeness; there is a lot of literature on this that I do not know.
The obvious first-order hypothesis is that warm extremes are getting warmer and cold extremes less cold. Severe precipitation tends to increase due to the higher moisture content of warmer air. Sea level rise simply heightens storm surges. Other extremes do not have as obvious first-order trends.
The figures are based on GHCN-D station data from NOAA/NCEI and can easily be reproduced on the KNMI Climate Explorer. They have at least 50 years of data and a minimum distance between station of 0.5º. The daily average temperature was chosen, defined as the average of maximum and minimum temperatures. This quantity is less sensitive to changes in observing practices and surroundings than either minimum or maximum temperatures alone, e.g., decreased ventilation due to trees growing affects minimum and maximum temperatures with opposite signs. There are no obvious urban/rural contrasts in the maps, so they mainly reflect large-scale trends.
Note that we compute trends as a function of the smoothed global average temperature, rather than simply time, since our first-order hypotheses are related to warming.
### Heat extremes
In this analysis a heat extreme is simply defined as the highest daily average temperature of the year. Our trend analysis shows that almost everywhere these heat extremes are now warmer than a century ago, following the obvious first-order connection with global average temperature. We encountered two exceptions in our work: in the eastern U.S. heat extremes now are roughly as warm as they were during the Dust Bowl of the 1930s, when the severe drought heightened the temperature of hot days. In India, where there has generally been no trend since the 1970s, we find that increasing air pollution and irrigation counteracted the warming trend due to greenhouse gases over that period (van Oldenborgh et al, 2018). In the rest of the world, the increases are typically large, with the temperature of the hottest day of the year rising much faster than the global mean temperatures in most regions (values above one in Figure 1).
The map of Figure 1 also shows the sparsity of stations with long daily temperature records that are publicly available in the tropics. This is mainly a result of the data being sold commercially. However, in this region, temperature extremes are also often not acknowledged, although there is evidence that they pose a major health hazard (see Gerland et al, 2015 for Africa).
### Cold extremes
For cold extremes, the daily average temperature of the coldest day of the year is considered. Figure 2 shows that these cold events heat up even faster than the heat extremes, up to a factor of five times the global mean temperature (see e.g., van Oldenborgh et al, 2015 and the WWA cold wave analyses). The strongest increases over land are in Siberia and Canada. Winter temperatures are very low there due to radiative cooling over snow under a clear sky, with strong vertical gradients in the lowest meters of the atmosphere. These stable boundary layers are sensitive to perturbations, probably also to the extra downward longwave radiation due to greenhouse warming. This may explain the strength of the observed trends. Further south, the cold air from the north is simply less cold, also due to the well-understood Arctic amplification over the Arctic Ocean. Note that the current climate models do not have the resolution to simulate this properly and underestimate the trends.
### Precipitation extremes
As a measure of extreme precipitation, we take the highest daily precipitation of the year. This is a measure relevant for local flooding. For flood events over larger basins, the time over which to compute the total amount should be longer, while flash floods may be caused by shorter events, making hourly totals more relevant. The daily total is a useful measure, in the middle of the relevant temporal range. Similar maps can be made for longer times scales via the Climate Explorer. We computed trends after transforming precipitation amounts by taking their logarithm, a transformation that makes the variable more like a normal distribution so that the trend is mathematically better defined. It also ensures that the precipitation remains positive. For the trends shown here, these are almost the same as relative trends in the precipitation itself.
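As an illustration of this setup (a minimal sketch with synthetic numbers, not the actual GHCN-D analysis), the trend in the logarithm of the annual maxima can be fitted against the smoothed global mean temperature and converted back to a relative change per degree:

```python
import numpy as np

# Synthetic stand-ins: smoothed global mean temperature anomaly (K) and the
# annual maximum one-day precipitation (mm) for a 60-year station record.
rng = np.random.default_rng(0)
gmst = np.linspace(-0.3, 0.9, 60)                      # smoothed GMST anomaly
annual_max = 40.0 * np.exp(0.07 * gmst) * rng.lognormal(0.0, 0.3, 60)

# Regress log(precipitation) on GMST rather than on time, as described above.
slope, intercept = np.polyfit(gmst, np.log(annual_max), 1)

# Convert the fitted slope back to a relative change per degree of warming.
percent_per_degree = 100.0 * (np.exp(slope) - 1.0)
print(f"fitted trend: {percent_per_degree:.1f}% per degree of global warming")
```

With a real station series, a value near 7% per degree would correspond to the Clausius-Clapeyron rate discussed below.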
Note that also for precipitation, rather than computing trends in time, we compute trends as a function of global average temperature. Figure 3 shows that the highest daily mean of the year has increased at more stations than it has decreased. This was already found by Westra et al (2013). The average increase is similar to the increase of the amount of water the atmosphere can hold at higher temperatures (the Clausius-Clapeyron relation), about 7% per degree Celsius. However, there is a large spread around this average. A large part of this is random weather: even in >50-year series, the variability is large compared to the trends. In some areas, there are systematic deviations from Clausius-Clapeyron due to other effects of climate change. Some examples from my own work are listed below. The trend in Colorado is lower than Clausius-Clapeyron, probably due to higher air pressure during the season of most extremes (Eden et al, 2016). Drying trends also suppress extreme precipitation, such as in summer in the Mediterranean region. However, in autumn extremes increase strongly at one mountain range there (Vautard et al, 2015). Some winter extremes in northwestern Europe are found to increase more strongly than Clausius-Clapeyron due to the increase in zonal circulation types (e.g., van Haren et al, 2013, Schaller et al 2016), but others do not (e.g., Otto et al, 2018).
To conclude, on average daily precipitation extremes increase in intensity, but local trends are often different from the global average. We’ll be busy studying these regional trends for some time.
### Tropical storms
It is hard to determine trends in the number and intensity of tropical cyclones (called hurricanes in the North Atlantic). This is because the observing system has been improved so much over the last 150 years, that more storms are detected and the most intense parts of it are more likely to be measured (Vecchi and Knutson, 2011). There is also strong decadal variability in cyclone activity in many regions. All that said, most damage of tropical cyclones is caused by water: extreme rain and storm surges. Both observations and modelling show that extreme precipitation associated with hurricanes sees large increases. For the U.S. Gulf Coast, we found an increase of about 15% in extreme precipitation, including from non-cyclone events, over the last century (van der Wiel et al, 2017; van Oldenborgh et al, 2017). Storm surges are trivially higher due to sea level rise. This means that even if the theoretically expected increase in the most intense tropical cyclones is not yet detectable, their physical impacts have increased substantially already.
### Drought
Trends in drought strongly depend on the definition of drought. There are three common ones: meteorological drought, which is simply an absence of rain; agricultural drought, which is a deficit of soil moisture and thus includes evaporation (and sometimes irrigation); and hydrological drought, which also includes the transport of water. Trends in meteorological droughts are often hard to determine: drought is only a problem if the variability is large relative to the mean, but that also implies that natural variability is large compared to the trend (e.g., Philip et al, 2018, Uhe et al, 2017). Hydrological droughts, such as the one in California, can be caused by an increase in temperature rather than precipitation. This reduces the snowpack in spring and thus causes a shortage of stored water in the dry summer (e.g., Mote et al, 2016). Socio-economic drought, that is shortage of water for common use by society, is often caused by increased water use rather than decreased availability (e.g., Otto et al, 2015). It is, therefore, very hard to make general statements about drought.
### Conclusions
Observations of weather extremes show the expected long-term trends in line with the increase of the global average temperature: almost everywhere hotter heat extremes, almost everywhere less frigid cold extremes, in general more intense precipitation, but with variations from region to region, and more damage from hurricanes through more precipitation and higher storm surges. Other extremes are not so simply related to climate change, and we are undertaking background research to make rapid attribution of these extremes possible.
Thanks to Claudia Tebaldi for improvements to the text. Previous versions were published as KNMI klimaatbericht and on the Climate Lab Book blog. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442754745483398, "perplexity": 1686.7202990973412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00088.warc.gz"} |
https://planetmath.org/MaschkesTheorem | Maschke’s theorem
Let $G$ be a finite group, and $k$ a field of characteristic not dividing $|G|$. Then any representation $V$ of $G$ over $k$ is completely reducible.
Proof.
We need only show that any subrepresentation has a complement, and the result follows by induction.
Let $V$ be a representation of $G$ and $W$ a subrepresentation. Let $\pi:V\to W$ be an arbitrary projection, and let
$\pi^{\prime}(v)=\frac{1}{|G|}\sum_{g\in G}g^{-1}\pi(gv)$
This map is obviously $G$-equivariant, and is the identity on $W$, and its image is contained in $W$, since $W$ is invariant under $G$. Thus it is an equivariant projection to $W$, and its kernel is a complement to $W$. ∎
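A small numerical illustration of the averaging trick (my own example, not part of the entry): take $G=S_3$ permuting the coordinates of $k^3$, let $W$ be the invariant line spanned by $(1,1,1)$, start from an arbitrary non-equivariant projection onto $W$, and average over the group.

```python
import itertools
import numpy as np

# G = S_3 acting on k^3 by permutation matrices.
def perm_matrix(p):
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0          # sends basis vector e_i to e_{p(i)}
    return m

group = [perm_matrix(p) for p in itertools.permutations(range(3))]

w = np.array([1.0, 1.0, 1.0])  # spans the invariant subrepresentation W

# An arbitrary (non-equivariant) projection onto W: v -> v[0] * (1,1,1).
pi = np.outer(w, np.array([1.0, 0.0, 0.0]))

# Average over the group: pi' = (1/|G|) * sum_g g^{-1} pi g.
pi_avg = sum(np.linalg.inv(g) @ pi @ g for g in group) / len(group)

# pi' is still the identity on W, its image lies in W, and it commutes with
# every g, so it is an equivariant projection; its kernel complements W.
assert np.allclose(pi_avg @ w, w)
assert all(np.allclose(g @ pi_avg, pi_avg @ g) for g in group)
print(pi_avg)  # every entry equals 1/3, i.e. v -> mean(v) * (1,1,1)
```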
Title Maschke’s theorem MaschkesTheorem 2013-03-22 13:21:16 2013-03-22 13:21:16 bwebste (988) bwebste (988) 9 bwebste (988) Theorem msc 20C15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 18, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9840502142906189, "perplexity": 191.06355055696923}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088471.40/warc/CC-MAIN-20210416012946-20210416042946-00222.warc.gz"} |
https://tel.archives-ouvertes.fr/tel-00142413 | # THERMOMETRY AND COHERENCE PROPERTIES OF A ULTRACOLD QUANTUM GAS OF METASTABLE HELIUM
1 laboratoire Charles Fabry de l'Institut d'Optique / Optique atomique
LCFIO - Laboratoire Charles Fabry de l'Institut d'Optique
Abstract : In 2001 metastable Helium (He*) attained Bose-Einstein condensation (BEC). The metastable state has a lifetime of 9000 sec and an internal energy of 20 eV. This energy can be used to detect individual atoms using a micro-channel plate. The extremely good time response and high gain of this detector makes it possible to carry out a density correlation measurement (HBT) with massive particles similar to the pioneering experiment of R. Hanbury Brown and R. Twiss in optics. In addition, inelastic collisions between He* atoms produce a small but detectable flux of ions proportional to the cloud's density. This allows one to follow the evolution of the cloud's density toward BEC, passing through the phase transition, in real time and in a non invasive way.
In this dissertation we report on three different experiments: i) the determination of the two- and three-body ionizing rate constants of He*; ii) the determination of a, the He* scattering length; iii) the measure of the intensity correlation function of a falling He* cloud. It has been shown lately that our measure of a was affected by a large systematic error and we propose a possible explanation. We describe methods to determine the temperature and fugacity of a thermal cloud. Finally a major portion of the thesis is devoted to the derivation of an analytical expression for the intensity correlation function of the atomic flux. This theoretical analysis has derived typical values for the transverse and longitudinal atomic coherence length that confirmed the possibility of performing a HBT experiment with our apparatus.
Keywords :
Document type :
Theses
Atomic Physics. Université Paris Sud - Paris XI, 2007. English
Domain :
https://pastel.archives-ouvertes.fr/tel-00142413
Contributor : Jose Viana Gomes <>
Submitted on : Wednesday, April 18, 2007 - 10:50:53 PM
Last modification on : Friday, March 27, 2015 - 4:02:23 PM
### Identifiers
• HAL Id : tel-00142413, version 1
### Citation
José Carlos Viana Gomes. THERMOMETRY AND COHERENCE PROPERTIES OF A ULTRACOLD QUANTUM GAS OF METASTABLE HELIUM. Atomic Physics. Université Paris Sud - Paris XI, 2007. English. <tel-00142413>
Téléchargement du document | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425823450088501, "perplexity": 2720.673897766052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430459005968.79/warc/CC-MAIN-20150501054325-00080-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/67194/command-to-remove-just-deeper-level-headlines-from-the-toc | # Command To Remove Just Deeper Level Headlines From The ToC
When using * in a headline you remove the section from the toc. But sometimes you just want to remove a certain level of headlines. Rather than removing every single headline individually, it would be more pleasant to change one superordinate headline to serve that purpose.
I was wondering if it would be possible to create a *-based command that removes deeper level headlines without removing the level to where it is applied.
For example:
\section*{asdf}
\subsection{fdsa}
\section{B}
\subsection{C}
The asdf-headline will be displayed in the toc while the subsection will be removed. Both, section B and section C are part of the toc either.
Ideally, the command would not redefine \section* but would offer an alternative, so that it can be used at the same time as another command that was recently defined (Removing Subordinated Headline Levels From The ToC Automatically When Using *).
-
This question is a little unclear. What do you hope to achieve that the tocvsec2 package doesn't already do? See especially the \maxtocdepth and \settocdepth commands, which should be used after \begin{document}. – jon Aug 15 '12 at 0:03
I am also curious about this sentence: 'Both, section B and section C are part of the toc either.' It almost seems to be missing a 'not' (i.e., 'are not part of the toc either'), which is a very different question! – jon Aug 15 '12 at 3:51
Perhaps this does what you need (based on your code snippet):
\documentclass{report}
\usepackage{tocvsec2}
\begin{document}
\maxtocdepth{subsection}
\tableofcontents
\section*{asdf}
\subsection{fdsa}
\settocdepth{chapter} % <-- comment this line to see the difference
\chapter{A}
\section{B}
\subsection{C}
\settocdepth{subsection}
\chapter{D}
\section{E}
\subsection{F}
\end{document} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9241593480110168, "perplexity": 2115.93080315406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926964.7/warc/CC-MAIN-20150521113206-00067-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3508666/triangle-interpolation-with-6-control-points | # Triangle interpolation with 6 control points?
Through a costly simulation, I am able to calculate the value of a function at several discrete points on a plane. My task now is to interpolate, to find the values at all points of the grid. (It is a simulation of a sheet of rubber, with the sheet tessellated with a triangle grid.)
Now I have seen some similar questions on the site, on how to interpolate within a triangle, and the consensus seems to be to use barycentric coordinates. I have implemented this, and the result is shown on the left/top. For comparison, a result using the inverse of the distance to get the weights resulted in the picture on the right/bottom.
(I have not explicitly visualised the nodes, but from the second picture it's pretty obvious, where they are.)
Now, although the result with barycentric coordinates is not bad, esp. when compared to the other result, I'm not completely satisfied. Notice how the triangular/hexagonal structure is very visible, for example the bright lines emanating from the yellow spot.
My question: is there a better weighting function?
I strongly assume there is nothing better that only takes the 3 given control points into consideration, but I was wondering if there is a weighting function that uses the 6 nearest control points, so A-F instead of A-C in this figure:
Though there is a section on 'generalised' barycentric coordinates on the wikipedia site, I can't say it's something I understand how to apply. I've also looked at bicubic interpolation, but that is only possible on a square or rectangular grid.
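For reference, the 3-point barycentric weighting I implemented is essentially the following (a stripped-down sketch with invented nodal values):

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in the triangle (a, b, c)."""
    # Solve p = wa*a + wb*b + wc*c with wa + wb + wc = 1.
    t = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    wa, wb = np.linalg.solve(t, np.asarray(p, dtype=float) - np.asarray(c, dtype=float))
    return wa, wb, 1.0 - wa - wb

def interpolate(p, nodes, values):
    """Linear interpolation of the nodal values at p inside the triangle."""
    w = barycentric_weights(p, *nodes)
    return sum(wi * vi for wi, vi in zip(w, values))

# Invented triangle A, B, C and nodal values.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 2.0, 4.0]
print(interpolate((0.25, 0.25), tri, vals))  # 0.5*1.0 + 0.25*2.0 + 0.25*4.0 = 2.0
```

What I am after is essentially a drop-in replacement for barycentric_weights that returns six weights, one for each of the nodes A-F.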
Many thanks! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014256358146667, "perplexity": 344.1537312124626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250626449.79/warc/CC-MAIN-20200124221147-20200125010147-00175.warc.gz"} |
http://wwww.thermalfluidscentral.org/encyclopedia/index.php/Maxwell_Relations | # Maxwell relations
The fundamental thermodynamic relation for a reversible process in a single-component system, where the only work term considered is pdV, is obtained from eq. $dE \le TdS - \delta W$ from Thermodynamic property relations, i.e.,
$dE = TdS - pdV\qquad \qquad(1)$
which can also be rewritten in terms of enthalpy (H = E + pV), Helmholtz free energy (F = E - TS), and Gibbs free energy (G = H - TS) as
$dH = TdS + Vdp\qquad \qquad(2)$
$dF = - SdT - pdV\qquad \qquad(3)$
$dG = - SdT + Vdp\qquad \qquad(4)$
which all have the form of
$dz = Mdx + Ndy\qquad \qquad(5)$
Where
$M = {\left( {\frac{{\partial z}}{{\partial x}}} \right)_y}\qquad \qquad(6)$
$N = {\left( {\frac{{\partial z}}{{\partial y}}} \right)_x}\qquad \qquad(7)$
and dz is an exact differential, as thermodynamic properties like E,H,F, and G are path-independent functions.
Since eq. (5) is the total differential of function z,M and N are related by
${\left( {\frac{{\partial M}}{{\partial y}}} \right)_x} = {\left( {\frac{{\partial N}}{{\partial x}}} \right)_y} = \frac{{{\partial ^2}z}}{{\partial x\partial y}}\qquad \qquad(8)$
Applying eq. (8) to eqs. (1) – (4), the following relationships are obtained:
${\left( {\frac{{\partial T}}{{\partial V}}} \right)_S} = - {\left( {\frac{{\partial p}}{{\partial S}}} \right)_V} \qquad \qquad (9)$
${\left( {\frac{{\partial T}}{{\partial p}}} \right)_S} = {\left( {\frac{{\partial V}}{{\partial S}}} \right)_p}\qquad \qquad(10)$
${\left( {\frac{{\partial S}}{{\partial V}}} \right)_T} = {\left( {\frac{{\partial p}}{{\partial T}}} \right)_V}\qquad \qquad(11)$
${\left( {\frac{{\partial S}}{{\partial p}}} \right)_T} = - {\left( {\frac{{\partial V}}{{\partial T}}} \right)_p}\qquad \qquad(12)$
which are referred to as Maxwell relations. The goal of Maxwell relations is to find equivalent partial derivatives containing p,T, and V that can be physically measured and therefore provide a means of determining the change of entropy, which cannot be measured directly.
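As a quick sanity check of one of these identities (my own illustration, not part of the original article), relation (11), $\left( \partial S/\partial V \right)_T = \left( \partial p/\partial T \right)_V$, can be verified symbolically for an ideal gas described by its Helmholtz free energy:

```python
import sympy as sp

T, V = sp.symbols("T V", positive=True)
n, R = sp.symbols("n R", positive=True)
f = sp.Function("f")  # arbitrary temperature-only part of F

# Helmholtz free energy of an ideal gas (only the volume dependence is explicit).
F = -n * R * T * sp.log(V) + f(T)

p = -sp.diff(F, V)   # p = -(dF/dV)_T  ->  n*R*T/V
S = -sp.diff(F, T)   # S = -(dF/dT)_V

# Maxwell relation (11): (dS/dV)_T should equal (dp/dT)_V.
lhs = sp.diff(S, V)
rhs = sp.diff(p, T)
print(sp.simplify(lhs - rhs))  # 0
```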
## References
Faghri, A., and Zhang, Y., 2006, Transport Phenomena in Multiphase Systems, Elsevier, Burlington, MA. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309813976287842, "perplexity": 1551.1298256965872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00191.warc.gz"} |
https://pixmob.info/what-is-the-square-root-of-79/ | # What Is The Square Root Of 79
What Is The Square Root Of 79. The term can be written as ²√75.79 or 75.79^(1/2). As the index 2 is even and 75.79 is greater than 0, 75.79 has two real square roots: ²√75.79, which is positive and called the principal square root. Find the square root of 79.18.
## Square Root of 79 – How to Find the Square Root of 79? – Cuemath
√ is called the radical symbol, or simply the radical. Second root of 6.79:
2 is the index. The radicand is the number below the radical sign. Square root = ±2.6057628442.
Proof that the square root of 79 is 8.88819441731559. The square root of 79 is defined as the positive number that, multiplied by itself, gives 79. What is a square root?
The square root of 79 is expressed as √79 in the radical form and as 79^(1/2) in the exponent form. A perfect square is a number with an integer as its square root. This means that it's a product of an integer with itself. In decimal representation, the square root of 72 is 8.485 when rounded to three decimal places.
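Since 79 is a prime number, √79 cannot be simplified further in radical form; its decimal value can be approximated, for instance, with a few steps of Newton's method (a small sketch of my own, not taken from the page):

```python
def newton_sqrt(n, guess, steps=6):
    """Approximate sqrt(n) with Newton's iteration x -> (x + n/x) / 2."""
    x = guess
    for _ in range(steps):
        x = (x + n / x) / 2.0
    return x

print(newton_sqrt(79, 9.0))   # ~8.8881944173..., matching the value above
print(8.88819441731559 ** 2)  # ~79.0, confirming the rounded decimal value
```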
### SQUARE ROOT OF 79
#inhindi (how to find the square root, in Hindi) #squarerootof √1000 youtu.be/GZImvqJS7io √361 youtu.be/w5A1ldaep1o √23 youtu.be/ThD8xf_24zE √676 youtu.be/5GyIqP1tYrA √441 youtu.be/p9kuUFlFG7k √108 youtu.be/Nd4ihDeL-RE √729 youtu.be/F5aT2Ae78vE √500 youtu.be/9DmQzU-cjOY √216 youtu.be/sBF4NBo6rR0 √10000 youtu.be/MBRGech-kNE Find The Square Root youtu.be/kbaKXZDrZlQ √1225 youtu.be/vIQS_xRA2bY √1296 youtu.be/6qJ_lO9fApw √1024 youtu.be/IRmWgrKrtko √841 youtu.be/aordsEZ46Lw How to find the square root (in Hindi). Teacher Name: Surendra Khilery.
## Conclusion
🐯 Step-by-step explanation of how to find the square root of 79. I'll show you how to simplify the square root, write it in simplest radical form, and then how to approximate the decimal value. Here are other ways to write this problem: What is the square root of 79? How do you find the square root of 79? sqrt(79) √79 \sqrt { 79 } The square root of (79). What Is The Square Root Of 79.
What Is The Square Root Of 79. Use this calculator to find the principal square root and roots of real numbers. Inputs for the radicand x can be positive or negative real numbers. The answer will also tell you whether the number you entered is a perfect square.
Hi, I'm Kimberly. I have a ton of nicknames.. but you can call me Aki. When Kimberly is not scrapping, she is traveling, blog-hopping, reading, or editing photo shoots. And doing laundry! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8357294201850891, "perplexity": 1611.9533601294297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00380.warc.gz"} |
https://www.physicsforums.com/threads/youngs-modulus-question.216393/ | # Young's Modulus Question
1. Feb 19, 2008
### robsmith82
I seem to be going a bit brain dead here, but for Young's modulus, how are the units in GPa, as they are in one of my product specs I've just received?
Let's say we have stress in kN/m^2 and strain has no units, then surely E should be in kN/m^2?
2. Feb 19, 2008
### Mute
Pressure is measured in pascals, and pressure is defined as a force per unit area. So, a pascal is really one newton per unit area: N/m^2, for instance. So, kN/m^2 = kPa, and a GPa is of course just 10^6 kPa.
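For example (numbers of my own, just to illustrate the conversion): a spec-sheet value of E = 200 GPa is the same as 200 x 10^6 kPa = 2 x 10^8 kN/m^2, so a stress in kN/m^2 divided by a dimensionless strain really does give a modulus in pressure units; GPa is just the compact way of writing it.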
3. Feb 19, 2008
Doh.
Thanks! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8930944204330444, "perplexity": 2684.169702917065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741324.15/warc/CC-MAIN-20181113153141-20181113175141-00393.warc.gz"} |
https://math.stackexchange.com/questions/2957570/applying-gronwall-lemma-in-majda-bertozzi-book | # Applying Gronwall lemma in Majda-Bertozzi book.
I am studying the Majda-Bertozzi book about incompressible flows. I have applied the Gronwall lemma several times, but I do not know how to do it in the following case: We have $$|\nabla v(\cdot,t)|_{L^{\infty}}\le C\left(1+\int_0^t|\nabla v(\cdot,s)|_{L^{\infty}}\,ds\right)\left(1+|w(\cdot,t)|_{L^{\infty}}\right)$$ and we must use the Gronwall lemma to get: $$|\nabla v(\cdot,t)|_{L^{\infty}}\le|\nabla v_0|_0 \exp\left(\int_0^t|w(\cdot,s)|_{L^{\infty}}\,ds\right).$$ Thanks a lot!
• Where is this, presumably the energy methods chapter? You must need some constant in the final line right – Calvin Khor Oct 16 '18 at 7:46
• Probably yes. The goal (the proof of the theorem 3.6) is achieved if there is a constant in the final line right. But I do not know how to obtain the rhs of the conclusion. – Alex Oct 16 '18 at 7:57
• If I understand correctly, this is from the paper of Beale-Kato-Majda but the inequality in the paper there is different. projecteuclid.org/euclid.cmp/1103941230 maybe you can figure something out – Calvin Khor Oct 16 '18 at 8:56
$$q(t) \le c(t) + \int_0^t u(s) q(s) ds\implies q(t) \le c(0) \exp\left(\int_0^t u(s) ds\right) + \int_0^t c'(s)\left(\exp\int_s^t u(\tau)d\tau \right)ds$$ To put it in this form, you can set $$q(t) = \frac{|\nabla v(t)|_{L^\infty}}{1+|\omega(t)|_{L^\infty}}, u(t) = C(1+|\omega(t)|_{L^\infty}), c(t) = C$$ Then we have $$\frac{|\nabla v(t)|_{L^\infty}}{1+|\omega(t)|_{L^\infty}} \le C\exp\left(C\int _0^t 1+ |\omega(s)|_{L^\infty}ds \right)$$ i.e. $$|\nabla v(t)|_{L^\infty} \le C (1+|\omega(t)|_{L^\infty})\exp\left(C\int _0^t 1+ |\omega(s)|_{L^\infty}ds \right) = \frac{d}{dt} \exp \left(C\int _0^t 1+ |\omega(s)|_{L^\infty}ds \right)$$ And hence $$\int_0^t |\nabla v(s)|_{L^\infty} ds \le C\exp \left(C\int _0^t 1+ |\omega(s)|_{L^\infty}ds \right)$$ which you can plug into the $$H^m$$ energy estimate (3.79), $$\|u(T)\|_m \le \|u_0\|_m \exp\left(c_m\int_0^T|\nabla v(t)|_{L^\infty} dt \right)$$ to get the required a priori estimate. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857468008995056, "perplexity": 259.35521079374695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527148.46/warc/CC-MAIN-20190419061412-20190419083412-00493.warc.gz"} |
https://www.physicsforums.com/threads/neutron-decay.63186/ | # Neutron decay
1. Feb 8, 2005
### m00nd0g68
A neutron at rest in the laboratory spontaneously decays into a proton, an electron, and a small essentially massless particle called a neutrino. Calculate the kinetic energy of the proton and the electron in each of the following cases:
a) the neutrino has no kinetic energy
b) the neutrino has 300keV of kinetic energy and is traveling opposite the proton and the same direction as the electron
c) the neutrino has 300keV of kinetic energy and is traveling perpendicular to the proton and electron, which are traveling opposite of each other.
I have solved part a. But part b I am having a problem with. I understand that the neutrino's E = K + E_0 = K + mc^2 = K = 300 keV (where mc^2 = 0 because it has no mass), but I don't know how to use this with the energies of the other two. Do I simply subtract the energies of the proton, electron and the given K of the neutrino to get an amount that is a new total energy? On part c, how do I handle the 300 keV of the neutrino that is traveling perpendicularly to the proton and electron?
Any hints would be appreciated…
moondog
2. Feb 8, 2005
### dextercioby
Though it's obviously a relativistic problem, maybe some of the old tricks from Newtonian mechanics would help. Draw a vector (momentum) diagram and pay attention when you write the conservation of total momentum, i.e. write it in vector form and pay attention to the projections.
This is valid for point "b" and especially for point "c".
For the record:it's a (massless) electronic ANTIneutrino.
Daniel.
3. Feb 9, 2005
### Davorak
For relativistic mechanics you have to use
$$E^2 = p^2 +m^2$$
In natural units, instead of the old E1 + E2 ... En = Etot, you get:
$$E_{after}=E_{before}$$
$$\sqrt{p_{after 1}^2 + m_{after 1}^2} +\sqrt{p_{after 2}^2 + m_{after 2}^2}... = \sqrt{p_1^2 + m_1^2} +\sqrt{p_2^2 + m_2^2}...$$
Also you still have momentum conservation,
$$P_{after\ 1} + P_{after\ 2} + \dots = P_1 + P_2 + \dots$$
Hope this helps. There is also a four-vector notation for this math that simplifies some of it; maybe someone else can post it.
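For part (a) specifically, here is a quick numerical cross-check (a sketch of my own, using the standard rest energies in MeV): with no neutrino energy the proton and electron fly apart back-to-back with equal momenta, so you only need to solve one equation for that common momentum.

```python
# Solve sqrt(p^2 + m_e^2) + sqrt(p^2 + m_p^2) = m_n for p (units MeV, c = 1),
# then read off the kinetic energies of the electron and the proton.
m_n, m_p, m_e = 939.565, 938.272, 0.511   # rest energies in MeV

def total_energy(p):
    return (p**2 + m_e**2) ** 0.5 + (p**2 + m_p**2) ** 0.5

lo, hi = 0.0, 10.0            # the momentum is well below 10 MeV/c
for _ in range(60):           # bisection on a monotonically increasing function
    mid = 0.5 * (lo + hi)
    if total_energy(mid) < m_n:
        lo = mid
    else:
        hi = mid

p = 0.5 * (lo + hi)
K_e = (p**2 + m_e**2) ** 0.5 - m_e
K_p = (p**2 + m_p**2) ** 0.5 - m_p
print(f"p = {p:.3f} MeV/c, K_e = {1000*K_e:.0f} keV, K_p = {1000*K_p:.2f} keV")
# roughly K_e ~ 781 keV and K_p ~ 0.75 keV
```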
4. Feb 9, 2005
### m00nd0g68
My results so far...
For part a I calculated a kinetic energy of 0.782MeV for the electron and 0.752keV for the proton.
For part b I calculated a kinetic energy of 0.483 MeV for the electron and 0.387 keV for the proton.
Lastly, for part c I calculated 1.08MeV for the electron and 1.21keV for the proton.
Does that make sense?
moondog
5. Feb 9, 2005
### dextercioby
Are you sure about the numbers? I mean, taking into account the mass ratio, the electron would be 1 million times faster than the proton...
Daniel.
6. Feb 10, 2005
### m00nd0g68
Perpindicular neutrino travel
The main part I am not understanding is how to deal with the perpendicular travel of the neutrino. Since E=K+E0=K+mc^2 and the neutrino is massless this means that E=K=.300MeV. How do I deal with this in the perpendicular direction?
moondog
7. Feb 10, 2005
### Staff: Mentor
There is total (kinetic + rest) energy, a scalar quantity.
And then there is momentum, a vector quantity, with components in 2 dimensions, e.g. x-direction and y-direction. The neutrino has momentum, in the direction of travel, which must equal the momentum components of the electron and proton, which are in the opposite direction.
See also - Neutrino Nuclear Physics (1.04 Mb, 50 pages, Japanese language support not necessary - download with save target as).
Similar Discussions: Neutron decay | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438835740089417, "perplexity": 1452.674447601735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720026.81/warc/CC-MAIN-20161020183840-00499-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://hmf.enseeiht.fr/travaux/bei/beiep/content/2012-g16/droplets-distribution | Droplets distribution
Establishment of a model for the water bombing from an aircraft
When a drop is too big, it gives birth to smaller droplets according to several breakup regimes, depending on the Weber number:
$We=\frac{\text{aerodynamic force}}{\text{surface tension force}}=\frac{\rho_G \| \vec{U_G}-\vec{V_p} \|^2 d_p}{\sigma}$
Breakup regimes
Another dimensionless number plays a role in the atomization process: the Bond number
$Bo=\frac{\text{gravity force}}{\text{surface tension force}}=\frac{\rho_G g {d_p}^2}{\sigma}$
To remain stable, a droplet has to satisfy the following conditions :
• $We_{drop} \le We_{cr}$
• $Bo_{drop} \le Bo_{cr}$
When it does not, the drop - called the "mother" drop - gives birth to small droplets, called "children" droplets. To simplify the problem here, we chose to consider the division of the "mother" drop into two "children" droplets. The centre of mass C of the newly formed droplet will be randomly located on a sphere around the centre of mass M of the "mother" drop, the radius being equal to the mother drop's diameter:
Spherical coordinates
Our approach consisted in studying each droplet, evaluating its Weber and Bond numbers. Big at the beginning, all the drops are divided into smaller droplets, until the "children" droplets satisfy the stability criteria. Hence, at the end, the stable droplets are all the same size.
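A schematic version of this cascade (a simplified sketch; the property values are invented, and the real model also randomizes each child's position on the sphere described above) just keeps splitting any droplet whose Weber or Bond number exceeds its critical value:

```python
# Split every unstable droplet into two equal-volume children until
# We <= We_cr and Bo <= Bo_cr for all of them (illustrative values only).
rho_g, sigma, g = 1.2, 0.072, 9.81   # gas density, surface tension, gravity
u_rel = 30.0                         # relative velocity |U_G - V_p| in m/s
we_cr, bo_cr = 12.0, 16.0            # critical Weber and Bond numbers

def weber(d):
    return rho_g * u_rel**2 * d / sigma

def bond(d):
    return rho_g * g * d**2 / sigma

def breakup(diameters):
    stable = []
    while diameters:
        d = diameters.pop()
        if weber(d) <= we_cr and bond(d) <= bo_cr:
            stable.append(d)
        else:
            child = d / 2.0 ** (1.0 / 3.0)   # two children of equal volume
            diameters += [child, child]
    return stable

drops = breakup([5e-3])   # a single 5 mm "mother" drop
print(len(drops), "stable droplets of diameter", drops[0], "m")
```

Because every unstable drop is split the same way, the surviving droplets all end up with the same diameter, consistent with the distribution shown below.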
The droplets distribution is as below :
Droplets distribution right after the second atomization
The next step, now, is to calculate the movement equation of every single drop, in order to evaluate the mark on the ground. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8526749610900879, "perplexity": 2026.5604276416466}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00018.warc.gz"} |
http://www.ni.com/documentation/en/labview-comms/1.0/mnode-ref/pi/ | Version:
Represents the value of pi (3.1415926535897...). pi is defined as the circumference of a circle divided by its diameter.
## Syntax
c = pi
Value of pi. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9240769743919373, "perplexity": 2731.766951837534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490806.45/warc/CC-MAIN-20190219162843-20190219184843-00037.warc.gz"} |
https://brilliant.org/problems/lattice-path/ | # Getting From Here To There On A Lattice Grid
Discrete Mathematics Level 5
What is 8 more than the mean area under a path of length 129 that goes only right or up through a square grid and starts at the origin?
Note: If the path ends at $$(X,Y)$$, then the area is bounded by the path, the $$x$$-axis, and the line $$x = X$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9829860329627991, "perplexity": 352.7988214128812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719453.9/warc/CC-MAIN-20161020183839-00348-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://mathhelpforum.com/pre-calculus/104142-can-you-colve-x.html | # Thread: can you colve for x?
1. ## can you colve for x?
Solve the equation $4^{x-x^2}=\frac{1}{256^{x}}$ for x.
Solve the equation for x.
First note that 256=4^4 so we change the question into
$4^{x-x^2}=\frac{1}{(4^4)^x}=\frac{1}{4^{4x}}$
Multiply both sides by $4^{4x}$ to get $4^{4x}\cdot 4^{x-x^2}=1$
$4^{4x+x-x^2}=1$
$4^{5x-x^2}=1$
And we need the exponent to be 0 since $x^0=1$ (you can take the log of both sides, but log 1=0 so we reach the same conclusion)
So now just solve $5x-x^2=0$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823032021522522, "perplexity": 1287.4456755325562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660342.42/warc/CC-MAIN-20160924173740-00302-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://stacks.math.columbia.edu/tag/0B1R | Lemma 43.23.3. Let $V$ be a vector space of dimension $n + 1$. Let $Y, Z \subset \mathbf{P}(V)$ be closed subvarieties. There is a nonempty Zariski open $U \subset \mathbf{P}(V)$ such that for all closed points $p \in U$ we have
$Y \cap r_ p^{-1}(r_ p(Z)) = (Y \cap Z) \cup E$
with $E \subset Y$ closed and $\dim (E) \leq \dim (Y) + \dim (Z) + 1 - n$.
Proof. Set $Y' = Y \setminus Y \cap Z$. Let $y \in Y'$, $z \in Z$ be closed points with $r_ p(y) = r_ p(z)$. Then $p$ is on the line $\overline{yz}$ passing through $y$ and $z$. Consider the finite type scheme
$T = \{ (y, z, p) \mid y \in Y', z \in Z, p \in \overline{yz}\}$
and the morphism $T \to \mathbf{P}(V)$ given by $(y, z, p) \mapsto p$. Observe that $T$ is irreducible and that $\dim (T) = \dim (Y) + \dim (Z) + 1$. Hence the general fibre of $T \to \mathbf{P}(V)$ has dimension at most $\dim (Y) + \dim (Z) + 1 - n$, more precisely, there exists a nonempty open $U \subset \mathbf{P}(V) \setminus (Y \cup Z)$ over which the fibre has dimension at most $\dim (Y) + \dim (Z) + 1 - n$ (Varieties, Lemma 33.20.4). Let $p \in U$ be a closed point and let $F \subset T$ be the fibre of $T \to \mathbf{P}(V)$ over $p$. Then
$(Y \cap r_ p^{-1}(r_ p(Z))) \setminus (Y \cap Z)$
is the image of $F \to Y$, $(y, z, p) \mapsto y$. Again by Varieties, Lemma 33.20.4 the closure of the image of $F \to Y$ has dimension at most $\dim (Y) + \dim (Z) + 1 - n$. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9977220296859741, "perplexity": 85.33707409939784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00250.warc.gz"} |
https://socratic.org/questions/what-are-some-examples-of-alternative-energy-resources | Earth Science
# What are some examples of alternative energy resources?
Feb 22, 2015
Examples of alternative energy sources besides fossil fuels (coal, natural gas, oil) could include harnessing the power of the sun (solar), wind, waves (hydro), or the earth itself (geothermal). These energy sources are considered 'renewable' energy sources, as they will not run out.
Nuclear power is also considered 'renewable' as the earth contains a limited amount of nuclear fuel, but there is enough to last for thousands of years. So while this energy source will eventually run out, it won't be for a very long time. However, although nuclear power does not produce any atmospheric pollution, there is still the matter of disposing of the radioactive nuclear fuel safely.
253 views around the world | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9853269457817078, "perplexity": 1522.0942004897058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256100.64/warc/CC-MAIN-20190520182057-20190520204057-00380.warc.gz"} |
http://math.stackexchange.com/questions/146241/there-are-no-simple-groups-of-order-2n-5-for-n-geq-4 | # There are no simple groups of order $2^n.5$ for $n\geq 4$.
How do you show that there are no simple groups of order $2^n\times 5$ for $n\geq 4$, without using the theorem that a finite group of order $p^nq^m$, where p, q are primes and $m,n\geq 1$ is not simple.
I have a hint to 'use the coset action determined by the Sylow 2-subgroup', but I'm not sure what this means.
-
If this is homework please tag this with the homework tag – Belgi May 17 '12 at 11:29
It's not homework, I'm just going through some questions that don't come with solutions. – 09867 May 17 '12 at 11:30
Hint: There are either 5 or 1 Sylow 2-subgroups (why?). Assuming there are 5 and we let $G$ act by conjugation on the Sylow 2-subgroups we get a non-trivial homomorphism from $G$ to $S_5$. What can we say about its kernel?
@09867 What's the order of G and what's the order of $S_5$. – JSchlather May 17 '12 at 12:20
$|S_5|=120$, $|G|=2^n.5$. So if $n\geq 4$ $|G|\geq 80$. So.. this homomorphism can't exist because 80 doesn't divide 120? – 09867 May 17 '12 at 12:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9489374756813049, "perplexity": 228.82224260473626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829393.78/warc/CC-MAIN-20140820021349-00350-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/595545/about-saving-a-drawing-as-a-tikz-file | # About saving a drawing as a tikz file
This answer describes how to include a tikz file in your document, whether it is an article, a book or a beamer presentation.
It works fine, up until the point that you call for some other libraries.
Suppose I have the following graphics:
% File myDrawing.tex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\usetikzlibrary{math} %needed tikz library
\tikzmath{\x1 = 1; \y1 =1; \x2 = \x1 + 1; \y2 =\y1 +3; }
% Using the variables for drawing
\begin{tikzpicture}
\draw[very thick, -stealth] (\x1, \y1)--(\x2, \y2);
\end{tikzpicture}
\end{document}
As you can see, it consists mainly of a tikzpicture environment, but it also calls for the math tikz library, and declares the variables \x1 and \y1 outside the tikzpicture environment.
As yet another example, consider this pgfplots part of code:
\documentclass{standalone}
\usepackage{pgfplots}
\usetikzlibrary{arrows.meta}
\begin{document}
\begin{tikzpicture}
\begin{axis}
\end{axis}
\end{tikzpicture}
\end{document}
It makes use of the arrows library and sets the pgfplots version, outside the tikzpicture environment, too.
Is it possible to create a file myDrawing.tikz that I can transplant in my main document, and also make use of the tikzscale package? If the answer is yes, then how? Can you post an example?
I am having difficulties with correctly and nicely (in)putting a drawing inside my document. Particularly with the height-to-width aspect ratio.
Until now I saved the drawing as a tex file, under a standalone document class, and worked with \includestandalone[width=...]{path/to/file} as described here. But sometimes I want the figure to be inside a \begin{columns} \column{0.4\textwidth} \end{columns} environment in a beamer presentation, and sometimes in the side margin of a book document, with a width=\marginparwidth option.
The tikzscale package looks very promising, but its documentation instructs its use only when working with tikz files.
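To make the target concrete, this is the kind of setup I have in mind (an untested sketch on my part, so the exact tikzscale behaviour here is an assumption): the .tikz file contains only the picture, the libraries move to the main preamble, and tikzscale's patched \includegraphics takes the width.

```latex
% myDrawing.tikz -- only the picture, no \documentclass, no \begin{document}
\begin{tikzpicture}
  \tikzmath{\x1 = 1; \y1 = 1; \x2 = \x1 + 1; \y2 = \y1 + 3;}
  \draw[very thick, -stealth] (\x1, \y1) -- (\x2, \y2);
\end{tikzpicture}
```

```latex
% main.tex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{math}   % libraries now live in the main preamble
\usepackage{graphicx}
\usepackage{tikzscale}
\begin{document}
\includegraphics[width=\marginparwidth]{myDrawing.tikz}
\end{document}
```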
• This sounds a bit complicated :) do you use the same drawings in many different documents? If not then adding the full code of the drawings in the document itself seems like the easiest solution - possibly combined with an editor that supports code folding if you think that all the code is cluttering the view. – Marijn May 3 at 19:04
• @Marijn I would give it a try, as I am working with Vim and folding is possible there. Can you show explicitly how? – tush May 3 at 19:05
• Most commands like \tikzset or \pgfmath can be used both inside or outside a tikzpicture environment. The only exception I know of is \tikzfadings (see tex.stackexchange.com/questions/277784/…). – John Kormylo May 3 at 19:14
• @John Kormylo I didn't know that! That works, and it sounds like a good answer to my question. – tush May 3 at 20:42
1. Enter full code. Set the foldmethod to manual.
2. Select the tikzpicture environment with visual selection (ctrl-v)
3. Press zf to fold. Continue editing the rest of the document. If you want to go inside the fold, press zd.
4. When you exit Vim enter :mkview to save the folds. When you start Vim the next time the code shows unfolded. Enter :loadview to re-apply the folds from the previous session.
Note that you can of course automate most of the commands in your .vimrc.
https://chemistry.stackexchange.com/questions/168054/why-would-a-reaction-be-nonspontaneous-at-higher-temperatures | # Why would a reaction be nonspontaneous at higher temperatures?
Typically we think of a higher temperature speeding up the reaction rate and/or supplying the activation energy of a reaction. So why is it the case that some reactions are only spontaneous at lower temperatures?
Using the gibbs free energy equation $$\Delta G = \Delta H - T \Delta S$$, If I have a reaction where $$\Delta H$$ is negative (exothermic?) and $$\Delta S$$ is negative it makes $$\Delta G$$ positive at higher temperatures which means the reaction is nonspontaneous at higher temperatures. Why would this be the case?
• LeChatelier's principle says a reaction that creates heat gets slower (and ultimately goes backwards) at higher temperature. And btw. it's totally arbitrary which one is the "forward" direction of any reaction.
– Karl
Sep 23, 2022 at 20:49
• @Karl Not slower, the net reaction ultimately is backwards. LeChatelier talks about equilibrium thermodynamics, not kinetics.
– Karsten
Sep 23, 2022 at 21:46
• @Karsten I agree, but Karl isn't completely wrong. If the equilibrium constant shifts, then the forward and backward rate constants do too (as $K = k_f/k_r$). A smaller equilibrium constant does indicate a slower forward reaction (and/or faster backward reaction). Sep 24, 2022 at 0:38
• You are mixing the thermodynamics and kinetics of a reaction. Thermodynamics just predicts whether the reaction will happen or not; kinetics predicts how fast it will happen. If the reaction is feasible, then we look at its kinetics. Oct 25, 2022 at 5:04
Thermodynamics studies systems at equilibrium, so changing the temperature moves the equilibrium from one position to another. For a given change we wish to know if the change of state will occur spontaneously or whether some amount of work is needed to make the change occur. In the gas phase this work is usually called 'PV' work, such as a change in volume. From the second law there must be overall entropy production, so that the total change in entropy of the 'system', i.e. the reaction, plus that of the surroundings is always positive, but it can be zero if the conditions are reversible. It can be shown using the First and Second Laws that the Gibbs free energy must decrease if the total entropy production is to be positive, i.e. the Gibbs free energy change must be negative for a spontaneous reaction.
When the temperature changes we can use the van't Hoff isochore to predict what happens:
$$\frac{d\ln(K_p)}{dT}=\frac{\Delta H^{\text{o}}}{RT^2}$$
which when integrated (assuming that $$\Delta H^{\text{o}}$$ is independent of temperature over a small temp range which is often a good approximation) produces a gradient of $$-\Delta H^{\text{o}}/R$$ when $$\ln(K)$$ is plotted vs $$1/T$$.
(An aside. Notice that the entropy is not involved, the species involved are the same but in different proportions and there is always entropy of mixing which compensates. You can see this also from $$G=H-TS$$ where $$\displaystyle \left( \frac{\partial G}{\partial T} \right)_p = -S$$ is a constant as temperature changes.)
Thus for an exothermic reaction such as the formation of ammonia from hydrogen and nitrogen since $$\Delta H<0$$ the equilibrium constant $$K_p$$ must decrease as the temperature increases and less ammonia is in the equilibrium mixture as the temperature is increased, as is experimentally verified. The opposite is true for an endothermic reaction, i.e. dissociation of $$\ce{N_2O_4}$$.
If you have measured the forward and reverse rate constants, defining the equilibrium constant as the ratio of rate constants and using the Arrhenius equation produces
$$K_e= \frac{k_f}{k_b}=\frac{k_f^0}{k_b^0}e^{-(E_f-E_b)/RT}$$
where $$k_f, k_b$$ are the rate constants and $$E_f, E_b$$ the activation energies. An exothermic reaction has $$E_b > E_f$$ and the equilibrium constant decreases on increasing the temperature just as predicted by the thermodynamic argument.
Finally the Gibbs energy is related to the equilibrium constant as
$$\displaystyle \Delta G^{\text{o}}=-RT\ln(K_p)$$
so the sign of $$\Delta G^{\text{o}}$$, positive or negative, depends only on whether the equilibrium constant is less than or greater than one.
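As a quick numerical illustration of the integrated van't Hoff equation (the $\Delta H^{\text{o}}$ and $K$ values below are assumptions chosen to resemble ammonia synthesis, not measured data):

```python
import math

R = 8.314               # J/(mol K)
delta_H = -92.0e3       # J/mol, assumed standard enthalpy (negative = exothermic)
T1, K1 = 298.0, 6.8e5   # assumed reference temperature and equilibrium constant

def K_at(T2):
    """Integrated van't Hoff equation, taking delta_H as temperature independent."""
    return math.exp(math.log(K1) - (delta_H / R) * (1.0 / T2 - 1.0 / T1))

for T in (298, 400, 500, 700):
    print(f"T = {T:3.0f} K   K = {K_at(T):.2e}")
# K falls by many orders of magnitude as T rises: the exothermic forward direction,
# spontaneous from standard state at 298 K, is no longer spontaneous at high T.
```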
You expect both the forward and the reverse reaction to proceed faster at higher temperature. If the reverse reaction speeds up by a higher factor, this will affect the equilibrium.
"Spontaneous" is a technical term that does not reflect our day-to-day use of spontaneous. If you substitute the longer "goes forward or would have to go forward, starting from standard state, to attain equilibrium", it might make more sense.
If I start at standard state at a given temperature, either the forward reaction will be faster than the reverse reaction, or vice versa. If I repeat the experiment at higher temperature, I will see an increased forward rate and an increased reverse rate. These increases are typically not of the same magnitude, however.
So the direction that the reaction takes to reach equilibrium, starting from standard conditions, can be different depending on temperature. With respect to the direction of the reaction as written, it can change from spontaneous to non-spontaneous, or from non-spontaneous to spontaneous. If you write the reaction in the other direction, as Karl mentions, you come to the opposite conclusion.
In conclusion, there is always one net direction of the reaction that becomes "more favored", and the other net direction that becomes "less favored" as you increase the temperature and individual rates go up.
• Thank you for your answer. So basically what it means is that its opposite reaction would be favorable at the high temperature instead. Sep 24, 2022 at 17:11
• Yes, that is always the case. If a reaction is non-spontaneous, the reverse reaction is spontaneous. It just says that equilibrium is either reached in the forward or the reverse direction (unless we are already at equilibrium, in which case we don't see any net reaction).
– Karsten
Sep 24, 2022 at 17:38
https://physics.stackexchange.com/questions/616149/amperian-loop-and-magnetic-field | # Amperian loop and magnetic field
Suppose I have a long solenoid with $$n$$ turns per unit length. The current through the solenoid is changing and I am interested in finding the magnetic field $$(B)$$ inside. So, we choose an amperian path as in this figure and equate $$\oint B.dl= \mu_0\times nli$$ $$\Rightarrow B = \mu_0\times ni$$ and then differentiating wrt. time and using $$\oint E_{induced}.dl= -\frac{d\phi}{dt}$$ we get the E field at a distance $$r$$ from the centre of the loop equals $$\dfrac{\mu_0\times nr}{2}\times \dfrac{di}{dt}$$. And in the very first step of choosing amperian path, we have an extra term $$\mu_0\epsilon_0 \dfrac{d\phi_E}{dt}$$ which depends upon the second time derivative of current. So, why is this term ignored in the derivation.
That term comes from the fact that in vacuum $$i$$ needs to be seen as $$i_{tot} = i + \epsilon_0 \frac{d \phi_E}{dt}$$ (for a more in-depth view you can search "displacement current" on Wikipedia), which accounts for the current not being constant in time in Ampère's law (to be precise, all the discussion is made around the 4th Maxwell equation, which is basically the non-integral version of Ampère's law).
$$\oint \vec B \cdot d\vec l = \mu_0 i_{tot} = \mu_0 i + \mu_0\epsilon_0 \frac{d \phi_E}{dt}$$
Most of the time the displacement current can be considered negligible (the $$\epsilon_0 \frac{d \phi_E}{dt}$$ term is very small in comparison to $$i$$), and in your case this term can be ignored.
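To see just how ignorable it is, here is a rough order-of-magnitude comparison for a sinusoidally driven solenoid; every number below is an assumption picked only to set the scale:

```python
import math

mu0, eps0 = 4e-7 * math.pi, 8.854e-12   # SI constants
n, I0, f = 1000, 1.0, 50.0              # turns/m, current amplitude (A), drive (Hz) -- assumed
r, l = 0.05, 0.10                       # loop radius and Amperian path length (m) -- assumed
omega = 2 * math.pi * f

# Conduction current threaded by the rectangular Amperian loop:
I_cond = n * l * I0

# Inside, E = (mu0*n*r/2) di/dt, so with |d^2 i / dt^2| <= omega^2 * I0 and a loop
# area of roughly r*l, the correction eps0 * dPhi_E/dt is at most about:
I_disp = eps0 * (mu0 * n * r / 2) * omega**2 * I0 * (r * l)

print(f"n*l*i term          : {I_cond:.2e} A")
print(f"eps0*dPhi_E/dt term : {I_disp:.2e} A")
print(f"ratio               : {I_disp / I_cond:.1e}")   # ~1e-15 at 50 Hz
```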
• yes the final result should be the same. But you can just not consider the displacement current from the beginning, it being many orders of magnitude smaller than $\mu_0 i$ (you don't need to "demonstrate" an approximation like that, it's simply common sense) – lorenzo Baldessarini Feb 21 at 14:58
https://www.cut-the-knot.org/do_you_know/Buratino1.shtml | # Golden Ratio in Square
Tran Quang Hung has posted on the CutTheKnotMath facebook page a simple construction of the golden ratio.
### Construction
In square $ABCD$ $M,N,P,Q$ are midpoints of the sides, as shown below; $F$ the center of the square. Then circle $(AP)$ on $AP$ as a diameter, cuts $NF$ (point $X$) in Golden Ratio. Chord $AX$ cuts $MF$ in Golden Ratio:
Then $\displaystyle\frac{FX}{NX}=\frac{MZ}{FZ}=\phi,$ the golden ratio.
### Proof
Assume, without loss of generality, that $AB=2.$ Then $AP=\sqrt{5}.$ Since $AP$ is a diameter of the circle, $E$ - the intersection of $AP$ and $NQ$ is its center, which shows that $EX=\frac{1}{2}\sqrt{5}$ and that $EF=\frac{1}{2}.$ It follows that $FX=\frac{1}{2}(\sqrt{5}-1)$ and $NX=1-FX=\frac{1}{2}(3-\sqrt{5}),$ with a consequence that $\displaystyle\frac{FX}{NX}=\frac{1+\sqrt{5}}{2}=\phi.$
Next, triangles $AQX$ and $ZFX$ are similar: $\displaystyle\frac{FZ}{AQ}=\frac{FX}{QX},$ implying
$\displaystyle FZ=\frac{FX}{FX+1}=\frac{\sqrt{5}-1}{\sqrt{5}+1}=NX.$
We thus also have $MZ=FX$ and $\displaystyle\frac{MZ}{FZ}=\frac{FX}{NX}=\phi.$
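A quick numerical replay of the computation above, with $AB=2$ (the similar-triangle step uses $AQ=1$ and $QX=QF+FX=1+FX$):

```python
import math

phi = (1 + math.sqrt(5)) / 2

AP = math.sqrt(5)      # diameter of the circle on AP
EX = AP / 2            # radius
EF = 0.5
FX = EX - EF           # (sqrt(5) - 1) / 2
NX = 1 - FX            # (3 - sqrt(5)) / 2
FZ = FX / (FX + 1)     # from triangles AQX ~ ZFX with AQ = 1, QX = 1 + FX
MZ = 1 - FZ            # since MF = 1

print(FX / NX, MZ / FZ, phi)   # all three agree: 1.6180339887...
```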
https://waseda.pure.elsevier.com/ja/publications/a-numerical-proof-algorithm-for-the-non-existence-of-solutions-to | # A numerical proof algorithm for the non-existence of solutions to elliptic boundary value problems
Kouta Sekine*, Mitsuhiro T. Nakao, Shin'ichi Oishi, Masahide Kashiwagi
*Corresponding author for this work
## Abstract
In 1988, M.T. Nakao developed an algorithm that was based on the fixed-point theorem on Sobolev spaces for the numerical proof of the existence of solutions to elliptic boundary value problems on a bounded domain with a Lipschitz boundary (Nakao (1988) [9]). Thereafter, many researchers reported that the numerical existence proof algorithm to elliptic boundary value problems is actually significant and sufficiently useful. However, the numerical proof of the non-existence of solutions to the problem has hitherto not been considered due to several challenges. The purpose of this paper is to solve these difficulties and to propose an algorithm for the numerical proof of the non-existence of solutions in a closed ball $\bar{B}_{H_0^1}(\hat{u},\rho)=\{u\in H_0^1(\Omega) \mid \|u-\hat{u}\|_{H_0^1}\leq\rho\}$ to elliptic boundary value problems. We demonstrate some numerical examples that confirm the usefulness of the proposed algorithm.
Original language: English · Pages: 87-107 (21 pages) · Journal: Applied Numerical Mathematics, Vol. 169 · DOI: https://doi.org/10.1016/j.apnum.2021.06.011 · Status: Published - Nov 2021
• Numerical Analysis
• Computational Mathematics
• Applied Mathematics
http://umj-old.imath.kiev.ua/volumes/issues/?lang=en&year=2016&number=11 | 2019
Том 71
№ 11
# Volume 68, № 11, 2016
Article (Russian)
### Laplacian with respect to the measure on a Riemannian manifold and the Dirichlet problem. II
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1443-1449
We propose the $L^2$-version of Laplacian with respect to measure on an (infinite-dimensional) Riemannian manifold. The Dirichlet problem for equations with the proposed Laplacian is solved in a part of the Riemannian manifold of a certain class.
Article (Ukrainian)
### Almost periodic solutions of systems with delay and nonfixed times of impulsive action
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1450-1466
We study the existence and asymptotic stability of piecewise continuous almost periodic solutions for systems of differential equations with delay and nonfixed times of impulsive action that can be regarded as mathematical models of neural networks.
Article (Ukrainian)
### Estimations of the Laplace – Stieltjes integrals
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1467-1482
We study the Laplace – Stieltjes integrals with an arbitrary abscissa of convergence. The lower and upper estimates for these integrals are established. The accumulated results are used to deduce the relationships between the growth of the integral and the maximum of the integrand.
Article (English)
### General proximal point algorithm for monotone operators
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1483-1492
We introduce a new general proximal point algorithm for an infinite family of monotone operators in a real Hilbert space. We establish strong convergence of the iterative process to a common zero point of the infinite family of monotone operators. Our result generalizes and improves numerous results in the available literature.
Article (Ukrainian)
### I. Approximative properties of biharmonic Poisson integrals in the classes $W^r_{\beta} H^{\alpha}$
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1493-1504
We deduce asymptotic equalities for the least upper bounds of approximations of functions from the classes $W^r_{\beta} H^{\alpha}$, and $H^{\alpha}$ by biharmonic Poisson integrals in the uniform metric.
Article (Ukrainian)
### Asymptotically independent estimators in a structural linear model with measurement errors
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1505-1517
We consider a structural linear regression model with measurement errors. A new parameterization is proposed, in which the expectation of the response variable plays the role of a new parameter instead of the intercept. This enables us to form three groups of asymptotically independent estimators in the case where the ratio of variances of the errors is known and two groups of this kind if the variance of the measurement error in the covariate is known. In this case, it is not assumed that the errors and the latent variable are normally distributed.
Article (Ukrainian)
### Sufficient conditions under which the solutions of general parabolic initial-boundaryvalue problems are classical
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1518-1527
We establish new sufficient conditions under which the generalized solutions of initial-boundary-value problems for the linear parabolic differential equations of any order with complex-valued coefficients are classical. These conditions are formulated in the terms of belonging of the right-hand sides of this problem to certain anisotropic H¨ormander spaces. In the definition of classical solution, its continuity on the line connecting the lateral surface with the base of the cylinder (in which the problem is considered) is not required.
Article (Ukrainian)
### Two-dimensional Coulomb dynamics of two and three equal negative charges in the field of two equal fixed positive charges
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1528-1539
Periodic and bounded for positive time solutions of the planar Coulomb equation of motion for two and three identical negative charges in the field of two equal fixed positive charges are found. The systems possess equilibrium configurations to which the found bounded solutions converge in the infinite time limit. The periodic solutions are obtained with the help of the Lyapunov center theorem.
Article (English)
### Hypersurfaces with nonzero constant Gauss – Kronecker curvature in $M^{n+1}(±1)$
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1540-1551
We study hypersurfaces in a unit sphere and in a hyperbolic space with nonzero constant Gauss – Kronecker curvature and two distinct principal curvatures one of which is simple. Denoting by $K$ the nonzero constant Gauss – Kronecker curvature of hypersurfaces, we obtain some characterizations of the Riemannian products $S^{n-1}(a) \times S^1(\sqrt{1 - a^2}),\quad$ $a^2 = 1/\left(1 + K^{\frac{2}{n - 2}}\right)$ or $S^{n-1}(a) \times H^1(- \sqrt{1 + a^2}),\quad$ $a^2 = 1/\left(K^{\frac{2}{n - 2}} - 1\right)$.
Article (English)
### A construction of regular semigroups with quasiideal regular *-transversals
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1552-1560
Let $S$ be a semigroup and let “$\ast$ ” be a unary operation on $S$ satisfying the following identities: $$xx^{\ast} x = x, x^{\ast} xx^{\ast} = x^{\ast},\; x^{\ast \ast \ast} = x^{\ast},\; (xy^{\ast} )^{\ast} = y^{\ast \ast} x^{\ast},\; (x^{\ast} y)^{\ast} = y^{\ast} x^{\ast \ast}.$$ Then $S^{\ast} = \{ x^{\ast} \mid x \in S\}$ is called a regular $\ast$-transversal of $S$ in the literature. We propose a method for the construction of regular semigroups with quasi-ideal regular $\ast$-transversals based on the use of fundamental regular semigroups and regular $\ast$-semigroups.
Article (English)
### On the growth of meromorphic solutions of difference equation
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1561-1570
We estimate the order of growth of meromorphic solutions of some linear difference equations and study the relationship between the exponent of convergence of zeros and the order of growth of the entire solutions of linear difference equations.
Brief Communications (Ukrainian)
### Complete classification of finite semigroups for which the inverse monoid of local automorphisms is a permutable semigroup
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1571-1578
A semigroup $S$ is called permutable if $\rho \circ \sigma = \sigma \circ \rho$ for any pair of congruences $\rho, \sigma$ on $S$. A local automorphism of semigroup $S$ is defined as an isomorphism between two of its subsemigroups. The set of all local automorphisms of the semigroup $S$ with respect to an ordinary operation of composition of binary relations forms an inverse monoid of local automorphisms. We present a complete classification of finite semigroups for which the inverse monoid of local automorphisms is permutable.
Brief Communications (Ukrainian)
### On the equivalence of some perturbations of the operator of multiplication by the independent variable
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1579-1584
We study the conditions of equivalence of two operators obtained as perturbations of the operator of multiplication by the independent variable by certain Volterra operators in the space of functions analytic in an arbitrary domain of the complex plane starlike with respect to the origin. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9065690040588379, "perplexity": 471.1141308089562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879537.28/warc/CC-MAIN-20201022111909-20201022141909-00191.warc.gz"} |
https://shreddercenter.blog/2018/04/28/fourier-fun/ | # Fourier Fun
### Fourier Series
Allows us to expand any periodic function on the range $(-L,L)$ in terms of sinusoidal functions that are periodic on that interval:
$f(x)=\sum _{ n=0 }^{ \infty }{ A_{ n }\cos \left(\frac { n\pi x }{ L } \right) } +\sum _{ n=0 }^{ \infty }{ B_{ n }\sin \left(\frac { n\pi x }{ L } \right) }$
• Recall Euler’s formula: $e^z = e^{s+it} = e^s e^{it} = e^{s}(\cos(t)+i\sin(t))$
• Euler’s identity gives $e^{i\pi} + 1 = 0$ (i.e., $e^{i\pi}=-1$)
• This is a consequence of Euler’s formula:
• $e^{ix} = \cos x + i \sin x$ ⇒ let $x = \pi$: then $e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1 + i(0) = -1$
• Since the sines and cosines can be combined into a complex exponential, we can use this equivalency to simplify $f(x)$ into a single term:
$f(x) = \sum_{n=-\infty}^{\infty}{a_n e^{in\pi x/L}}$
where $a_n=\frac{1}{2L} \int _{ -L}^{L}{f(x)e^{-in\pi x/L}dx}$
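As a quick numerical sanity check of the coefficient formula (the square wave below is just an assumed test function):

```python
import numpy as np

L = np.pi
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]
f = np.sign(x)                    # assumed test function on (-L, L)

def a(n):
    # a_n = (1/2L) * integral_{-L}^{L} f(x) e^{-i n pi x / L} dx, via a Riemann sum
    return np.sum(f * np.exp(-1j * n * np.pi * x / L)) * dx / (2 * L)

print(a(0), a(2))                 # ~0: even-n coefficients vanish for this odd f
print(a(1), -2j / np.pi)          # odd n: a_n = -2i/(n pi); the numbers agree closely
```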
### Review of sine and cosine:
#### Cosine Function
• Even function ⇒ $\cos(-x) = \cos(x)$ ⇒ symmetric
• $\int _{-\pi}^{\pi}{\cos(\theta)d\theta} = 0$
#### Sine Function
• Odd function ⇒ $\sin(-x) = -\sin(x)$ ⇒ antisymmetric
• $\int _{-\pi}^{\pi}{\sin(\theta)d\theta} = 0$
### The Fourier Transform
Note: a majority of these equations I got from the first reference I listed
Let $g(t)$ be a function of time and $G(\omega)$ be a function of frequency
• Aside: $\omega \equiv 2\pi \upsilon$ where $\omega =$ angular frequency and $\upsilon =$ oscillation frequency
• Then, the FT of $g(t)$ (if it exists) is …
$G(\omega) = \mathcal{F} \{g(t) \} = \sqrt { \frac { |b| }{ (2\pi )^{ 1-a } } } \int _{ -\infty }^{ \infty }{g(t)e^{ib\omega t}dt}$
$g(t) = \mathcal{F}^{-1} \{G(\omega) \} = \sqrt { \frac { |b| }{ (2\pi )^{ 1+a } } } \int _{ -\infty }^{ \infty }{G(\omega)e^{-ib\omega t}d\omega}$
#### Some common parameter choices
• Physics and Mathematica default: $a = 0, b =1$
$G(\omega) = \sqrt { \frac { |1| }{ (2\pi )^{ 1-0 } } } \int _{ -\infty }^{ \infty }{g(t)e^{i(1)\omega t}dt} = \sqrt { \frac { 1 }{ (2\pi ) } } \int _{ -\infty }^{ \infty }{g(t)e^{i\omega t}dt}$
$G(\omega) = \sqrt { \frac { 1 }{ (2\pi ) } } \int _{ -\infty }^{ \infty }{g(t)e^{i\omega t}dt}$
• Pure mathematics and systems engineering: $a=1, b =-1$
$G(\omega) = \sqrt { \frac { |-1| }{ (2\pi )^{ 1-1 } } } \int _{ -\infty }^{ \infty }{g(t)e^{i(-1)\omega t}dt} = \sqrt { \frac { 1 }{ (2\pi )^0 } } \int _{ -\infty }^{ \infty }{g(t)e^{-i\omega t}dt} = \int _{ -\infty }^{ \infty }{g(t)e^{-i\omega t}dt}$
$G(\omega) = \int _{ -\infty }^{ \infty }{g(t)e^{-i\omega t}dt}$
• Classical physics: $a=-1, b=1$
$G(\omega) = \sqrt { \frac { |1| }{ (2\pi )^{ 1-(-1) } } } \int _{ -\infty }^{ \infty }{g(t)e^{i(1)\omega t}dt} = \sqrt { \frac { 1 }{ (2\pi )^2 } } \int _{ -\infty }^{ \infty }{g(t)e^{i\omega t}dt} = \frac{1}{2\pi }\int _{ -\infty }^{ \infty }{g(t)e^{i\omega t}dt}$
$G(\omega) = \frac{1}{2\pi }\int _{ -\infty }^{ \infty }{g(t)e^{i\omega t}dt}$
• Signal processing: $a = 0, b = -2\pi$
$G(\omega) = \sqrt { \frac { |-2\pi| }{ (2\pi )^{ 1-(0) } } } \int _{ -\infty }^{ \infty }{g(t)e^{i(-2\pi)\omega t}dt} = \sqrt { \frac { 2\pi }{ 2\pi} } \int _{ -\infty }^{ \infty }{g(t)e^{-i2\pi \omega t}dt} = \int _{ -\infty }^{ \infty }{g(t)e^{-i2\pi \omega t}dt}$
$G(\omega) = \int _{ -\infty }^{ \infty }{g(t)e^{-i2\pi \omega t}dt}$
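The four conventions really do differ only by bookkeeping. A brute-force check on a Gaussian $g(t)=e^{-t^2/2}$ (the test function, grid, and test frequency are assumptions for illustration):

```python
import numpy as np

t = np.linspace(-30, 30, 200001)
dt = t[1] - t[0]
g = np.exp(-t**2 / 2)
w = 0.3                                   # arbitrary test frequency

def ft(a, b):
    # G(w) = sqrt(|b| / (2*pi)^(1-a)) * integral g(t) exp(i*b*w*t) dt, by quadrature
    pref = np.sqrt(abs(b) / (2 * np.pi) ** (1 - a))
    return (pref * np.sum(g * np.exp(1j * b * w * t)) * dt).real

print(ft(0, 1),          np.exp(-w**2 / 2))                          # physics / Mathematica
print(ft(1, -1),         np.sqrt(2 * np.pi) * np.exp(-w**2 / 2))     # pure mathematics
print(ft(-1, 1),         np.exp(-w**2 / 2) / np.sqrt(2 * np.pi))     # classical physics
print(ft(0, -2 * np.pi), np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * w**2))  # signal processing
```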
### Fourier Transform in Quantum Mechanics
Note: most of this material comes from the second link listed under my sources.
#### Conjugate pairs
• A conjugate pair is a pair of variables that are related to one another via the FT.
• Two conjugate pairs that exist in nature are:
1. Time ($t$) and frequency ($\upsilon$):
• $G(\upsilon) = \mathcal{F} \{ g(t) \} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}{e^{-i\upsilon t}g(t)dt}$
• $g(t) = \mathcal{F}^{-1} \{ G(\upsilon) \} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}{e^{i\upsilon t}G(\upsilon) d\upsilon }$
2. Position ($x$) and momentum ($\rho$):
• $\phi(\rho) = \mathcal{F} \{ \Psi(x) \} = \frac{1}{\sqrt{2\pi \hbar}} \int_{-\infty}^{\infty}{e^{-i\rho x/ \hbar} \Psi (x)dx}$
• $\Psi(x) = \mathcal{F}^{-1} \{ \phi (\rho) \} = \frac{1}{\sqrt{2\pi \hbar}} \int_{-\infty}^{\infty}{e^{i\rho x/ \hbar} \phi (\rho) d\rho}$
http://mathhelpforum.com/calculus/36558-definitely-integral.html | # Math Help - Definitely Integral
1. ## Definitely Integral
To all Integral lovers,
In the Spirit of Integration....here is a popular integral
$\int\limits_0^{\frac {\pi }{4}} {\ln \left( {1 + \tan x} \right)dx}$
Well Enjoy,
Isomorphism
P.S: Elegant methods can score a lot of 'thank you's
2. Originally Posted by ThePerfectHacker
I will wait for some different way of doing it. I have a solution that looks more elegant... but it is almost the same as you did....
3. OK i will post my solution since nobody else replied
$I = \int\limits_0^{\frac {\pi }{4}} {\ln \left( {1 + \tan x} \right)dx}$
$I = \int\limits_0^{\frac {\pi }{4}} {\ln \left( {1 + \tan \left(\frac{\pi}4 - x\right)} \right)dx}$
$I =\int\limits_0^{\frac {\pi }{4}} \ln \frac2{1 + \tan x} dx$
$I = \int\limits_0^{\frac {\pi }{4}}\ln 2 \,dx - I$
So $I = \frac{\pi}8 \ln2$
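(A quick scipy spot-check, for anyone who wants to see the number come out:)

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: np.log(1 + np.tan(x)), 0, np.pi / 4)
print(value, np.pi / 8 * np.log(2))   # both ~ 0.272198
```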
Part 2:
$\int^1_0\frac{x^4(1-x)^4}{1+x^2} dx$
This is a funny integral... not hard... but the answer is interesting
People who solve it can comment on the answer rather than showing the answer
4. Originally Posted by Isomorphism
OK i will post my solution since nobody else replied
$I = \int\limits_0^{\frac {\pi }{4}} {\ln \left( {1 + \tan x} \right)dx}$
$I = \int\limits_0^{\frac {\pi }{4}} {\ln \left( {1 + \tan \left(\frac{\pi}4 - x\right)} \right)dx}$
$I =\int\limits_0^{\frac {\pi }{4}} \ln \frac2{1 + \tan x} dx$
$I = \int\limits_0^{\frac {\pi }{4}}\ln 2 \,dx - I$
So $I = \frac{\pi}8 \ln2$
Part 2:
$\int^1_0\frac{x^4(1-x)^4}{1+x^2} dx$
This is a funny integral... not hard... but the answer is interesting
People who solve it can comment on the answer rather than showing the answer
Haha...but how close is that approximation to the real thing...is it less or more making this positive or negative haha
5. Originally Posted by Mathstud28
Haha...but how close is that approximation to the real thing...is it less or more making this positive or negative haha
Hmm actually this is used to approximate 'it' by the use of sandwiching the integral between two series :P
6. I thought of converting this thread into an "integral collection of my life". So I will be posting integrals here that seem nice to me. Anyone who thinks (s)he has a nice solution, can of course post them here... Thank you
The next one in the set:
Part 3:
$\int_0^\infty \frac{dx}{1+x^4}$
7. Originally Posted by Isomorphism
Part 3:
$\int_0^\infty \frac{dx}{1+x^4}$
If I remember correctly: $\int_0^{\infty} \frac{dx}{x^n+1} = \frac{(\pi/n)}{\sin (\pi/n)}$ where $n\geq 2$.
8. Originally Posted by ThePerfectHacker
If I remember correctly: $\int_0^{\infty} \frac{dx}{x^n+1} = \frac{(\pi/n)}{\sin (\pi/n)}$ where $n\geq 2$.
Can you prove that without complex analysis(that is without using residues)?
9. Originally Posted by Isomorphism
Can you prove that without complex analysis(that is without using residues)?
Do you mean by decomposition on the bottom or do you mean by using something else?
10. $\int_0^\infty \frac{dx}{1+x^4}$ can be solved without complex numbers. I used only pure substitution and algebraic tricks
P.S: Unfortunately what I did only holds for this problem . It does not work for TPH's generalization.
11. Originally Posted by Isomorphism
Can you prove that without complex analysis(that is without using residues)?
Combine Gamma & Beta functions plus the Euler Reflection Formula.
12. Originally Posted by Krizalid
Combine Gamma & Beta functions plus the Euler Reflection Formula.
huh?
13. Let $\int_{0}^{\infty }{\frac{1}{1+x^{n}}\,dx}.$ Make the substitution $z=x^n,$ hence
\begin{aligned}
\int_{0}^{\infty }{\frac{1}{1+x^{n}}\,dx}&=\frac{1}{n}\int_{0}^{\infty }{\frac{z^{1/n-1}}{1+z}\,dz}\\
&=\frac{1}{n}\beta \left(\frac{1}{n},1-\frac{1}{n}\right)\\
&=\frac{1}{n}\cdot \frac{\Gamma \left( n^{-1}\right)\Gamma\left( 1-n^{-1} \right)}{\Gamma (1)}\\
&=\frac{\pi /n}{\sin (\pi /n)} \quad \text{(by the Euler Reflection Formula).}
\end{aligned}
Note aside: I've heard that the Reflection Formula can be only proved via complex analysis. It's a dream to see a solution with elementary tools.
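A quick numerical check of the closed form, for anyone who wants to verify it without any complex analysis:

```python
import numpy as np
from scipy.integrate import quad

for n in (2, 3, 4, 7):
    numeric, _ = quad(lambda x, n=n: 1 / (1 + x**n), 0, np.inf)
    closed = (np.pi / n) / np.sin(np.pi / n)
    print(n, numeric, closed)   # the two columns match for every n >= 2 tried
```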
14. Here is the latest update of "Definitely Integral"
Compute I:
$\boxed{I = \int_0^{\pi/4}\ln(\sin x) \cdot \ln(\cos x)\,dx}$
P.S: I think it's quite hard
https://socratic.org/questions/how-do-you-solve-for-g-in-t-2pisqrt-l-g | Physics
Topics
# How do you solve for g in T=2pisqrt(L/g)?
Aug 10, 2016
$g = \frac{4 {\pi}^{2} L}{T^{2}}$
#### Explanation:
$\textcolor{red}{\text{First,}}$ what you want to do is square both sides of the equation to get rid of the square root, since squaring a number is the inverse of taking the square root of a number:
${T}^{2} = {\left(2 \pi\right)}^{2} \sqrt{{\left(\frac{L}{g}\right)}^{2}}$
${T}^{2} = {\left(2 \pi\right)}^{2} \left(\frac{L}{g}\right)$
$\textcolor{blue}{\text{Second,}}$ we simplify the ${\left(2 \pi\right)}^{2}$ part so it's easier to read. Remember, when you square something in parentheses, you are squaring every single term inside:
${T}^{2} = 4 {\pi}^{2} \left(\frac{L}{g}\right)$
$\textcolor{purple}{\text{third,}}$ multiply both sides by $g$:
$g \times {T}^{2} = 4 {\pi}^{2} \left(\frac{L}{\cancel{\text{g}}}\right) \times \cancel{g}$
$g \times {T}^{2} = 4 {\pi}^{2} L$
$\textcolor{maroon}{\text{finally,}}$ divide both sides by ${T}^{2}$ to get $g$ by itself:
$\frac{g \times \cancel{{T}^{2}}}{\cancel{{T}^{2}}} = \frac{4 {\pi}^{2} L}{T^{2}}$
Thus, $g$ is equal to:
$g = \frac{4 {\pi}^{2} L}{T^{2}}$
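A tiny check of the rearranged formula (the length and period below are made-up example numbers):

```python
import math

def g_from_pendulum(L, T):
    """g = 4*pi^2*L / T^2, rearranged from T = 2*pi*sqrt(L/g)."""
    return 4 * math.pi**2 * L / T**2

print(g_from_pendulum(1.00, 2.006))   # ~9.81 m/s^2 for a 1.00 m pendulum with T = 2.006 s (assumed)
```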
https://brilliant.org/discussions/thread/quadratics-and-their-graphs/ | ×
# A simple proof ?
If $$f(x)$$ is a quadratic function with $$2$$ real roots, prove that the points where the graph of $$y=f(x)$$ cuts the $$x-$$ axis are equidistant from every point on the axis of symmetry of $$f(x)$$.
Note by Karthik Venkata
2 years, 6 months ago
Consider the function to be $F(x) = ax^2 + bx + c$. Given that the roots are real, two cases arise:
(1) * they are equal *
i.e they both lie on axis of symmetry and share the same point hence are equidistant.
(2) they are distinct
the formula for roots is given by $x = \frac{-b}{2a} \pm \frac{\sqrt{b^2-4ac}}{2a}$ ......... {1}
which shows that equal amounts of $\frac{\sqrt{b^2-4ac}}{2a}$ is added and subtracted from the axis of symmetry giving rise to the roots. {this tells that the x coordinates of roots are placed equally apart from axis of symmetry}
they lie on the same horizontal axis {implies y coordinate w.r.t point on axis of symmetry is same} hence roots are equidistant.
- 2 years, 5 months ago
You are really close to the actual proof, you just need to prove that the quadratic has its symmetric axis given by the plot of the equation $$x = \dfrac{-b}{2a}$$ in the $$xy$$ plane.
- 2 years, 5 months ago
oops i considered that to be true ........ we can write F(x) as $F(x) = a\left(x+\frac{b}{2a} \right)^2 - \frac{b^2 - 4ac}{4 a}$. Comparing this with the graph of $f(x) = ax^2 + k$, which has its symmetry at x=0 {the y axis}, after a translation by $\frac{b}{2a}$ the symmetry shifts to $x=\frac{-b}{2a}$
- 2 years, 5 months ago
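A symbolic check of both steps at once (sympy recovers the quadratic formula, and each root sits at distance $\frac{\sqrt{b^2-4ac}}{2a}$ from the axis $x=\frac{-b}{2a}$):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', real=True)
roots = sp.solve(a * x**2 + b * x + c, x)    # quadratic formula, assuming a != 0
axis = -b / (2 * a)

print([sp.simplify(r - axis) for r in roots])
# the two offsets are +/- sqrt(b**2 - 4*a*c)/(2*a): equal and opposite,
# so the two x-intercepts are mirror images in the axis of symmetry.
```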
Well excellent proof Abhinav Raichur !
- 2 years, 5 months ago
thank you :)
- 2 years, 5 months ago
https://wiki.math.ucr.edu/index.php/Implicit_Differentiation | # Implicit Differentiation
## Background
So far, you may only have differentiated functions written in the form ${\displaystyle y=f(x)}$. But some functions are better described by an equation involving ${\displaystyle x}$ and ${\displaystyle y}$. For example, ${\displaystyle x^{2}+y^{2}=16}$ describes the graph of a circle with center ${\displaystyle \left(0,0\right)}$ and radius 4, and is really the graph of two functions ${\displaystyle y=\pm {\sqrt {16-x^{2}}}}$, the upper and lower semicircles:
Sometimes, functions described by equations in ${\displaystyle x}$ and ${\displaystyle y}$ are too hard to solve for ${\displaystyle y}$, for example ${\displaystyle x^{3}+y^{3}=6xy}$. This equation really describes 3 different functions of x, whose graph is the curve:
We want to find derivatives of these functions without having to solve for ${\displaystyle y}$ explicitly. We do this by implicit differentiation. The process is to take the derivative of both sides of the given equation with respect to ${\displaystyle x}$, and then do some algebra steps to solve for ${\displaystyle y'}$ (or ${\displaystyle {\dfrac {dy}{dx}}}$ if you prefer), keeping in mind that ${\displaystyle y}$ is a function of ${\displaystyle x}$ throughout the equation.
## Warm-up exercises
Given that ${\displaystyle y}$ is a function of ${\displaystyle x}$, find the derivative of the following functions with respect to ${\displaystyle x}$.
1) ${\displaystyle y^{2}}$
Solution:
${\displaystyle 2yy'}$
Reason:
Think ${\displaystyle y=f(x)}$, and view it as ${\displaystyle (f(x))^{2}}$ to see that the derivative is ${\displaystyle 2f(x)\cdot f'(x)}$ by the chain rule, but write it as ${\displaystyle 2yy'}$.
2) ${\displaystyle xy}$
Solution:
${\displaystyle xy'+y}$
Reason:
${\displaystyle x}$ and ${\displaystyle y}$ are both functions of ${\displaystyle x}$ which are being multiplied together, so the product rule says it's ${\displaystyle x\cdot y'+y\cdot 1}$.
3) ${\displaystyle \cos y}$
Solution:
${\displaystyle -y'\sin y}$
Reason:
The function ${\displaystyle y}$ is inside of the cosine function, so the chain rule gives ${\displaystyle (-\sin y)\cdot y'=-y'\sin y}$.
4) ${\displaystyle {\sqrt {x+y}}}$
Solution:
${\displaystyle {\frac {1+y'}{2{\sqrt {x+y}}}}}$
Reason:
Write it as ${\displaystyle (x+y)^{\frac {1}{2}}}$, and use the chain rule to get ${\displaystyle {\frac {1}{2}}\left(x+y\right)^{-{\frac {1}{2}}}\cdot \left(1+y'\right)}$, then simplify.
## Exercise 1: Compute y'
Find ${\displaystyle y'}$ if ${\displaystyle \sin y-3x^{2}y=8}$.
Note the ${\displaystyle \sin y}$ term requires the chain rule, the ${\displaystyle 3x^{2}y}$ term needs the product rule, and the derivative of 8 is 0.
We get
${\displaystyle {\begin{array}{rcl}\sin y-3x^{2}y&=&8\\\left(\cos y\right)y'-\left(3x^{2}y'+6xy\right)&=&0\quad ({\text{derivative of both sides with respect to }}x)\\\left(\cos y\right)y'-3x^{2}y'&=&6xy\\\left(\cos y-3x^{2}\right)y'&=&6xy\\y'&=&{\dfrac {6xy}{\cos y-3x^{2}}}.\end{array}}}$
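The same computation can be done symbolically; here is a short sympy sketch of Exercise 1, treating ${\displaystyle y}$ as a function ${\displaystyle y(x)}$:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

F = sp.sin(y) - 3 * x**2 * y - 8           # the relation F(x, y) = 0
dF = sp.diff(F, x)                          # differentiate both sides with respect to x
yprime = sp.solve(dF, sp.diff(y, x))[0]     # solve the resulting linear equation for y'
print(sp.simplify(yprime))                  # 6*x*y(x)/(cos(y(x)) - 3*x**2)
```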
## Exercise 2: Find equation of tangent line
Find the equation of the tangent line to ${\displaystyle x^{2}+2xy-y^{2}+x=2}$ at the point ${\displaystyle \left(1,0\right)}$.
We first compute ${\displaystyle y'}$ by implicit differentiation.
${\displaystyle {\begin{array}{rcl}x^{2}+2xy-y^{2}+x&=&2\\2x+2xy'+2y-2yy'+1&=&0\\x+xy'+y-yy'+{\frac {1}{2}}&=&0\\xy'-yy'&=&-x-y-{\frac {1}{2}}\\(x-y)y'&=&-(x+y+{\frac {1}{2}})\\y'&=&-{\dfrac {x+y+{\frac {1}{2}}}{x-y}}\end{array}}}$
At the point ${\displaystyle \left(1,0\right)}$, we have ${\displaystyle x=1}$ and ${\displaystyle y=0}$. Plugging these into our equation for ${\displaystyle y'}$ gives
${\displaystyle {\begin{array}{rcl}y'&=&-{\dfrac {1+0+{\frac {1}{2}}}{1-0}}=-{\frac {3}{2}}.\\\end{array}}}$
This means the slope of the tangent line at ${\displaystyle \left(1,0\right)}$ is ${\displaystyle m=-{\frac {3}{2}}}$, and a point on this line is ${\displaystyle \left(1,0\right)}$. Using the point-slope form of a line, we get
${\displaystyle {\begin{array}{rcl}y-0&=&-{\frac {3}{2}}\left(x-1\right)\\\\y&=&-{\frac {3}{2}}x+{\frac {3}{2}}.\\\end{array}}}$
Here's a picture of the curve and tangent line:
## Exercise 3: Compute y"
Find ${\displaystyle y''}$ if ${\displaystyle ye^{y}=x}$.
Use implicit differentiation to find ${\displaystyle y'}$ first:
${\displaystyle {\begin{array}{rcl}ye^{y}&=&x\\ye^{y}y'+y'e^{y}&=&1\\y'\left(ye^{y}+e^{y}\right)&=&1\\y'&=&{\dfrac {1}{ye^{y}+e^{y}}}\\&=&\left(ye^{y}+e^{y}\right)^{-1}\end{array}}}$
Now ${\displaystyle y''}$ is just the derivative of ${\displaystyle \left(ye^{y}+e^{y}\right)^{-1}}$ with respect to ${\displaystyle x}$. This will require the chain rule. Notice we already found the derivative of ${\displaystyle ye^{y}}$ to be ${\displaystyle ye^{y}y'+y'e^{y}}$.
So
${\displaystyle {\begin{array}{rcl}y''&=&-1\left(ye^{y}+e^{y}\right)^{-2}\left(ye^{y}y'+y'e^{y}+e^{y}y'\right)\\\\&=&{\dfrac {-1}{\left(ye^{y}+e^{y}\right)^{2}}}\left(ye^{y}y'+2y'e^{y}\right)\\\\&=&-{\dfrac {y'e^{y}\left(y+2\right)}{\left(e^{y}\right)^{2}\left(y+1\right)^{2}}}\\\\&=&-{\dfrac {y'\left(y+2\right)}{e^{y}\left(y+1\right)^{2}}}\quad ({\text{since }}e^{y}\neq 0).\end{array}}}$
But we mustn't leave ${\displaystyle y'}$ in our final answer. So, plug ${\displaystyle y'={\dfrac {1}{e^{y}\left(y+1\right)}}}$ back in to get
${\displaystyle {\begin{array}{rcl}y''&=&-{\dfrac {{\frac {1}{e^{y}\left(y+1\right)}}\left(y+2\right)}{e^{y}\left(y+1\right)^{2}}}\\\\&=&-{\dfrac {y+2}{\left(e^{y}\right)^{2}\left(y+1\right)^{3}}}\end{array}}}$
http://www.talkstats.com/threads/standard-deviation.4723/ | standard deviation
saige30
New Member
I have a problem I'm struggling to figure out how to compute....could someone walk me though the problem and explain how to figure out the answer? I'm really not even sure where to start on these types of problems. Thanks!
problem: The average sales representative for Insmed pharmaceutical company earns $124,000 in sales incentives with a standard deviation of $32,000. Assuming that the salaries are normally distributed, 90% of the salaries will be above what dollar mark?
Last edited:
Dragan
Super Moderator
I have a problem I'm struggling to figure out how to compute....could someone walk me though the problem and explain how to figure out the answer? I'm really not even sure where to start on these types of problems. Thanks!
problem: The average sales representative for Insmed pharmaceutical company earns $124,000 in sales incentives with a standard deviation of $32,000. Assuming that the salaries are normally distributed, 90% of the salaries will be above what dollar mark?
Use the Z-tranformation: Z = (X - Mu)/Sigma.
You know that Mu = 124, Sigma = 32, and from the Z-table Z = -1.28. Solving for X we obtain X = 32*(-1.28) + 124 = your answer... (multiply your answer by 1000 to get the dollar amount).
saige30
z score
thanks so much...my answer is 90% of scores will be over $83,040. Is that correct? also- I knew the equation already, but I wasn't sure how to find the z-score. I have the table in my book, but my professor didn't discuss how to use it. Can you offer any help there? Thanks!

Dragan

Super Moderator

I have the table in my book, but my professor didn't discuss how to use it. Can you offer any help there? Thanks!

Your question would be much easier for me to address if I had your textbook and then I could just (simply) show you....but I don't. How about this...Why not ask your professor to show you how to use the table(s) - since it seems to be the case that you're being required to solve these problems.

saige30

New Member

i would if only he had office hours from when he assigned the practice through tomorrow (our test) but he's out of the office thanks though for your help!!

Mean Joe

TS Contributor

problem: The average sales representative for Insmed pharmaceutical company earns $124,000 in sales incentives with a standard deviation of $32,000. Assuming that the salaries are normally distributed, 90% of the salaries will be above what dollar mark?
You want the area under the curve = .90. This is a one-tail problem; the z-score corresponding to a tail area of .10 is z=1.282 BUT since we want the area ABOVE the value, we use z=-1.282
(Look at your insurance problem where they wanted to cover 85% BELOW the specific value and you used the positive z. Understand when to use positive z and when to use negative z? You won't get the right answer if you don't.)
Using the formula (for one-tail): z = (x - mean) / stdev, we can solve x=82,976.
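For anyone checking with software instead of a printed table, scipy gives the same cut-off to within the table's resolution:

```python
from scipy.stats import norm

mu, sigma = 124_000, 32_000
cutoff = norm.ppf(0.10, loc=mu, scale=sigma)   # 10% of salaries below, 90% above
print(cutoff)                                   # ~82,990 (z = -1.2816; table values give ~83,000)
```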
https://www.physicsforums.com/threads/momentum-of-two-objects.138693/ | # Homework Help: Momentum of two objects
1. Oct 16, 2006
### map7s
Object 1 has a mass m1 and a velocity v1 = (2.78 m/s) in the x direction. Object 2 has a mass m2 and a velocity v2 = (3.21 m/s) in the y direction. The total momentum of these two objects has a magnitude of 16.8 kgm/s and points in a direction 66.5° above the positive x-axis. Find m1 and m2.
I tried doing p=mv for each one but it didn't work. I think that it has something to do with the resulting angle, but I don't know how to incorporate that into the equation.
2. Oct 16, 2006
### jamesrc
Write a little more about what you did. You should be trying to break up the total momentum into x and y components (using trigonometry). Then you can equate the momentum of object 1 with the component of the total momentum in the x-direction and the momentum of object 2 with the component of the total momentum in the y-direction.
3. Oct 16, 2006
All you have to do is solve the vector equation: $$\vec{p} = \vec{p}_{1} + \vec{p}_{2}$$, where p is the total momentum, and p1 and p2 are the momentum of masses 1 and 2, respectively.
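Carrying that out numerically, using just the component equations m1*v1 = p cos(theta) and m2*v2 = p sin(theta):

```python
import math

p, theta = 16.8, math.radians(66.5)   # total momentum (kg m/s) and angle above the +x axis
v1, v2 = 2.78, 3.21                   # m/s, along x and y respectively

m1 = p * math.cos(theta) / v1         # x components: m1*v1 = p*cos(theta)
m2 = p * math.sin(theta) / v2         # y components: m2*v2 = p*sin(theta)
print(m1, m2)                         # ~2.41 kg and ~4.80 kg
```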
https://physics.stackexchange.com/questions/309789/dependence-of-fluid-pressure-being-on-height-and-not-mass | # Dependence of fluid pressure being on height and not mass
Fluid pressure is calculated as
$$\rho = hdg$$
where $\rho$ is fluid pressure, $h$ is depth from surface, $d$ is density of fluid and $g$ is acceleration of gravity.
It used to make sense to me for square or rectangle shaped containers since pressure means force per unit area.
$$\text{Pressure} = \frac{\text{Force}}{\text{Area}} = \frac{\text{Fluid Weight}}{\text{Container's Bottom Area}}$$
and
$$h \times d \times g= \frac{h\times m \times g}{v} = \frac{h\times m \times g}{h \times s} = \frac{m \times g}{s} = \frac{\text{Fluid Weight}}{\text{Container's Bottom Area}}$$
where $m$ is mass, $v$ is volume and $s$ is bottom area.
I thought that pressure being dependent on $h$ and not $m$ would be acceptable since $m$ is also dependent on $h$ since fluids can't go higher when they have spaces to their sides.
One big obvious mistake I made was to assume that all containers would be squares or rectangles. So, now I can't wrap my mind around the concept that pressures on tennis balls in the following image are equal.
Why are they equal when the weight (and mass) of fluid above them is obviously different?
• PS: I found a very similar question on the site before asking the question and I even took the image from there however, OP of that question asks something slightly different based on a thought experiment and it hasn't got an answer that answers my question so please don't mark as duplicate. – SarpSTA Feb 4 '17 at 23:33
• Pressure is weight over surface. Is this ratio really higher in the picture on the right? Why would that be the case? – Phoenix87 Feb 4 '17 at 23:41
• In the left diagram, you can think of the situation as a ball in a little cubical chamber being pressurized by a long, heavy piston with a small cross-sectional area (i.e., the narrow column of water). In the right diagram, you have the another ball in a cubical chamber but now the chamber is pressurized by an even heavier piston than the the one on the left AND this heavier piston also has a larger cross-sectional area than the piston on the left. So which chamber has a higher pressure? Sure, the piston on the right is pushing with more force, but that force is pushing over a larger area. – Samuel Weir Feb 4 '17 at 23:56
What you have ignored is the effect of the container walls.
The liquid is in equilibrium each particle has no net force on it.
Whatever force the liquid above a small region of the liquid exerts on that region a region of the liquid at the same horizontal level has the same force exerted by the container wall.
So the region of liquid has the same force on it irrespective of whether there is liquid or a container wall above it.
So your analysis failed to include the contribution to the force due to the container wall.
Make a hole in the wall and the force due to the wall ceases to exist and liquid comes out of the hole.
Let us consider the picture you provided (remove the tennis balls, they're useless here), and at a certain depth $h$ from the free surface of both tanks, connect them by another pipe as shown.
NOTE: both the free surfaces of the tanks are at the same distance from the pipe, which is horizontal.
If possible, let us assume that the pressures at the given depth $h$ are different in the tanks. Then water would flow from the right to the left (under the assumption that greater mass implies greater pressure). Also assume that after transportation, no water is spilled over the brim of the tanks. Now, at equilibrium the horizontal pipe is at a distance of $h_1$ from the free surface of the left tank and at $h_2$ from the free surface of the right tank ($h_1 > h_2$). Consider an infinitesimally small fluid cube A of side $dh$ in the left tank, at the same level as the pipe. Let the force here be $\vec F = F_x + F_y + F_z$.
Clearly, under equilibrium (taking the z-axis to be vertical):
$(F_x + dF_x)-F_x = 0$.............1)
$(F_y + dF_y) -F_y = 0$............2)
$(F_z + dF_z) - F_z = dV.\rho.g$.....3)[$dV = dA.dh$ is the volume of the fluid element.]
Clearly, from 1) and 2), we can see that $dP_x = \frac{dF_x}{dA} = 0$ and similarly, $dP_y = 0$
From 3), we have $dF_z = \rho.g.dA.dh$
or, integrating, $P_z = \rho.g.h + C$. Clearly, $C = P_a$ (atmospheric pressure)
So, for the left tank, $P_z = \rho.g.h_1 + P_a$, while for the right tank it is
$P_z = \rho.g.h_2 + P_a$ [assuming the change in $P_a$ is negligible over $h_1 - h_2$].
But this means that the pressures are not equal at the same depth in the left and right tanks, and neither are the forces on the fluid elements. Thus, there will be further flow to attain equilibrium, net force on the fluid elements should be zero. This contradicts the assumption that equilibrium had been reached at $h_2$ and $h_1$. Hence, our primary assumption must also be false. i.e: there cannot be any water flow when we initially connect the pipes. Thus the pressures must also be equal at the same depth, whatever the shape of the tank. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040279984474182, "perplexity": 367.265151350152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00230.warc.gz"} |
https://math.stackexchange.com/questions/2315998/check-that-operator-is-linear-transformation-and-find-matrix-in-the-same-basis | Check that operator is linear transformation and find matrix in the same basis
Now I have:
$$\varphi\mathbf{x}=(x_2+x_3,2x_1+x_3,3x_1-x_2+x_3)$$
How do I check that this transformation is linear, and also find its matrix?
As for the matrix, the basis is not given, so it should be the standard one: $$e_1=\langle1,0,0\rangle, e_2=\langle0,1,0\rangle, e_3=\langle0,0,1\rangle$$
As far as I know I have to put all the components into the matrix, like:
$$A =\begin{pmatrix}x_2+x_3 \\ 2x_1+x_3 \\ 3x_1-x_2+x_3 \end{pmatrix}$$
But I do not know how to proceed with given $e_{1,2,3}$ basis.
$$A_{\varphi} = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ 3 & -1 & 1 \end{pmatrix}$$
But as I've said above, I do not know how to get the right result
• First tell what is the definition space and what the codomain...! And "to find its matrix"...with respect to what basis ? – DonAntonio Jun 9 '17 at 12:35
• @Bye_World Good...and I guess the space is $\;\Bbb R^3\;$ both as domain and codomain. Let us let him confirm. Poor worded questions cause problems... – DonAntonio Jun 9 '17 at 12:39
• @Bye_World And I don't see why you keep on intervening if the OP hasn't even said half a word, and I don't care who and why gave him the question: it is still a poorly worded one. Shall we wait until the OP address the questions or you intend to continue writing? – DonAntonio Jun 9 '17 at 12:41
On linearity
In general, to show that a function $T:V\to W$ between real vector spaces $V$ and $W$ is linear, you need to show that it is
• Homogeneous. I.e. for all $\mathbf v\in V$ and for all $k\in \Bbb R$, $T(k\mathbf v) = kT(\mathbf v)$.
• Additive. I.e. for all $\mathbf v_1, \mathbf v_2 \in V$, $T(\mathbf v_1+\mathbf v_2) = T(\mathbf v_1) + T(\mathbf v_2)$.
In this particular case, showing homogeneity means proving (or disproving) that for all real $k$: $$\varphi(k\mathbf x) = \varphi\big(k(x_1,x_2,x_3)\big) = k\varphi(\mathbf x)$$ and showing additivity means (dis)proving that $$\varphi(\mathbf x +\mathbf y) = \varphi\big((x_1,x_2,x_3)+(y_1,y_2,y_3)\big) = \varphi(\mathbf x) + \varphi(\mathbf y)$$
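As a quick sanity check (not a proof, since it only tests a handful of vectors), one can evaluate both sides of these two conditions numerically; the function name `phi` and the use of NumPy below are illustrative choices, not part of the original question.

```python
import numpy as np

def phi(x):
    """The map from the question: (x1, x2, x3) -> (x2+x3, 2*x1+x3, 3*x1-x2+x3)."""
    x1, x2, x3 = x
    return np.array([x2 + x3, 2 * x1 + x3, 3 * x1 - x2 + x3])

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
k = rng.normal()

print(np.allclose(phi(k * x), k * phi(x)))       # homogeneity holds on this sample
print(np.allclose(phi(x + y), phi(x) + phi(y)))  # additivity holds on this sample
```

The actual proof still has to be done symbolically, exactly as outlined above; a numerical check can only ever disprove linearity by finding a counterexample, never establish it.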
On the Matrix Representation
Hint:
$$A =\begin{pmatrix}x_2+x_3 \\ 2x_1+x_3 \\ 3x_1-x_2+x_3 \end{pmatrix} = \pmatrix{ 0x_1 + 1x_2 + 1x_3 \\ 2x_1+0x_2+1x_3 \\ 3x_1-1x_2+1x_3}$$
Does that help?
• Aha, it explains the second part of the question; the first one is still unclear: how to check whether the transformation is linear or not – M.Mass Jun 9 '17 at 12:43
• See if my edit helps. – user137731 Jun 9 '17 at 12:59
I suppose that the transformations acts on the vector space $\mathbb{R}^3$ (over $\mathbb{R}$). If so you can prove linearity showing that: $$\varphi(\mathbf x+a\mathbf y)=\varphi (\mathbf x)+a\varphi(\mathbf y) \quad \forall a\in\mathbb{R} \quad and \quad \forall \mathbf x,\mathbf y\in\mathbb{R}^3$$
This is simple using the definition and the properties of addition and multiplication in $\mathbb {R}$.
To show that the matrix that you have found represents the transformation simply verify that:
$$A_{\varphi}\mathbf x^T =[\varphi (\mathbf x)]^T$$ That is obvious: $$\begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ 3 & -1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}=\begin{pmatrix} x_2+x_3 \\ 2x_1+x_3 \\ 3x_1-x_2+x_3 \end{pmatrix}$$
To find the matrix of a linear transformation $T$ with respect to the standard basis, first you have to know which space $T$ is going from (the "domain") and which space $T$ is sending vectors to (the "codomain"). Here you can see that the vectors in the domain have 3 components (they are $x_1,x_2,$ and $x_3$) and (if we're working with just real numbers, as is typical) that the outputs have three components. So $T$ is a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^3$. What you do is you find $T(e_1)$, $T(e_2)$, and $T(e_3)$. Those appear as the columns of the matrix.
So in this example, $T(e_1)=(0,2,3)$ (you just plug in $x_1=1,x_2=0,x_3=0$). This goes in the first column. Then find $T(e_2)$. That goes in the 2nd column. Then find $T(e_3)$. That goes in the third column.
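A small sketch of that recipe, assuming the same map as above (the helper `phi` and the NumPy usage are illustrative choices): build the columns from the images of the standard basis vectors and check the result against the formula.

```python
import numpy as np

def phi(x):
    x1, x2, x3 = x
    return np.array([x2 + x3, 2 * x1 + x3, 3 * x1 - x2 + x3])

# The j-th column of the matrix is phi applied to the j-th standard basis vector.
E = np.eye(3)
A = np.column_stack([phi(E[:, j]) for j in range(3)])
print(A)
# [[ 0.  1.  1.]
#  [ 2.  0.  1.]
#  [ 3. -1.  1.]]

# The matrix reproduces the transformation on an arbitrary vector.
x = np.array([2.0, -1.0, 4.0])
print(np.allclose(A @ x, phi(x)))   # True
```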
How to check something is a linear transformation:
Let's review what the term linear transformation means. If $V$ and $W$ are vector spaces, a linear transformation from $V$ to $W$ is a map $T$ from $V$ to $W$ such that for every two scalars $c$ and $d$, and every two vector $v_1$ and $v_2$ in $V$, $T(cv_1+dv_2)=cT(v_1)+dT(v_2)$.
(If you're taking linear algebra, it would be a good idea to memorize this.)
As Bye_World says, you can also break this down into two criteria. First, check that for every single scalar $c$ and every single vector $v$ in $V$, $T(cv)=cT(v)$. Then check that for every two vectors $v_1$ and $v_2$ in $V$, $T(v_1+v_2)=T(v_1)+T(v_2)$. I'm going to walk you down how I would do the first thing to check. In parentheses I'll write down my mental picture.
If $\textbf{v}=(v_1,v_2,v_3)$ and $c$ is any scalar, does $T(c\textbf{v})=cT(\textbf{v})$?
(I have to convert the thing on the left of the equals sign to the thing on the right. First I'll write $c\textbf{v}$ in coordinates:)
$c(v_1,v_2,v_3)=(cv_1,cv_2,cv_3)$.
(Now I need to plug that into $T$. They tell you what $T$ does, so I use that.)
$T(cv_1,cv_2,cv_3)=(cv_2+cv_3, 2cv_1+cv_3, 3cv_1-cv_2+cv_3)$.
(OK. Now I'm trying to check that this is equal to $cT(\textbf{v})$. Let me write that in coordinates.)
$cT(\textbf{v})=cT(v_1,v_2,v_3)=c(v_2+v_3,2v_1+v_3,3v_1-v_2+v_3)$ ... (So I rewrote the left hand side out and I rewrote the right hand side and I have to now check that I got the same thing. You can see that this is the same by distributing the $c$ across) $=(cv_2+cv_3,2cv_1+cv_3,3cv_1-cv_2+cv_3)$. (Now conclude:) So $T(c\textbf{v})=cT(\textbf{v})$. That's how you do the first part of checking that something is a linear transformation. I'll let you do the second part.
By the way, what, in words, is going on? Think of $T$ as a function, except instead of taking numbers to other numbers (like $f(x)=x^2$), it takes vectors to other vectors. The condition that $T(c\textbf{v})=cT(\textbf{v})$ can be visualized. If $\textbf{v}$ was a purple vector, then $T(\textbf{v})$ is where the purple vector goes. The statement says that any scalar multiple of the purple vector goes to the corresponding scalar multiple of where the purple vector goes. So for instance, 2 times the purple vector has to go to 2 times where the purple vector goes, etc. I think some linear algebra students find this explanation helpful.
• Technically we don't know that $\operatorname{dom} \varphi = \Bbb R^3$, it's just implied (which is why Don was getting all bent out of shape in the comments). It could very well be $\varphi(x_1,x_2,x_3,x_4)=\dots$ where the map just doesn't use $x_4$. But good answer. +1 I like how you walked the reader through the proof that $T(cv) = cT(v)$. Just a note on formatting, tho. If you add double dollar signs instead of single ones you get $$\text{Math Mode}$$ which can help break up your answer a little more so it doesn't look like such a wall of text. Not a big deal tho. As I said, good answer. – user137731 Jun 9 '17 at 13:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869960904121399, "perplexity": 268.40034969047787}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527089.77/warc/CC-MAIN-20190721164644-20190721190644-00427.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&pubname=all&v1=14L05&startRec=1 | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(14L05) AND publication=(all) Sort order: Date Format: Standard display
[1] S. Vostokov and V. Volkov. Explicit form of the Hilbert symbol for polynomial formal groups. St. Petersburg Math. J. 26 (2015) 785-796. Abstract, references, and article information View Article: PDF [2] Nathaniel Stapleton. Subgroups of $p$-divisible groups and centralizers in symmetric groups. Trans. Amer. Math. Soc. 367 (2015) 3733-3757. Abstract, references, and article information View Article: PDF [3] Jeffrey D. Achter. Irreducibility of Newton strata in ${GU}(1,n-1)$ Shimura varieties. Proc. Amer. Math. Soc. Ser. B 1 (2014) 79-88. Abstract, references, and article information View Article: PDF [4] Ching-Li Chai, Brian Conrad and Frans Oort. Complex Multiplication and Lifting Problems. Math. Surveys Monogr. 195 (2013) MR 3137398. Book volume table of contents [5] Haruzo Hida. Local indecomposability of Tate modules of non-CM abelian varieties with real multiplication. J. Amer. Math. Soc. 26 (2013) 853-877. Abstract, references, and article information View Article: PDF [6] Oleg Demchenko and Alexander Gurevich. Reciprocity laws through formal groups. Proc. Amer. Math. Soc. 141 (2013) 1591-1596. Abstract, references, and article information View Article: PDF [7] Peter Scholze. The Langlands-Kottwitz method and deformation spaces of $p$-divisible groups. J. Amer. Math. Soc. 26 (2013) 227-259. Abstract, references, and article information View Article: PDF [8] Eike Lau. Smoothness of the truncated display functor. J. Amer. Math. Soc. 26 (2013) 129-165. Abstract, references, and article information View Article: PDF [9] N. P. Strickland. Multicurves and equivariant cohomology. Memoirs of the AMS 213 (2011) MR 2856125. Book volume table of contents [10] Takeshi Torii. HKR characters, $p$-divisible groups and the generalized Chern character. Trans. Amer. Math. Soc. 362 (2010) 6159-6181. MR 2661512. Abstract, references, and article information View Article: PDF This article is available free of charge [11] Jeffrey D. Achter and Peter Norman. Local monodromy of $p$-divisible groups. Trans. Amer. Math. Soc. 362 (2010) 985-1007. MR 2551513. Abstract, references, and article information View Article: PDF This article is available free of charge [12] M. V. Bondarko. Classification of finite commutative group schemes over complete discrete valuation rings; the tangent space and semistable reduction of Abelian varieties. St. Petersburg Math. J. 18 (2007) 737-755. MR 2301041. Abstract, references, and article information View Article: PDF This article is available free of charge [13] M. V. Bondarko. Isogeny classes of formal groups over complete discrete valuation fields with arbitrary residue fields. St. Petersburg Math. J. 17 (2006) 975-988. MR 2202046. Abstract, references, and article information View Article: PDF This article is available free of charge [14] Tyler Lawson. Realizability of the Adams-Novikov spectral sequence for formal $A$-modules. Proc. Amer. Math. Soc. 135 (2007) 883-890. MR 2262886. Abstract, references, and article information View Article: PDF This article is available free of charge [15] Alexandru Buium. Arithmetic Differential Equations. Math. Surveys Monogr. 118 (2005) MR MR2166202. Book volume table of contents [16] Alexandru Buium. Flat correspondences. Math. Surveys Monogr. 118 (2005) 185-225. Book volume table of contents View Article: PDF [17] Alexandru Buium. Global theory. Math. Surveys Monogr. 118 (2005) 71-105. Book volume table of contents View Article: PDF [18] Alexandru Buium. Preliminaries from algebraic geometry. Math. Surveys Monogr. 118 (2005) 3-30. 
Book volume table of contents View Article: PDF [19] Alexandru Buium. Outline of $\delta$--geometry. Math. Surveys Monogr. 118 (2005) 31-67. Book volume table of contents View Article: PDF [20] Alexandru Buium. Birational theory. Math. Surveys Monogr. 118 (2005) 141-158. Book volume table of contents View Article: PDF [21] Alexandru Buium. Local theory. Math. Surveys Monogr. 118 (2005) 107-140. Book volume table of contents View Article: PDF [22] Alexandru Buium. Spherical correspondences. Math. Surveys Monogr. 118 (2005) 161-183. Book volume table of contents View Article: PDF [23] Alexandru Buium. Hyperbolic correspondences. Math. Surveys Monogr. 118 (2005) 227-297. Book volume table of contents View Article: PDF [24] Robert G. Underwood and Lindsay N. Childs. Duality for Hopf orders. Trans. Amer. Math. Soc. 358 (2006) 1117-1163. MR 2187648. Abstract, references, and article information View Article: PDF This article is available free of charge [25] Frans Oort. Foliations in moduli spaces of abelian varieties. J. Amer. Math. Soc. 17 (2004) 267-296. MR 2051612. Abstract, references, and article information View Article: PDF This article is available free of charge [26] Hirofumi Nakai and Douglas C. Ravenel. The first cohomology group of the generalized Morava stabilizer algebra. Proc. Amer. Math. Soc. 131 (2003) 1629-1639. MR 1950296. Abstract, references, and article information View Article: PDF This article is available free of charge [27] Lindsay N. Childs. Principal homogeneous spaces and formal groups. Math. Surveys Monogr. 80 (2000) 191-204. Book volume table of contents View Article: PDF [28] Lindsay N. Childs. Hopf algebras of rank $p^2$. Math. Surveys Monogr. 80 (2000) 149-159. Book volume table of contents View Article: PDF [29] Lindsay N. Childs. Formal groups. Math. Surveys Monogr. 80 (2000) 171-190. Book volume table of contents View Article: PDF [30] Lindsay N. Childs. Cyclic extensions of degree $p$. Math. Surveys Monogr. 80 (2000) 113-128. Book volume table of contents View Article: PDF
Results: 1 to 30 of 66 found Go to page: 1 2 3 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450991153717041, "perplexity": 4853.8511415992625}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00194-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://wikien3.appspot.com/wiki/Probability_amplitude | # Probability amplitude
A wave function for a single electron in the 5d atomic orbital of a hydrogen atom. The solid body shows the places where the electron's probability density is above a certain value (here 0.02 nm⁻³), as calculated from the probability amplitude. The hue on the colored surface shows the complex phase of the wave function.
In quantum mechanics, a probability amplitude is a complex number used in describing the behaviour of systems. The modulus squared of this quantity represents a probability or probability density.
Probability amplitudes provide a relationship between the wave function (or, more generally, of a quantum state vector) of a system and the results of observations of that system, a link first proposed by Max Born. Interpretation of values of a wave function as the probability amplitude is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding (see References), and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements, were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein. It is the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics—topics that continue to be debated even today.
## Overview
### Physical
Neglecting some technical complexities, the problem of quantum measurement is the behaviour of a quantum state, for which the value of the observable Q to be measured is uncertain. Such a state is thought to be a coherent superposition of the observable's eigenstates, states on which the value of the observable is uniquely defined, for different possible values of the observable.
When a measurement of Q is made, the system (under the Copenhagen interpretation) jumps to one of the eigenstates, returning the eigenvalue to which the state belongs. The superposition of states can give them unequal "weights". Intuitively it is clear that eigenstates with heavier "weights" are more "likely" to be produced. Indeed, which of the above eigenstates the system jumps to is given by a probabilistic law: the probability of the system jumping to the state is proportional to the absolute value of the corresponding numerical factor squared. These numerical factors are called probability amplitudes, and this relationship used to calculate probabilities from given pure quantum states (such as wave functions) is called the Born rule.
Different observables may define incompatible decompositions of states.[clarification needed] Observables that do not commute define probability amplitudes on different sets.
### Mathematical
In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. A usual presentation of that Hilbert space is a special function space, called L2(X), on certain set X, that is either some configuration space or a discrete set.
For a measurable function ${\displaystyle \psi }$, the condition ${\displaystyle \psi \in L^{2}(X)}$ specifies that a finitely bounded integral must apply:
${\displaystyle \int \limits _{X}|\psi (x)|^{2}\,\mathrm {d} \mu (x)<\infty ;}$
this integral defines the square of the norm of ψ. If that norm is equal to 1, then
${\displaystyle \int \limits _{X}|\psi (x)|^{2}\,\mathrm {d} \mu (x)=1.}$
It actually means that any element of L2(X) of the norm 1 defines a probability measure on X and a non-negative real expression |ψ(x)|2 defines its Radon–Nikodym derivative with respect to the standard measure μ.
If the standard measure μ on X is non-atomic, such as the Lebesgue measure on the real line, or on three-dimensional space, or similar measures on manifolds, then a real-valued function |ψ(x)|² is called a probability density; see details below. If the standard measure on X consists of atoms only (we shall call such sets X discrete), and specifies the measure of any x ∈ X equal to 1,[1] then an integral over X is simply a sum[2] and |ψ(x)|² defines the value of the probability measure on the set {x}, in other words, the probability that the quantum system is in the state x. How amplitudes and the vector are related can be understood with the standard basis of L²(X), elements of which will be denoted by |x⟩ or ⟨x| (see bra–ket notation for the angle bracket notation). In this basis
${\displaystyle \psi (x)=\langle x|\Psi \rangle }$
specifies the coordinate presentation of an abstract vector |Ψ⟩.
Mathematically, many L² presentations of the system's Hilbert space can exist. We shall consider not an arbitrary one, but a convenient one for the observable Q in question. A convenient configuration space X is such that each point x produces some unique value of Q. For discrete X it means that all elements of the standard basis are eigenvectors of Q. In other words, Q shall be diagonal in that basis. Then ${\displaystyle \psi (x)}$ is the "probability amplitude" for the eigenstate ⟨x|. If it corresponds to a non-degenerate eigenvalue of Q, then ${\displaystyle |\psi (x)|^{2}}$ gives the probability of the corresponding value of Q for the initial state |Ψ⟩.
For non-discrete X there may not be such states as x| in L2(X), but the decomposition is in some sense possible; see spectral theory and Spectral theorem for accurate explanation.
## Wave functions and probabilities
If the configuration space X is continuous (something like the real line or Euclidean space, see above), then there are no valid quantum states corresponding to particular x ∈ X, and the probability that the system is "in the state x" will always be zero. An archetypical example of this is the L²(R) space constructed with 1-dimensional Lebesgue measure; it is used to study a motion in one dimension. This presentation of the infinite-dimensional Hilbert space corresponds to the spectral decomposition of the coordinate operator: ⟨x| Q | Ψ⟩ = x⟨x | Ψ⟩, x ∈ R in this example. Although there are no such vectors as ⟨x |, strictly speaking, the expression ⟨x | Ψ⟩ can be made meaningful, for instance, with spectral theory.
Generally, it is the case when the motion of a particle is described in the position space, where the corresponding probability amplitude function ψ is the wave function.
If the function ψ ∈ L²(X), ‖ψ‖ = 1 represents the quantum state vector |Ψ⟩, then the real expression |ψ(x)|², that depends on x, forms a probability density function of the given state. The difference of a density function from simply a numerical probability means that one should integrate this modulus-squared function over some (small) domains in X to obtain probability values – as was stated above, the system can't be in some state x with a positive probability. It gives to both amplitude and density function a physical dimension, unlike a dimensionless probability. For example, for a 3-dimensional wave function, the amplitude has the dimension [L^(−3/2)], where L is length.
Note that for both continuous and infinite discrete cases not every measurable, or even smooth function (i.e. a possible wave function) defines an element of L²(X); see the Normalization section below.
## Discrete amplitudes
When the set X is discrete (see above), vectors |Ψ⟩ represented with the Hilbert space L²(X) are just column vectors composed of "amplitudes" and indexed by X. These are sometimes referred to as wave functions of a discrete variable x ∈ X. Discrete dynamical variables are used in such problems as a particle in an idealized reflective box and the quantum harmonic oscillator. Components of the vector will be denoted by ψ(x) for uniformity with the previous case; there may be either a finite or an infinite number of components depending on the Hilbert space. In this case, if the vector |Ψ⟩ has the norm 1, then |ψ(x)|² is just the probability that the quantum system resides in the state x. It defines a discrete probability distribution on X.
|ψ(x)| = 1 if and only if |x⟩ is the same quantum state as |Ψ⟩. ψ(x) = 0 if and only if |x⟩ and |Ψ⟩ are orthogonal (see inner product space). Otherwise the modulus of ψ(x) is between 0 and 1.
A discrete probability amplitude may be considered as a fundamental frequency[citation needed] in the Probability Frequency domain (spherical harmonics) for the purposes of simplifying M-theory transformation calculations.
## A basic example
Take the simplest meaningful example of the discrete case: a quantum system that can be in two possible states: for example, the polarization of a photon. When the polarization is measured, it could be the horizontal state | H ⟩, or the vertical state | V ⟩. Until its polarization is measured the photon can be in a superposition of both these states, so its state |ψ⟩ could be written as:
${\displaystyle |\psi \rangle =\alpha |H\rangle +\beta |V\rangle ,\,}$
The probability amplitudes of |ψ⟩ for the states | H ⟩ and | V ⟩ are α and β respectively. When the photon's polarization is measured, the resulting state is either horizontal or vertical. But in a random experiment, the probability of being horizontally polarized is |α|², and the probability of being vertically polarized is |β|².
Therefore, a photon in a state ${\displaystyle |\psi \rangle ={\sqrt {1 \over 3}}|H\rangle -i{\sqrt {2 \over 3}}|V\rangle }$ would have a probability of 1/3 to come out horizontally polarized, and a probability of 2/3 to come out vertically polarized when an ensemble of measurements is made. The order of such results is, however, completely random.
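As a small numerical illustration of the Born rule for this two-state example (NumPy and the variable names are my own choices; the amplitudes are the ones given just above):

```python
import numpy as np

# Photon state in the {|H>, |V>} basis, with the amplitudes from the example above.
alpha = np.sqrt(1 / 3)           # amplitude on |H>
beta = -1j * np.sqrt(2 / 3)      # amplitude on |V>
state = np.array([alpha, beta])

probs = np.abs(state) ** 2       # Born rule: squared moduli of the amplitudes
print(probs)                     # [0.3333... 0.6666...]
print(probs.sum())               # 1.0, so the state is normalized
```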
## Normalization
In the example above, the measurement must give either | H ⟩ or | V ⟩, so the total probability of measuring | H ⟩ or | V ⟩ must be 1. This leads to a constraint that |α|² + |β|² = 1; more generally the sum of the squared moduli of the probability amplitudes of all the possible states is equal to one. If "all the possible states" is understood as an orthonormal basis, which makes sense in the discrete case, then this condition is the same as the norm-1 condition explained above.
One can always divide any non-zero element of a Hilbert space by its norm and obtain a normalized state vector. Not every wave function belongs to the Hilbert space L2(X), though. Wave functions that fulfill this constraint are called normalizable.
The Schrödinger wave equation, describing states of quantum particles, has solutions that describe a system and determine precisely how the state changes with time. Suppose a wavefunction ψ0(x, t) is a solution of the wave equation, giving a description of the particle (position x, for time t). If the wavefunction is square integrable, i.e.
${\displaystyle \int _{\mathbf {R} ^{n}}|\psi _{0}(\mathbf {x} ,t_{0})|^{2}\,\mathrm {d\mathbf {x} } =a^{2}<\infty }$
for some t0, then ψ = ψ0/a is called the normalized wavefunction. Under the standard Copenhagen interpretation, the normalized wavefunction gives probability amplitudes for the position of the particle. Hence, at a given time t0, ρ(x) = |ψ(x, t0)|2 is the probability density function of the particle's position. Thus the probability that the particle is in the volume V at t0 is
${\displaystyle \mathbf {P} (V)=\int _{V}\rho (\mathbf {x} )\,\mathrm {d\mathbf {x} } =\int _{V}|\psi (\mathbf {x} ,t_{0})|^{2}\,\mathrm {d\mathbf {x} } .}$
Note that if any solution ψ0 to the wave equation is normalisable at some time t0, then the ψ defined above is always normalised, so that
${\displaystyle \rho _{t}(\mathbf {x} )=\left|\psi (\mathbf {x} ,t)\right|^{2}=\left|{\frac {\psi _{0}(\mathbf {x} ,t)}{a}}\right|^{2}}$
is always a probability density function for all t. This is key to understanding the importance of this interpretation: given the particle's constant mass, the initial ψ(x, 0) and the potential, the Schrödinger equation fully determines the subsequent wavefunction, and the above then gives probabilities of locations of the particle at all subsequent times.
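A minimal numerical sketch of this normalization step (the Gaussian profile, the grid, and the variable names are made-up illustrative choices, not anything from the article):

```python
import numpy as np

# Discretize a 1-D wavefunction on a grid and normalize it so that the
# integral of |psi|^2 over all x is 1 (illustrative Gaussian wave packet).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi0 = np.exp(-x**2 / 4.0) * np.exp(1j * 1.5 * x)   # unnormalized psi_0
a = np.sqrt(np.sum(np.abs(psi0)**2) * dx)           # its norm
psi = psi0 / a                                      # normalized wavefunction

print(np.sum(np.abs(psi)**2) * dx)                  # ~1.0

# Probability of finding the particle in the region 0 <= x <= 2:
mask = (x >= 0) & (x <= 2)
print(np.sum(np.abs(psi[mask])**2) * dx)
```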
## The laws of calculating probabilities of events
A. Provided a system evolves naturally (which under the Copenhagen interpretation means that the system is not subjected to measurement), the following laws apply:
1. The probability (or the density of probability in position/momentum space) of an event to occur is the square of the absolute value of the probability amplitude for the event: ${\displaystyle P=|\phi |^{2}}$.
2. If there are several mutually exclusive, indistinguishable alternatives in which an event might occur (or, in realistic interpretations of wavefunction, several wavefunctions exist for a space-time event), the probability amplitudes of all these possibilities add to give the probability amplitude for that event: ${\displaystyle \phi =\sum _{i}\phi _{i};P=|\phi |^{2}=\left|\sum _{i}\phi _{i}\right|^{2}}$.
3. If, for any alternative, there is a succession of sub-events, then the probability amplitude for that alternative is the product of the probability amplitude for each sub-event: ${\displaystyle \phi _{APB}=\phi _{AP}\phi _{PB}}$.
4. Non-entangled states of a composite quantum system have amplitudes equal to the product of the amplitudes of the states of constituent systems: ${\displaystyle \phi _{\rm {system}}(\alpha ,\beta ,\gamma ,\delta ,\ldots )=\phi _{1}(\alpha )\phi _{2}(\beta )\phi _{3}(\gamma )\phi _{4}(\delta )\ldots }$. See the #Composite systems section for more information.
Law 2 is analogous to the addition law of probability, only the probability being substituted by the probability amplitude. Similarly, Law 4 is analogous to the multiplication law of probability for independent events; note that it fails for entangled states.
B. When an experiment is performed to decide between the several alternatives, the same laws hold true for the corresponding probabilities: ${\displaystyle P=\sum _{i}|\phi _{i}|^{2}}$.
Provided one knows the probability amplitudes for events associated with an experiment, the above laws provide a complete description of quantum systems in terms of probabilities.
The above laws give way to the path integral formulation of quantum mechanics, in the formalism developed by the celebrated theoretical physicist Richard Feynman. This approach to quantum mechanics forms the stepping-stone to the path integral approach to quantum field theory.
## In the context of the double-slit experiment
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, in the classic double-slit experiment, electrons are fired randomly at two slits, and the probability distribution of detecting electrons at all parts on a large screen placed behind the slits is asked for. An intuitive answer is that P(through either slit) = P(through first slit) + P(through second slit), where P(event) is the probability of that event. This is obvious if one assumes that an electron passes through either slit. When nature does not have a way to distinguish which slit the electron has gone through (a much more stringent condition than simply "it is not observed"), the observed probability distribution on the screen reflects the interference pattern that is common with light waves. If one assumes the above law to be true, then this pattern cannot be explained. The particles cannot be said to go through either slit and the simple explanation does not work. The correct explanation is, however, obtained by associating a probability amplitude to each event. This is an example of case A as described in the previous section. The complex amplitudes which represent the electron passing each slit (ψfirst and ψsecond) follow the law of precisely the form expected: ψtotal = ψfirst + ψsecond. This is the principle of quantum superposition. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex:
${\displaystyle P=|\psi _{\rm {first}}+\psi _{\rm {second}}|^{2}=|\psi _{\rm {first}}|^{2}+|\psi _{\rm {second}}|^{2}+2|\psi _{\rm {first}}||\psi _{\rm {second}}|\cos(\varphi _{1}-\varphi _{2}).}$
Here, ${\displaystyle \varphi _{1}}$ and ${\displaystyle \varphi _{2}}$ are the arguments of ψfirst and ψsecond respectively. A purely real formulation has too few dimensions to describe the system's state when superposition is taken into account. That is, without the arguments of the amplitudes, we cannot describe the phase-dependent interference. The crucial term ${\displaystyle 2|\psi _{\rm {first}}||\psi _{\rm {second}}|\cos(\varphi _{1}-\varphi _{2})}$ is called the "interference term", and this would be missing if we had added the probabilities.
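A tiny numerical sketch of this comparison at a single point on the screen (the two amplitude values are made up purely for illustration):

```python
import numpy as np

# Made-up complex amplitudes for "through slit 1" and "through slit 2"
# at one point on the screen.
psi1 = 0.6 * np.exp(1j * 0.3)
psi2 = 0.5 * np.exp(1j * 2.1)

p_quantum = np.abs(psi1 + psi2) ** 2                       # add amplitudes, then square
p_no_interference = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # naive sum of probabilities
interference = 2 * abs(psi1) * abs(psi2) * np.cos(0.3 - 2.1)

print(p_quantum)                          # differs from the naive sum of probabilities
print(p_no_interference + interference)   # equals p_quantum, as in the formula above
```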
However, one may choose to devise an experiment in which the experimenter observes which slit each electron goes through. Then case B of the above article applies, and the interference pattern is not observed on the screen.
One may go further in devising an experiment in which the experimenter gets rid of this "which-path information" by a "quantum eraser". Then, according to the Copenhagen interpretation, the case A applies again and the interference pattern is restored.[3]
## Conservation of probabilities and the continuity equation
Intuitively, since a normalised wave function stays normalised while evolving according to the wave equation, there will be a relationship between the change in the probability density of the particle's position and the change in the amplitude at these positions.
Define the probability current (or flux) j as
${\displaystyle \mathbf {j} ={\hbar \over m}{1 \over {2i}}\left(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*}\right)={\hbar \over m}\operatorname {Im} \left(\psi ^{*}\nabla \psi \right),}$
measured in units of (probability)/(area × time).
Then the current satisfies the equation
${\displaystyle \nabla \cdot \mathbf {j} +{\partial \over \partial t}|\psi |^{2}=0.}$
Since the probability density is ${\displaystyle \rho =|\psi |^{2}}$, this equation is exactly the continuity equation, appearing in many situations in physics where we need to describe the local conservation of quantities. The best example is in classical electrodynamics, where j corresponds to the current density of electric charge, and the density is the charge density. The corresponding continuity equation describes the local conservation of charge.[clarification needed]
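As a small symbolic check (using SymPy; the plane-wave form and the symbols below are my own illustrative choice), the probability current defined above evaluates to ħk|A|²/m for a plane wave ψ = A e^(ikx):

```python
import sympy as sp

x, k, A, hbar, m = sp.symbols('x k A hbar m', real=True, positive=True)
psi = A * sp.exp(sp.I * k * x)                             # plane wave

flux = sp.simplify(sp.conjugate(psi) * sp.diff(psi, x))    # = I*A**2*k
j = (hbar / m) * sp.im(flux)
print(j)   # A**2*hbar*k/m, i.e. (hbar*k/m)*|A|**2
```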
## Composite systems
For two quantum systems with spaces L²(X₁) and L²(X₂) and given states |Ψ₁⟩ and |Ψ₂⟩ respectively, their combined state |Ψ₁⟩ ⊗ |Ψ₂⟩ can be expressed as ψ₁(x₁) ψ₂(x₂), a function on X₁×X₂, that gives the product of respective probability measures. In other words, amplitudes of a non-entangled composite state are products of original amplitudes, and respective observables on the systems 1 and 2 behave on these states as independent random variables. This strengthens the probabilistic interpretation explicated above.
## Amplitudes in operators
The concept of amplitudes described above is relevant to quantum state vectors. It is also used in the context of unitary operators that are important in the scattering theory, notably in the form of S-matrices. Whereas moduli of vector components squared, for a given vector, give a fixed probability distribution, moduli of matrix elements squared are interpreted as transition probabilities just as in a random process. Like a finite-dimensional unit vector specifies a finite probability distribution, a finite-dimensional unitary matrix specifies transition probabilities between a finite number of states. Note that columns of a unitary matrix, as vectors, have the norm 1.
The "transitional" interpretation may be applied to L2s on non-discrete spaces as well. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855579733848572, "perplexity": 407.4474812789907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528490.48/warc/CC-MAIN-20191210180555-20191210204555-00472.warc.gz"} |
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=13C05&jrnl=one&onejrnl=tran | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(13C05) AND publication=(tran) Sort order: Date Format: Standard display
[1] Jan Šťovíček and David Pospíšil. On compactly generated torsion pairs and the classification of co-$t$-structures for commutative noetherian rings. Trans. Amer. Math. Soc. 368 (2016) 6325-6361. Abstract, references, and article information View Article: PDF [2] Steven V Sam and Andrew Snowden. GL-equivariant modules over polynomial rings in infinitely many variables. Trans. Amer. Math. Soc. 368 (2016) 1097-1158. Abstract, references, and article information View Article: PDF [3] Lidia Angeleri Hügel, David Pospíšil, Jan Šťovíček and Jan Trlifaj. Tilting, cotilting, and spectra of commutative noetherian rings. Trans. Amer. Math. Soc. 366 (2014) 3487-3517. Abstract, references, and article information View Article: PDF [4] Peter Vámos and Sylvia Wiegand. Block diagonalization and $2$-unit sums of matrices over Prüfer domains. Trans. Amer. Math. Soc. 363 (2011) 4997-5020. MR 2806699. Abstract, references, and article information View Article: PDF [5] Mark Hovey. Erratum to Classifying subcategories of modules''. Trans. Amer. Math. Soc. 360 (2008) 2809-2809. MR 2373334. Abstract, references, and article information View Article: PDF This article is available free of charge [6] Lidia Angeleri Hügel, Silvana Bazzoni and Dolors Herbera. A solution to the Baer splitting problem. Trans. Amer. Math. Soc. 360 (2008) 2409-2421. MR 2373319. Abstract, references, and article information View Article: PDF This article is available free of charge [7] Wolfgang Hassler, Ryan Karr, Lee Klingler and Roger Wiegand. Indecomposable modules of large rank over Cohen-Macaulay local rings. Trans. Amer. Math. Soc. 360 (2008) 1391-1406. MR 2357700. Abstract, references, and article information View Article: PDF This article is available free of charge [8] Hung Le Pham. The kernels of radical homomorphisms and intersections of prime ideals. Trans. Amer. Math. Soc. 360 (2008) 1057-1088. MR 2346483. Abstract, references, and article information View Article: PDF This article is available free of charge [9] Robert G. Underwood and Lindsay N. Childs. Duality for Hopf orders. Trans. Amer. Math. Soc. 358 (2006) 1117-1163. MR 2187648. Abstract, references, and article information View Article: PDF This article is available free of charge [10] David Eisenbud and Jerzy Weyman. Fitting's Lemma for $\mathbb{Z}/2$-graded modules. Trans. Amer. Math. Soc. 355 (2003) 4451-4473. MR 1990758. Abstract, references, and article information View Article: PDF This article is available free of charge [11] Mark Hovey. Classifying subcategories of modules. Trans. Amer. Math. Soc. 353 (2001) 3181-3191. MR 1828603. Abstract, references, and article information View Article: PDF This article is available free of charge [12] Steve Files and Rüdiger Göbel. Representations over PID's with three distinguished submodules. Trans. Amer. Math. Soc. 352 (2000) 2407-2427. MR 1491863. Abstract, references, and article information View Article: PDF This article is available free of charge [13] Mihai Cipu, Jürgen Herzog and Dorin Popescu. Indecomposable generalized Cohen-Macaulay modules . Trans. Amer. Math. Soc. 342 (1994) 107-136. MR 1104198. Abstract, references, and article information View Article: PDF This article is available free of charge [14] Andrew R. Kustin. Classification of the Tor-algebras of codimension four almost complete intersections . Trans. Amer. Math. Soc. 339 (1993) 61-85. MR 1132435. 
Abstract, references, and article information View Article: PDF This article is available free of charge [15] Winfried Bruns, Aron Simis and Ngô Viêt Trung. Blow-up of straightening-closed ideals in ordinal Hodge algebras . Trans. Amer. Math. Soc. 326 (1991) 507-528. MR 1005076. Abstract, references, and article information View Article: PDF This article is available free of charge [16] Dorin Popescu. Indecomposable Cohen-Macaulay modules and their multiplicities . Trans. Amer. Math. Soc. 323 (1991) 369-387. MR 979959. Abstract, references, and article information View Article: PDF This article is available free of charge [17] J. P. Brennan, M. V. Pinto and W. V. Vasconcelos. The Jacobian module of a Lie algebra . Trans. Amer. Math. Soc. 321 (1990) 183-196. MR 958883. Abstract, references, and article information View Article: PDF This article is available free of charge [18] Bernd Ulrich. Sums of linked ideals . Trans. Amer. Math. Soc. 318 (1990) 1-42. MR 964902. Abstract, references, and article information View Article: PDF This article is available free of charge [19] Mutsumi Amasaki. Application of the generalized Weierstrass preparation theorem to the study of homogeneous ideals . Trans. Amer. Math. Soc. 317 (1990) 1-43. MR 992603. Abstract, references, and article information View Article: PDF This article is available free of charge [20] Ernst Dieterich and Alfred Wiedemann. The Auslander-Reiten quiver of a simple curve singularity . Trans. Amer. Math. Soc. 294 (1986) 455-475. MR 825715. Abstract, references, and article information View Article: PDF This article is available free of charge [21] E. Graham Evans and Phillip Griffith. Filtering cohomology and lifting vector bundles . Trans. Amer. Math. Soc. 289 (1985) 321-332. MR 779066. Abstract, references, and article information View Article: PDF This article is available free of charge [22] R. Douglas Williams. Primary ideals in rings of analytic functions . Trans. Amer. Math. Soc. 177 (1973) 37-49. MR 0320760. Abstract, references, and article information View Article: PDF This article is available free of charge [23] Jack Ohm and David E. Rush. The finiteness of $I$ when ${\it R}[{\it X}]/{\it I}$ is flat . Trans. Amer. Math. Soc. 171 (1972) 377-408. MR 0306176. Abstract, references, and article information View Article: PDF This article is available free of charge
Results: 1 to 23 of 23 found Go to page: 1 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8647622466087341, "perplexity": 2465.1678682645643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660864.21/warc/CC-MAIN-20160924173740-00264-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://brilliant.org/problems/is-it-straight-forward/ | Is it straight forward?
Algebra Level 5
Given complex numbers $$z_1,z_2$$ satisfying:
• $$|z_1|=5,|z_2|=13$$
• $$39z_1-15z_2=7(4+7i)$$,
If $$z_1z_2=a+bi$$ with real numbers $$a,b$$, find $$2a+b$$
Notation: $$|z|$$ denotes the absolute value of complex number $$z$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500218033790588, "perplexity": 2676.2739151796422}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428325.70/warc/CC-MAIN-20170727162531-20170727182531-00149.warc.gz"} |
https://www.research.ed.ac.uk/portal/en/publications/measurements-of-bjet-tagging-efficiency-with-the-atlas-detector-using--toverlinet--events-at--sqrts13--tev(3c17b38e-7eca-42df-8475-533d5a6c6ffe).html | Measurements of b-jet tagging efficiency with the ATLAS detector using $t\overline{t}$ events at $\sqrt{s}=13$ TeV
Research output: Contribution to journal › Article
Original language: English · Article number: 089 · Journal: Journal of High Energy Physics, volume 1808 · DOI: https://doi.org/10.1007/JHEP08(2018)089 · Published: 16 Aug 2018
Abstract
The efficiency to identify jets containing $b$-hadrons ($b$-jets) is measured using a high purity sample of dileptonic top quark-antiquark pairs ($t\bar{t}$) selected from the 36.1 fb$^{-1}$ of data collected by the ATLAS detector in 2015 and 2016 from proton-proton collisions produced by the Large Hadron Collider at a centre-of-mass energy $\sqrt{s}=13$ TeV. Two methods are used to extract the efficiency from $t\bar{t}$ events, a combinatorial likelihood approach and a tag-and-probe method. A boosted decision tree, not using $b$-tagging information, is used to select events in which two $b$-jets are present, which reduces the dominant uncertainty in the modelling of the flavour of the jets. The efficiency is extracted for jets in a transverse momentum range from 20 to 300 GeV, with data-to-simulation scale factors calculated by comparing the efficiency measured using collision data to that predicted by the simulation. The two methods give compatible results, and achieve a similar level of precision, measuring data-to-simulation scale factors close to unity with uncertainties ranging from 2% to 12% depending on the jet transverse momentum.
• hep-ex | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475839734077454, "perplexity": 1599.958335674406}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00373.warc.gz"} |
http://mathforum.org/mathimages/index.php?title=Fibonacci_Numbers&oldid=14709 | # Fibonacci Numbers
Fibonacci Spiral
The spiral curve of the Nautilus sea shell follows the pattern of a spiral drawn in a Fibonacci rectangle, a collection of squares with sides that have the length of Fibonacci numbers.
# Basic Description
The Fibonacci sequence is the sequence $1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \ldots,$ where the first two numbers are 1s and every later number is the sum of the two previous numbers. So, given two $1$'s as the first two terms, the next terms of the sequence follow as: $1+1=2, 1+2=3, 2+3=5, 3+5=8, \dots$
Image 1
The Fibonacci numbers can be discovered in nature, for example in the spiral of the Nautilus sea shell, the petals of flowers, the seed head of a sunflower, and many other places. The seeds at the head of the sunflower, for instance, are arranged so that one can find a collection of spirals in both clockwise and counterclockwise directions. Different patterns of spirals are formed depending on whether one looks in a clockwise or a counterclockwise direction; thus, the number of spirals also differs depending on the counting direction, as shown by Image 1. The two numbers of spirals are always consecutive numbers in the Fibonacci sequence.
Nature prefers this way of arranging seeds because it seems to allow the seeds to be uniformly distributed. For more information about Fibonacci patterns in nature, see Fibonacci Numbers in Nature
## Origin
The Fibonacci sequence was studied by Leonardo of Pisa, or Fibonacci (c. 1170-1240). In his work Liber Abaci, he introduced a problem involving the growth of the rabbit population. The assumptions were
• there is one pair of baby rabbits placed in an enclosed place on the first day of January
• this pair will grow for one month before reproducing and produce a new pair of baby rabbits on the first day of March
• each new pair will mature for one month and produce a new pair of rabbits on the first day of their third month
• the rabbits never die, so after they mature, the rabbits produce a new pair of baby rabbits every month.
The problem was to find out how many pairs of rabbits there will be after one year.
Image 2
On January 1st, there is only 1 pair. On February 1st, this baby rabbits matured to be grown up rabbits, but they have not reproduced, so there will only be the original pair present.
Now look at any later month. June is a good example. As you can see in Image 2, all 5 pairs of rabbits that were alive in May continue to be alive in June. Furthermore, there are 3 new pairs of rabbits born in June, one for each pair that was alive in April (and are therefore old enough to reproduce in June).
This means that on June 1st, there are 5 + 3 = 8 pairs of rabbits. This same reasoning can be applied to any month, March or later, so the number of rabbit pairs in any month is the same as the sum of the number of rabbit pairs in the two previous months.
This is exactly the rule that defines the Fibonacci sequence. As you can see in the image, the population by month begins: 1, 1, 2, 3, 5, 8, ..., which is the same as the beginning of the Fibonacci sequence. The population continues to match the Fibonacci sequence no matter how many months out you go.
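A short sketch of this month-by-month count (the month labels and the one-year cutoff are just presentation choices):

```python
# Pairs of rabbits at the start of each month: each month's count is the sum
# of the counts of the two previous months.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

pairs = [1, 1]                          # January and February
for _ in range(len(months) - 2):
    pairs.append(pairs[-1] + pairs[-2])

for month, count in zip(months, pairs):
    print(month, count)                 # Jan 1, Feb 1, Mar 2, Apr 3, May 5, Jun 8, ...
```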
An interesting fact is that this rabbit population problem was not intended to explain the Fibonacci numbers. The problem was originally intended to introduce the Hindu-Arabic numerals to Western Europe, where people were still using Roman numerals, and to help people practice addition. It was a coincidence that the number of rabbits followed a certain pattern, which people later named the Fibonacci sequence.
# Fibonacci Numbers in Nature
### Leaf Arrangement
Fibonacci numbers appear in the arrangement of leaves in certain plants. Take a plant, locate the lowest leaf and number that leaf as 0. Number the leaves by order of creation starting from 0, as shown in Image 3. Then, count the number of leaves you encounter until you reach the next leaf that is directly above and pointing in the same direction as the lowest leaf, which is the leaf with number 8 in this image. The number of leaves you pass, in this case, 8, will be a Fibonacci number.
Image 3
Moreover, the number of rotations you make around the stem until you reach that leaf will also be a Fibonacci number. You make rotations up the stem by following ascending order of the leaf's number. In the image, if you follow the red arrows, the number of rotations you make until you reach 8 will be 5, which is a Fibonacci number.
In Image 4, the leaf that points in the same direction as the lowest leaf 0 is leaf number 13. The number of leaves you pass going from leaf 0 up to that leaf is 13, which is a Fibonacci number. Moreover, going up the stem in a clockwise direction, following leaves 0, 1, 2, ..., 13, we make 8 rotations, and going up the stem in a counterclockwise direction, we make 5 rotations. The numbers of clockwise and counterclockwise rotations are always consecutive Fibonacci numbers.
Image 4
### Spirals
Image 5
Fibonacci numbers can be seen in nature through spiral forms that can be constructed from Fibonacci rectangles, as shown in Image 5. Fibonacci rectangles are rectangles built so that the ratio of the length to the width is the ratio of two consecutive Fibonacci numbers.
We can build Fibonacci rectangles first by drawing two squares with length 1 next to each other. Then, we draw a new square with length 2 that is touching the sides of the original two squares. We draw another square with length 3 that is touching one unit square and the latest square with length 2. We can build Fibonacci rectangles by continuing to draw new squares that have the same length as the sum of the length of the latest two squares.
After building the Fibonacci rectangles, we can draw a spiral through the squares, each square containing a quarter of a circle. Such a spiral is called the Fibonacci spiral, and it can be seen in sea shells, snails, spiral galaxies, and other parts of nature, as shown in Image 6 and Image 7.
Image 6
Image 7
### Ancestry of Bees
Fibonacci numbers also appear when studying the ancestry of bees. Bees reproduce according to the following rules:
• male bees hatch from an unfertilized egg, and have only a mother and no father,
• female bees hatch from a fertilized egg, and require both a mother and a father.
The table below starts with a male bee and tracks his ancestors. Only one female was needed to produce the male bee. This female bee, on the other hand, must have had both a mother and a father to hatch; thus, the third row of the bee family tree has one male and one female.
This pattern repeats for every male and female. When we count the number of bees in each generation, we get the Fibonacci sequence as we go up the generations, much as we did in the rabbit population problem.
Image 8
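The generation counts can also be computed directly from the reproduction rules. The Python sketch below is illustrative only (the function name and the choice of ten generations are arbitrary); it counts the male and female ancestors of a single male bee, generation by generation, and the totals come out as Fibonacci numbers.

```python
# Count the ancestors of one male bee per generation.
# Rules: a male has one parent (a female); a female has two parents
# (one male and one female). Generation 1 is the bee itself.

def bee_ancestry(generations=10):
    males, females = 1, 0                      # generation 1: the male bee
    for g in range(1, generations + 1):
        print(f"generation {g}: {males} male(s), {females} female(s), "
              f"total {males + females}")
        # Every female in this generation has one father; every bee has one mother.
        males, females = females, males + females

bee_ancestry()   # totals: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
```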
# A More Mathematical Explanation
## Symbolic Definition of Fibonacci Sequence
The Fibonacci sequence is the sequence $F_1, F_2, F_3, \ldots, F_n, \ldots$ where
$F_n = F_{n-1} + F_{n-2} \quad \hbox{ for } n>2$,
and
$F_1 = 1,\ F_2 = 1$.
The Fibonacci sequence is recursively defined because each term is defined in terms of its two immediately preceding terms.
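The recursive definition translates directly into code. The Python sketch below is a small illustration (the function names are mine, not part of the definition): one version mirrors the recurrence literally, and one computes the terms bottom-up so that each term is computed only once.

```python
def fib_recursive(n: int) -> int:
    """Literal translation of F_n = F_(n-1) + F_(n-2) with F_1 = F_2 = 1.
    Simple but slow, since it recomputes the same terms many times."""
    if n <= 2:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Same sequence, computed bottom-up in O(n) additions."""
    a, b = 1, 1                    # F_1, F_2
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([fib_iterative(n) for n in range(1, 11)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```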
## Identities and Properties
### Identities
There are some interesting identities, including formulas for the sum of the first $n$ Fibonacci numbers, the sum of the Fibonacci numbers with odd indices, and the sum of the Fibonacci numbers with even indices. Note that all the identities and properties in this section can be proven more rigorously through mathematical induction.
#### Sum of first $n$ Fibonacci numbers
The sum of the first $n$ Fibonacci numbers is one less than the ${(n+2)}^{\rm th}$ Fibonacci number:
Eq. (1) $F_1+F_2+\dots+F_n=F_{n+2}-1$
For example, the sum of the first $5$ Fibonacci numbers is :
$F_1+F_2+F_3+F_4+F_5 = 1+1+2+3+5 = 12 = 13-1 = F_7-1$
The example is demonstrated below. The total length of the red bars, which correspond to $F_1, F_2, F_3, F_4, F_5$, is one unit shorter than the bar of length $F_7$.
Image 9
To prove this, write each Fibonacci number as the difference of the two Fibonacci numbers that follow it:
$F_1=F_3-F_2$
$F_2=F_4-F_3$
$F_3=F_5-F_4$
$\dots$
$F_{n-1}=F_{n+1}-F_n$
$F_n=F_{n+2}-F_{n+1}$
Adding up all the equations, we get :
$F_1+F_2+\dots+F_n=-F_2+(F_3-F_3)+(F_4-F_4)+ \dots +(F_{n+1}-F_{n+1})+F_{n+2}$
$=F_{n+2}-F_2$
Except for $F_{n+2}$ and $-F_2$, every term on the right side of the equation is canceled by another term with the opposite sign and the same magnitude. Because $F_2=1$, we get :
$F_1+F_2+\dots+F_n=F_{n+2}-1$
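Eq. (1) can also be spot-checked numerically. The Python sketch below is only an illustrative check over a small range (the list `F` is 1-indexed by padding an unused leading 0); it is not a proof.

```python
# Numerical spot-check of Eq. (1): F_1 + F_2 + ... + F_n = F_{n+2} - 1.

F = [0, 1, 1]                      # F[k] is the k-th Fibonacci number; F[0] is unused
while len(F) <= 30:
    F.append(F[-1] + F[-2])

assert all(sum(F[1:n + 1]) == F[n + 2] - 1 for n in range(1, 28))
print("Eq. (1) holds for n = 1, ..., 27")
```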
#### Sum of Fibonacci numbers with odd indices
The sum of the first $n$ Fibonacci numbers with odd indices is equal to the ${(2n)}^{\rm th}$ Fibonacci number:
Eq. (2) $F_1+F_3+F_5+\dots+F_{2n-1}=F_{2n}$
For instance, the sum of the first $4$ Fibonacci numbers with odd indices is:
$F_1+F_3+F_5+F_7=1+2+5+13=21=F_8$
This example is shown below.
Image 10
To prove this, write each odd-indexed Fibonacci number as a difference of even-indexed Fibonacci numbers:
$F_1=F_2$
$F_3=F_4-F_2$
$F_5=F_6-F_4$
$\dots$
$F_{2n-1}=F_{2n}-F_{2n-2}$
Adding all the equations, we get :
$F_1+F_3+F_5+\dots+F_{2n-1}=(F_2-F_2)+(F_4-F_4)+(F_6-F_6)+\dots+(F_{2n-2}-F_{2n-2})+F_{2n}$
$=F_{2n}$
Except for $F_{2n}$, all the terms on the right side of the equation disappear because each term is canceled out by another term that has the opposite sign and the same magnitude.
#### Sum of Fibonacci numbers with even indices
The sum of the first $n$ Fibonacci numbers with even indices is one less than the ${(2n+1)}^{\rm th}$ Fibonacci number:
$F_2+F_4+\dots+F_{2n}=F_{2n+1}-1$
For example, the sum of the first $3$ Fibonacci numbers with even indices is :
$F_2+F_4+F_6 = 1+3+8 = 12 = 13-1 = F_7-1$
This example is shown below.
Image 11
The proof is as follows.
Subtracting Eq. (2), the sum of Fibonacci numbers with odd indices, from the sum of the first $2n$ Fibonacci numbers, we get the identity of the sum of Fibonacci numbers with even indices.
First, using Eq. (1) to find the sum of the first $2n$ Fibonacci numbers, we get:
$F_1+F_2+\dots+F_{2n}=F_{2n+2}-1$
Now, subtract Eq. (2) from the above equation, and we get:
$F_2+F_4+F_6+\dots+F_{2n}=F_{2n+2}-F_{2n}-1$
By definition of Fibonacci numbers, $F_{2n+2}-F_{2n}=F_{2n+1}$. Thus,
$F_2+F_4+F_6+\dots+F_{2n}=F_{2n+1}-1$
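Both parity identities can be checked the same way. The sketch below is illustrative only; it verifies Eq. (2) and the even-index identity for a small range of $n$.

```python
# Spot-check: F_1 + F_3 + ... + F_{2n-1} = F_{2n}
#         and F_2 + F_4 + ... + F_{2n}   = F_{2n+1} - 1.

F = [0, 1, 1]                      # F[k] is the k-th Fibonacci number
while len(F) < 64:
    F.append(F[-1] + F[-2])

for n in range(1, 30):
    odd_sum = sum(F[2 * k - 1] for k in range(1, n + 1))
    even_sum = sum(F[2 * k] for k in range(1, n + 1))
    assert odd_sum == F[2 * n]
    assert even_sum == F[2 * n + 1] - 1
print("both identities hold for n = 1, ..., 29")
```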
#### Sum of the squares of Fibonacci numbers
The sum of the squares of the first $n$ Fibonacci numbers is the product of the $n^{\rm th}$ and the ${(n+1)}^{\rm th}$ Fibonacci numbers.
Image 12
$\sum_{i=1}^n {F_i}^2=F_n F_{n+1}$
This identity can be proved by studying the area of the rectangles in Image 12.
The rectangle is called a Fibonacci rectangle, which is further described in Fibonacci Numbers in Nature. The numbers inside each square indicate the length of one side of the square. Notice that the lengths of the squares are all Fibonacci numbers.
Any rectangle in the picture is composed of squares whose side lengths are Fibonacci numbers. In fact, each rectangle is composed of every square with side lengths $F_1$ through $F_n$, where the value of $n$ depends on the rectangle. Moreover, the dimensions of this rectangle are $F_n$ by $F_{n+1}$.
With this information in mind, we can prove the identity $\sum_{i=1}^n {F_i}^2=F_n F_{n+1}$ by computing the area of the rectangle in two different ways. The first way is to add the areas of the individual squares. That is, the area of the rectangle will be :
${F_1}^2+{F_2}^2+{F_3}^2+\dots+{F_n}^2$.
Another way of computing the area is by multiplying the width by the height. Using this method, the area will be :
$F_n F_{n+1}$.
Because we are computing the area of the same rectangle, the two methods must give the same result. Thus,
${F_1}^2+{F_2}^2+{F_3}^2+\dots+{F_n}^2=F_n F_{n+1}$.
For example, for the red rectangle, the width is $5$ and the height is $8$. Since $5$ is the $5^{\rm th}$ Fibonacci number and $8$ is the $6^{\rm th}$ Fibonacci number, let
$n=5$.
The area of the rectangle is :
$1^2+1^2+2^2+3^2+5^2={F_1}^2+{F_2}^2+{F_3}^2+{F_4}^2+{F_5}^2=\sum_{i=1}^5 {F_i}^2=40$,
or
$5 \times 8 = F_5 F_{5+1} = F_5 F_6 = 40$.
Thus,
$\sum_{i=1}^5 {F_i}^2=F_5 F_{5+1}$.
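The area argument can likewise be confirmed numerically; the following sketch is only an illustrative check for small $n$.

```python
# Spot-check: F_1^2 + F_2^2 + ... + F_n^2 = F_n * F_{n+1}.

F = [0, 1, 1]                      # F[k] is the k-th Fibonacci number
while len(F) < 40:
    F.append(F[-1] + F[-2])

for n in range(1, 38):
    assert sum(F[i] ** 2 for i in range(1, n + 1)) == F[n] * F[n + 1]
print("sum-of-squares identity holds for n = 1, ..., 37")
```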
### Properties
#### Greatest Common Divisor
The greatest common divisor of two Fibonacci numbers is the Fibonacci number whose index is the greatest common divisor of the indices of the original two Fibonacci numbers. In other words,
$\gcd(F_n,F_m) = F_{\gcd(n,m)}$.
For instance,
$\gcd(F_9,F_6)=\gcd(34,8)=2=F_3=F_{\gcd(9,6)}$.
In a special case where $F_n$ and $F_m$ are consecutive Fibonacci numbers, this property says that
$\gcd(F_n, F_{n+1})=F_{\gcd(n,n+1)}=F_1=1$.
That is, two consecutive Fibonacci numbers, $F_n$ and $F_{n+1}$, are always relatively prime.
The proof for this special case is as follows.
Assume that $F_n$ and $F_{n+1}$ have some positive integer $k$ as a common divisor. Then $F_{n+1}$ and $F_n$ are each multiples of $k$:
Eq. (3) $F_{n+1}=ka$
Eq. (4) $F_n=kb$
Subtracting Eq. (4) from Eq. (3), we get :
$F_{n-1}=k(a-b)$,
which means that if two consecutive Fibonacci numbers, $F_n$ and $F_{n+1}$, have $k$ as their common divisor, then the previous Fibonacci number, $F_{n-1}$ must also be a multiple of $k$. In that case, $F_{n-1}$ and $F_n$, which are also two consecutive Fibonacci numbers, will have $k$ as a common divisor. Then, it follows that $F_{n-2}$ must also be a multiple of $k$. Repeating the subtraction of consecutive Fibonacci numbers, we can conclude that the very first Fibonacci number, $F_1 = 1$ must also be a multiple of $k$. So $k=1$, and the only common divisor between two consecutive Fibonacci numbers is 1. Thus, two consecutive Fibonacci numbers are relatively prime.
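The general divisibility property is easy to test with Python's built-in `math.gcd`; the sketch below only checks a finite range and is meant as an illustration, not a proof.

```python
import math

# Check gcd(F_n, F_m) == F_{gcd(n, m)} for small n and m.
F = [0, 1, 1]                      # F[k] is the k-th Fibonacci number
while len(F) < 40:
    F.append(F[-1] + F[-2])

for n in range(1, 40):
    for m in range(1, 40):
        assert math.gcd(F[n], F[m]) == F[math.gcd(n, m)]
print("gcd property holds for 1 <= n, m < 40")
```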
#### Finite Difference of Fibonacci Numbers
One of the interesting properties of Fibonacci numbers is that the sequence of differences between consecutive Fibonacci numbers also forms a Fibonacci sequence. For more information about difference tables, see Difference Tables.
Because the first sequence of differences of the Fibonacci sequence is again a Fibonacci sequence, so is the second sequence of differences. The Fibonacci sequence is thus reproduced in every sequence of differences.
We can see that the sequence of differences is composed of Fibonacci numbers by looking at the definition of Fibonacci numbers :
$F_n = F_{n-1} + F_{n-2}$.
The difference between two consecutive Fibonacci numbers is :
$F_n - F_{n-1} = F_{n-2}$.
Thus, the difference between two consecutive Fibonacci numbers, $F_n$ and $F_{n-1}$, is equal to the value of the previous Fibonacci number, $F_{n-2}$.
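The difference rows can be generated mechanically. The short Python sketch below is illustrative; each row of differences reproduces the Fibonacci sequence, shifted toward smaller (eventually negative-index) terms.

```python
# Successive difference rows of the Fibonacci sequence.

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
row = fib
for level in range(4):
    print(f"level {level}: {row}")
    row = [b - a for a, b in zip(row, row[1:])]   # pairwise differences
# level 1 begins 0, 1, 1, 2, 3, ... -- the Fibonacci sequence shifted by one
```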
## Golden Ratio
Image 13
The golden ratio appears in paintings, architecture, and various forms of nature. Two numbers are said to be in the golden ratio if the ratio of the smaller number to the larger number equals the ratio of the larger number to the sum of the two numbers. In Image 13, the widths of A and B are in the golden ratio if $a : b = (a+b) : a$.
The golden ratio is represented by the lowercase Greek letter phi, $\varphi$, and its exact value is
$\varphi=\frac{1 + \sqrt{5}}{2} \approx 1.61803\,39887\dots\,$
This value can be found from the definition of the golden ratio. To see an algebraic derivation of the exact value of the golden ratio, go to Golden Ratio : An Algebraic Representation.
An interesting fact about the golden ratio is that the ratio of two consecutive Fibonacci numbers approaches the golden ratio as the numbers get larger, as shown in the table below.
| $n$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $\frac{F_{n+1}}{F_n}$ | $\frac{1}{1}=1$ | $\frac{2}{1}=2$ | $\frac{3}{2}=1.5$ | $\frac{5}{3}\approx 1.66667$ | $\frac{8}{5}=1.6$ | $\frac{13}{8}=1.625$ | $\frac{21}{13}\approx 1.61538$ | $\frac{34}{21}\approx 1.61905$ | $\frac{55}{34}\approx 1.61765$ | $\frac{89}{55}\approx 1.61818$ |
Let's assume that the ratio of two consecutive Fibonacci numbers has a limit and verify that this limit is, in fact, the golden ratio. Let $r_n$ denote the ratio of two consecutive Fibonacci numbers, that is,
$r_n=\frac{F_{n+1}}{F_n}$.
Then,
$r_{n-1}=\frac{F_n}{F_{n-1}}$.
$r_n$ and $r_{n-1}$ are related by :
$r_n=\frac{F_{n+1}}{F_n}=\frac{F_n+F_{n-1}}{F_n}=1+\frac{F_{n-1}}{F_n}=1+\cfrac{1}{{F_n}/{F_{n-1}}}=1+\frac{1}{r_{n-1}}$.
Assuming that the ratio $r_n$ has a limit, let $r$ be that limit:
$\lim_{n \to \infty} r_n=\lim_{n \to \infty}\frac{F_{n+1}}{F_n}=r$.
Then,
$\lim_{n \to \infty} r_n = \lim_{n \to \infty} r_{n-1} = r$.
Taking the limit on both sides of $r_n=1+\frac{1}{r_{n-1}}$, we get :
$r=1+\frac{1}{r}$
Multiplying both sides by $r$, we get
Eq. (5) ${r}^2=r+1$
which can be written as:
$r^2 - r - 1 = 0$.
Applying the quadratic formula, we get $r = \frac{1 \pm \sqrt{5}} {2}$.
Because the ratio has to be a positive value,
$r=\frac{1 + \sqrt{5}}{2}$
which is the golden ratio. Thus, if $r_n$ has a limit, then this limit is the golden ratio. That is, as we go farther out in the sequence, the ratio of two consecutive Fibonacci numbers approaches the golden ratio. In fact, it can be proved that $r_n$ does have a limit; one way is to use Binet's formula in the next section. For a different proof using infinite continued fraction go to Continued Fraction Representation and Fibonacci Sequences
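The convergence is easy to watch numerically. The sketch below is an illustration only; it prints the ratio $F_{n+1}/F_n$ and its distance from $(1+\sqrt5)/2$.

```python
import math

# Watch F_{n+1} / F_n approach the golden ratio.
phi = (1 + math.sqrt(5)) / 2
a, b = 1, 1                        # F_1, F_2
for n in range(1, 21):
    ratio = b / a                  # F_{n+1} / F_n
    print(f"n = {n:2d}   ratio = {ratio:.10f}   error = {abs(ratio - phi):.2e}")
    a, b = b, a + b
```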
Image 14
Many people find the golden ratio in various parts of nature, art, architecture, and even music. However, there are some people who criticize this viewpoint. They claim that many mathematicians are wishfully trying to make a connection between the golden ratio and other parts of the world even though there is no real connection.
One example of the golden ratio that people have found in nature is the human body. According to many, an ideal human body has proportions that exhibit the golden ratio, such as:
• distance between the foot and navel : distance between the navel and the head
• distance between the finger tip and the elbow : distance between the wrist and the elbow
• distance between the shoulder line and top of the head : length of the head.
Leonardo da Vinci's drawing Vitruvian Man, shown in Image 14, emphasizes the proportions of the human body. The drawing shows the proportions of an ideal human body as studied by the Roman architect Vitruvius in his book De Architectura. In the drawing, a man is simultaneously inscribed in a circle and a square. The ratio of the side of the square to the radius of the circle reflects the golden ratio, although the drawing deviates from the true value of the golden ratio by about 1.7 percent. The proportions of the man's body are also said to exhibit the golden ratio.
Although people later found the golden ratio in the drawing, there is no evidence as to whether Leonardo da Vinci intended to show the golden ratio in it. For more information about the golden ratio, go to Golden Ratio.
## Binet's Formula for Fibonacci Numbers
Binet's Formula gives a formula for the $n^{\rm th}$ Fibonacci number as :
$F_n=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$,
where $\varphi$ and $\bar{\varphi}$ are the two roots of Eq. (5), that is,
$\varphi=\frac{1 + \sqrt{5}}{2},\quad \bar{\varphi}=\frac{1-\sqrt{5}}{2}$.
Here is one way of verifying Binet's formula through mathematical induction, but it gives no clue about how to discover the formula. Let
$F_n=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$
as defined above. We want to verify Binet's formula by showing that the defining properties of the Fibonacci numbers hold when we use Binet's formula. First, we will show the inductive step, that:
$F_n=F_{n-1}+F_{n-2}\quad\hbox{ for } n>2$
and then we will show the base case that:
$F_1=1,\quad F_2=1$.
First, according to Binet's formula,
$F_{n-1}+F_{n-2} = \frac{{\varphi}^{n-1}-{\bar{\varphi}}^{n-1}}{\sqrt5}+ \frac{{\varphi}^{n-2}-{\bar{\varphi}}^{n-2}}{\sqrt5}$
$=\frac{({\varphi}^{n-1}+{\varphi}^{n-2})-({\bar{\varphi}}^{n-1}+{\bar{\varphi}}^{n-2})}{\sqrt5}$
$=\frac{({\varphi}+1){\varphi}^{n-2}-(\bar{\varphi}+1){\bar{\varphi}}^{n-2}}{\sqrt5}$.
Because $\varphi$ and $\bar{\varphi}$ are the two roots of Eq. (5), the above equation becomes :
$F_{n-1}+F_{n-2}=\frac{{{\varphi}^2}{\varphi}^{n-2}-{{\bar{\varphi}}^2}{\bar{\varphi}}^{n-2}}{\sqrt5}$
$=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$
$=F_n$, as desired.
Now, because $\varphi=\frac{1 + \sqrt{5}}{2},\quad \bar{\varphi}=\frac{1-\sqrt{5}}{2}$,
$F_1=\frac{\varphi-\bar{\varphi}}{\sqrt5}=\frac{1}{\sqrt5}\left (\frac{1 + \sqrt{5}}{2}-\frac{1-\sqrt{5}}{2}\right)=\frac{1}{\sqrt5} {\sqrt5} = 1$
$F_2=\frac{{\varphi}^2-{\bar{\varphi}}^2}{\sqrt5}=\frac{(\varphi+\bar{\varphi})(\varphi-\bar{\varphi})}{\sqrt5}=\frac{1\cdot\sqrt5}{\sqrt5}=1$.
Binet's formula is therefore a valid formula for the Fibonacci numbers.
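Binet's formula can also be compared against the recurrence numerically. The sketch below is illustrative; it uses floating-point arithmetic, which is accurate enough for moderate $n$ that rounding recovers the exact integer values.

```python
import math

# Compare Binet's closed form with the iterative recurrence.
sqrt5 = math.sqrt(5)
phi, phibar = (1 + sqrt5) / 2, (1 - sqrt5) / 2

def fib_binet(n: int) -> int:
    return round((phi ** n - phibar ** n) / sqrt5)

a, b = 1, 1                        # F_1, F_2
for n in range(1, 41):
    assert fib_binet(n) == a, (n, fib_binet(n), a)
    a, b = b, a + b
print("Binet's formula matches the recurrence for n = 1, ..., 40")
```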
## Fibonacci Numbers and Fractals
### Fibonacci Numbers and the Mandelbrot Set
Image 15
The Mandelbrot set is a set of points in the complex plane whose boundary forms a fractal. It is the set of all complex numbers $c$ for which the sequence
Eq. (6) $z_{n+1}=(z_n)^2+c \quad \hbox{ for } n=0,1,2,\dots$
does not go to infinity, starting with $z_0=0$.
For instance, $c=0$ is included in the Mandelbrot set because
$z_1=(z_0)^2+c=0^2+0 = 0$
$z_2=(z_1)^2+c=0^2+0=0$
$\dots$
$z_n=0^2+0=0$ for any $n$.
Thus, the sequence defined by $c=0$ is bounded and $0$ is included in the Mandelbrot set.
On the other hand, when we test $c=1$,
Image 16
$z_1=(z_0)^2+c=1$
$z_2=(z_1)^2+c=2$
$z_3=(z_2)^2+c=5$
$\dots$
The terms of this sequence will increase to infinity. Thus, $c=1$ is not included in the Mandelbrot set.
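Membership in the Mandelbrot set can be estimated with a standard escape-time test that iterates Eq. (6). The sketch below is an illustration; the iteration cap of 200 is an arbitrary choice, and the escape radius 2 is the usual bound beyond which the sequence is guaranteed to diverge.

```python
# Escape-time test for membership in the Mandelbrot set.

def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c              # Eq. (6): z_{n+1} = z_n^2 + c, starting from z_0 = 0
        if abs(z) > 2:
            return False           # escaped, so c is not in the set
    return True                    # never escaped within max_iter iterations

print(in_mandelbrot(0))            # True  -- matches the c = 0 example above
print(in_mandelbrot(1))            # False -- matches the c = 1 example above
```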
Image 17
People have been drawn to study the Mandelbrot set because of its aesthetic beauty. The Mandelbrot set is known as one of the most beautiful and complicated illustrations of a fractal, and it surprises many people that a formula as simple as Eq. (6) can generate such a complex structure. The Fibonacci sequence is related to the Mandelbrot set through the periods of the main cardioid and some of the large primary bulbs. Each bulb has many antennas, and the largest antenna is called the main antenna. The number of spokes in the main antenna is the period of the bulb.
The period of the main cardioid is considered to be 1. In Image 17, the main antenna has five spokes, including the one connecting the primary bulb and the junction point of the antenna. The period of this bulb is five.
Now, we will consider the periods of the largest primary bulbs that are attached to the main cardioid between two larger bulbs. In Image 18, the largest bulb between the bulb of period 1 and the bulb of period 2 is the bulb of period 3, found by looking for the largest bulb along that stretch of the main cardioid's boundary. The largest bulb between the bulbs of period 2 and period 3 is the bulb of period 5, and the one between the bulbs of period 3 and period 5 is the bulb of period 8. The sequence generated in this way proceeds as 1, 2, 3, 5, 8, 13, ..., following the pattern of the Fibonacci sequence.
Image 18
# References
Maurer, Stephen B & Ralston, Anthony. (2004) Discrete Algorithmic Mathematics. Massachusetts : A K Peters.
Posamentier, Alfred S & Lehmann Ingmar. (2007) The Fabulous Fibonacci Numbers. New York : Prometheus Books.
Vorob'ev, N. N. (1961) Fibonacci Numbers. New York : Blaisdell Publishing Company.
Hoggatt, Verner E., Jr. (1969) Fibonacci and Lucas Numbers. Boston : Houghton Mifflin Company.
Knott, Ron. (n.d.). The Fibonacci Numbers and Golden Section in Nature. Retrieved from http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibnat.html
Fibonacci Numbers in Nature & the Golden Ratio. (n.d.). In World-Mysteries.com. Retrieved from http://www.world-mysteries.com/sci_17.htm
## Things to add (possible ideas for the future)
• Fibonacci numbers and Pascal's triangle
• A helper page for recursively defined sequence
• A section describing the Fibonacci numbers with negative subscripts. These appear in the Finite Difference of Fibonacci Numbers section.
• A derivation of the exact value of the golden ratio. The derivation is redundant with the information in the golden ratio page.