http://tex.stackexchange.com/questions/26434/where-is-the-matrix-command/26435
# Where is the \matrix command?

From what I can see, \matrix was a TeX command, but I cannot seem to find documentation on it. It works in MathJax, so I wonder if it can be used in LaTeX. I.e. this is valid MathJax: $$\left[ \matrix { newx.x&newy.x&newz.x \\ newx.y&newy.y&newz.y \\ newx.z&newy.z&newz.z } \right]$$ As can be seen on math.stackexchange. In my LaTeX editor (which uses MiKTeX underneath), I have to use \begin{matrix} .. \end{matrix}, so I'm wondering what happened to the \matrix command.

- \usepackage{amsmath} – Seamus Aug 24 '11 at 20:35
I would upvote purely for the doom avatar. But I've reached the limit cap. Wait, is that doom or wolfenstein? – Seamus Aug 24 '11 at 20:44
@Seamus: doom and I upped for you :) – percusse Aug 24 '11 at 20:51
Doom is 18 years old: 1993! OH wow. Makes me feel old... – Seamus Aug 24 '11 at 21:32
MathJax is not a reliable guide as to what is available in "standard" LaTeX. It "loads" several packages (or, rather, simulates loading several packages) that are considered "standard" (in that many mathematicians use them). – Andrew Stacey Aug 25 '11 at 7:23

In addition to some already provided, here are a number of ways of creating matrices in LaTeX. Using
• an array structure to place items in a rigid row/column environment;
• \begin{matrix}...\end{matrix} from the amsmath package, which allows you to specify the matrix delimiters yourself (using \left and \right);
• pmatrix, bmatrix, Bmatrix, vmatrix and Vmatrix variations of the above (also from amsmath) to fix the delimiters to ( ), [ ], { }, | |, and || ||, respectively;
• \bordermatrix{...}, which is a TeX command and will specify row and column indices;
• \kbordermatrix{...}, which is similar to the above, but provides more flexibility;
• the blkarray package and the associated blockarray and block environments to construct your matrix. 
Here is an example file showing some of the different styles: \documentclass{article} \usepackage{amsmath}% http://ctan.org/pkg/amsmath \usepackage{kbordermatrix}% http://www.hss.caltech.edu/~kcb/TeX/kbordermatrix.sty \usepackage{blkarray}% http://ctan.org/pkg/blkarray \begin{document} $\begin{array}{lc} \verb|array| & \left(\begin{array}{@{}ccc@{}} a & b & c \\ d & e & f \\ g & h & i \end{array}\right) \\[15pt] \verb|matrix| & \left(\begin{matrix} a & b & c \\ d & e & f \\ g & h & i \end{matrix}\right) \\[15pt] \verb|pmatrix| & \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \\[15pt] \verb|bmatrix| & \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \\[15pt] \verb|Bmatrix| & \begin{Bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{Bmatrix} \\[15pt] \verb|vmatrix| & \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} \\[15pt] \verb|Vmatrix| & \begin{Vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{Vmatrix} \\[15pt] \verb|bordermatrix| & \bordermatrix{\text{corner}&c_1&c_2&\ldots &c_n\cr r_1&a_{11} & 0 & \ldots & a_{1n}\cr r_2& 0 & a_{22} & \ldots & a_{2n}\cr r_3& \vdots & \vdots & \ddots & \vdots\cr r_4& 0 & 0 &\ldots & a_{nn}} \\[15pt] \verb|kbordermatrix| & \kbordermatrix{\text{corner}&c_1&c_2&\ldots &c_n\cr r_1&a_{11} & 0 & \ldots & a_{1n}\cr r_2& 0 & a_{22} & \ldots & a_{2n}\cr r_3& \vdots & \vdots & \ddots & \vdots\cr r_4& 0 & 0 &\ldots & a_{nn}} \\[25pt] \verb|blkarray| & \begin{blockarray}{[cc]c\}} 11 & 22 & 33 \\ 1 & 2 & 3 \\ \begin{block}{(ll)l\}} 11 & 22 & 33 \\ 1 & 2 & 3 \\ \end{block} 1 & 2 & 3 \end{blockarray} \end{array}$ \end{document} - This is the sort of thorough answer we need more of on this site! well done! – Seamus Aug 24 '11 at 21:24 @Seamus: Thanks! – Werner Aug 24 '11 at 21:27 I second @Seamus. Great answer, Werner! There should be a Welcome to the Matrix badge. =) – Paulo Cereda Aug 24 '11 at 23:21 There is not spoon... I mean badge. 
:-| – Werner Aug 24 '11 at 23:25
"Unfortunately, no one can tell you what the matrix is. Except for Werner, he seems to do it pretty well." – Niel de Beaudrap Aug 25 '11 at 0:04

You shouldn't use \matrix{...}; instead, \begin{matrix} and \end{matrix} are provided by the amsmath package.

- It's strongly recommended to use amsmath's matrix features. However, answering your question: you can find the definition of \matrix in plain.tex: \def\matrix#1{\null\,\vcenter{\normalbaselines\m@th \ialign{\hfil$##$\hfil&&\quad\hfil$##$\hfil\crcr \mathstrut\crcr\noalign{\kern-\baselineskip} #1\crcr\mathstrut\crcr\noalign{\kern-\baselineskip}}}\,} Related: \def\pmatrix#1{\left(\matrix{#1}\right)} You can find plain.tex by typing at the command prompt kpsewhich plain.tex which gives, on a current standard Windows TeX Live installation, for example c:/texlive/2011/texmf-dist/tex/plain/base/plain.tex \matrix is documented in The TeXbook and various other TeX documentation. LaTeX documentation is mostly about the more modern matrix environment of amsmath.

- So, why doesn't \matrix work in my LaTeX implementation, if it's part of TeX? – bobobobo Aug 24 '11 at 23:53
@bobobobo: MathJax also reports the use of environments, which includes amsmath's \begin{matrix}...\end{matrix}. Not sure what happened to \matrix{...}. – Werner Aug 24 '11 at 23:59
@bobobobo: The quick answer is that "plain TeX" is not plain TeX. "plain TeX" is an extension of TeX much as LaTeX is, and LaTeX is not built on top of "plain TeX". – Andrew Stacey Aug 25 '11 at 7:22
https://learning.subwiki.org/w/index.php?title=Understanding_mathematical_definitions&diff=next&oldid=775
# Understanding mathematical definitions

Understanding mathematical definitions refers to the process of understanding the meaning of definitions in mathematics.

## List of steps

Understanding a definition in mathematics is a complicated and laborious process. The following list summarizes some of the things one might do when trying to understand a new definition (for each step: when it applies, what to do, why, and examples).

- **Type-checking and parsing.** Parse each expression in the definition and understand its type. It's easy to become confused when you don't know the meanings of expressions used in a definition, so the idea is to avoid this kind of error. [1]
- **Checking assumptions of objects introduced.** Remove or alter each assumption of the objects that have been introduced in the definition to see why they are necessary. Generally you want definitions to be "expansive" in the sense of applying to many different objects, but each assumption you introduce whittles down the number of objects the definition applies to. In other words, there is tension between (1) trying to have expansive definitions, and (2) adding in assumptions/restrictions in a definition. So you want to make sure each assumption pays its rent, so that you don't make a definition narrower than it needs to be. Example: in the definition of convergence of a function at a point, Tao requires that $x_0$ be adherent to $E$; he then says that it is not worthwhile to define convergence when $x_0$ is not adherent to $E$. (The idea is for the reader to make sure they understand why this assumption is good to have.)
- **Coming up with examples.** Come up with some examples of objects that fit the definition, emphasizing edge cases. Examples help to train your intuition of what the object "looks like". For monotone increasing functions, an edge case would be the constant function.
- **Coming up with counterexamples.** As with coming up with examples, the idea is to train your intuition. But with counterexamples, you do it by making sure your conception of what the object "looks like" isn't too inclusive. [2]
- **Writing out a wrong version of the definition.** See this post by Tim Gowers (search "wrong versions" on the page).
- **Understanding the kind of definition.** Generally a definition will do one of the following things: (1) it will construct a brand new type of object (e.g. the definition of a function); (2) it will take an existing type of object and create a predicate to describe some subclass of that type of object (e.g. take the integers and create the predicate even); (3) it will define an operation on some class of objects (e.g. take integers and define the operation of addition).
- **Checking well-definedness** (if the definition defines an operation). Example: checking that addition on the integers is well-defined.
- **Checking consistency with existing definitions** (if the definition supersedes an older definition or clobbers a previously defined notation). Examples: addition on the reals after addition on the rationals has been defined. Also: for any function $f:X\to Y$ and $U\subset Y$, the inverse image $f^{-1}(U)$ is defined; on the other hand, if a function $f : X\to Y$ is a bijection, then $f^{-1} : Y \to X$ is a function, so its forward image $f^{-1}(U)$ is defined given any $U\subset Y$. We must check that these two are the same set (or else have some way to disambiguate which one we mean). (This example is mentioned in both Tao's Analysis I and Munkres's Topology.)
- **Disambiguating similar-seeming concepts.** The idea is that sometimes two different definitions "step on" the same intuitive concept that someone has. (Example from Tao) "Disjoint" and "distinct" are both terms that apply to two sets; they even sound similar. Are they the same concept? Does one imply the other? It turns out the answer is "no" to both: $\{1,2\}$ and $\{2,3\}$ are distinct but not disjoint, and $\emptyset$ and $\emptyset$ are disjoint but not distinct. Other examples: partition of a set vs partition of an interval; in metric spaces, the difference between bounded and totally bounded (they are not the same concept in general, but one implies the other, so one should prove an implication and find a counterexample; however, in certain metric spaces, e.g. Euclidean spaces, the two concepts are identical, so one should prove the equivalence); sequentially compact vs covering compact (equivalent in metric spaces, but not for more general topological spaces); Cauchy sequence vs convergent sequence (equivalent in complete metric spaces, but not equivalent in general, although convergent implies Cauchy in general; however, even incomplete metric spaces can be completed, so the two ideas sort of end up blurring together).
- **Googling around/reading alternative texts.** Sometimes a definition is confusingly written (in one textbook) or the concept itself is confusing (e.g. because it is too abstract). It can help to look around for alternative expositions, especially ones that try to explain the intuitions/historical motivations of the definition. See also learning from multiple sources. In mathematical logic, the terminology for formal languages is a mess: some books define a structure as having a domain and an interpretation (so structure = (domain, interpretation)), while others define the same thing as interpretation = (domain, denotations), while still others define it as structure = (domain, signature, interpretation). The result is that in order not to be confused when e.g. reading an article online, one must become familiar with a range of definitions/terminology for the same concepts and be able to quickly adjust to the intended one in a given context. To give another example from mathematical logic, there is the expresses vs captures distinction, but different books use terminology like arithmetically defines vs defines, represents vs expresses, etc. So again things are a mess.
- **Drawing a picture.** Examples: Pugh's Real Mathematical Analysis, Needham's Visual Complex Analysis.
- **Chunking/processing level by level** (if a definition involves multiple layers of quantifiers). See Tao's definitions for $\varepsilon$-close, eventually $\varepsilon$-close, $\varepsilon$-adherent, etc.
- **Asking some stock questions for a given field.** In computability theory, you should always be asking "Is this function total or partial?" or else you risk becoming confused. In linear algebra (when done in a coordinate-free way) one should always ask "Is this vector space finite-dimensional?" I think some other fields also have this kind of question that you should always be asking of objects.

## Ways to speed things up

There are several ways to speed up or skip steps in the above list, so that one doesn't spend too much time on definitions.

### Lazy understanding

One idea is to skip trying to really grok a definition at first, and see what bad things might happen. The idea is to then only come back to the definition when one needs details from it. This is similar to lazy evaluation in programming.

### Building off similar definitions

If a similar definition has just been defined (and one has taken the time to understand it), the new definition will not need as much time to understand (one only needs to focus on the differences between the two definitions). For instance, after one has understood set union, one can relatively quickly understand set intersection.

### Relying on experience and intuition

Eventually, after one has studied a lot of mathematics, understanding definitions becomes more automatic. One can gain an intuition of which steps are important for a particular definition, or when to spend some time and when to move quickly.
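The disjoint-versus-distinct example above can be checked mechanically. A minimal Python sketch (added here for illustration; the variable names are arbitrary):

```python
# "Distinct" means the sets are not equal; "disjoint" means they share no elements.
a, b = {1, 2}, {2, 3}

distinct = (a != b)          # True: the sets differ
disjoint = a.isdisjoint(b)   # False: both contain the element 2

# The empty set with itself: disjoint but not distinct.
c, d = set(), set()
empty_distinct = (c != d)        # False: the two empty sets are equal
empty_disjoint = c.isdisjoint(d) # True: they have no common element
```

So neither property implies the other, exactly as the article's counterexamples show.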
https://support.sas.com/documentation/cdl/en/proc/65145/HTML/default/n1ou6vf87l14t1n165z15qbzp4za.htm
# FONTPATH Statement

Specifies one or more directories to be searched for valid font files to process.

## Syntax

FONTPATH 'directory' <…'directory'>;

### Required Argument

directory
specifies a directory to search. All files that are recognized as valid font files are processed. Each directory must be enclosed in quotation marks. If you specify more than one directory, then you must separate the directories with a space.

Operating Environment Information: In the Windows operating environment only, you can locate the fonts folder if you do not know where the folder resides. In addition, you can register system fonts without having to know where the fonts are located. To find this information, submit the following program:

```
proc fontreg;
fontpath "%sysget(systemroot)\fonts";
run;
```

The %SYSGET macro retrieves the value of the Windows environment variable SYSTEMROOT, and resolves to the location of your system directory. The fonts subdirectory is located one level below the system directory.
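As the syntax above notes, multiple directories are given in one FONTPATH statement, each in its own pair of quotation marks and separated by spaces. A minimal sketch (the directory paths here are hypothetical, not from the documentation):

```sas
proc fontreg;
   /* Register fonts from two directories; each path quoted, space-separated. */
   fontpath 'C:\Windows\Fonts' 'C:\MyFonts';
run;
```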
https://phys.libretexts.org/Bookshelves/Quantum_Mechanics/Book%3A_Introductory_Quantum_Mechanics_(Fitzpatrick)/8%3A_Central_Potentials/8.2%3A_Infinite_Spherical_Potential_Well
8.2: Infinite Spherical Potential Well

Consider a particle of mass $$m$$ and energy $$E>0$$ moving in the following simple central potential: $V(r) = \left\{\begin{array}{lcl} 0&\,&\mbox{for }0\leq r\leq a\\[0.5ex] \infty&&\mbox{otherwise} \end{array}\right..$ Clearly, the wavefunction $$\psi$$ is only non-zero in the region $$0\leq r \leq a$$. Within this region, it is subject to the physical boundary conditions that it be well behaved (i.e., square-integrable) at $$r=0$$, and that it be zero at $$r=a$$. (See Section [s5.2].) Writing the wavefunction in the standard form $\label{e9.27} \psi(r,\theta,\phi) = R_{n,l}(r)\,Y_{l,m}(\theta,\phi),$ we deduce (see the previous section) that the radial function $$R_{n,l}(r)$$ satisfies $\frac{d^{\,2} R_{n,l}}{dr^{\,2}} + \frac{2}{r}\frac{dR_{n,l}}{dr} + \left[k^{\,2} - \frac{l\,(l+1)}{r^{\,2}}\right] R_{n,l} = 0$ in the region $$0\leq r \leq a$$, where $\label{e9.29} k^{\,2} = \frac{2\,m\,E}{\hbar^{\,2}}.$ Defining the scaled radial variable $$z=k\,r$$, the previous differential equation can be transformed into the standard form $\frac{d^{\,2} R_{n,l}}{dz^{\,2}} + \frac{2}{z}\frac{dR_{n,l}}{dz} + \left[1 - \frac{l\,(l+1 )}{z^{\,2}}\right] R_{n,l} = 0.$ The two independent solutions to this well-known second-order differential equation are called spherical Bessel functions, and can be written \begin{aligned} j_l(z)&= z^{\,l}\left(-\frac{1}{z}\frac{d}{dz}\right)^l\left(\frac{\sin z}{z}\right),\\[0.5ex] y_l(z)&= -z^{\,l}\left(-\frac{1}{z}\frac{d}{dz}\right)^l\left(\frac{\cos z}{z}\right).\end{aligned} Thus, the first few spherical Bessel functions take the form \begin{aligned} j_0(z) &= \frac{\sin z}{z},\\[0.5ex] j_1(z)&=\frac{\sin z}{z^{\,2}} - \frac{\cos z}{z},\\[0.5ex] y_0(z) &= - \frac{\cos z}{z},\\[0.5ex] y_1(z) &= - \frac{\cos z}{z^{\,2}} - \frac{\sin z}{z}.\end{aligned} These functions are also plotted in Figure [sph]. 
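As a quick consistency check (added here, not part of the original text), the quoted form of $$j_1(z)$$ follows from the general expression in a single differentiation:

```latex
j_1(z) = z\left(-\frac{1}{z}\,\frac{d}{dz}\right)\frac{\sin z}{z}
       = -\frac{d}{dz}\left(\frac{\sin z}{z}\right)
       = -\left(\frac{\cos z}{z}-\frac{\sin z}{z^{\,2}}\right)
       = \frac{\sin z}{z^{\,2}}-\frac{\cos z}{z}.
```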
It can be seen that the spherical Bessel functions are oscillatory in nature, passing through zero many times. However, the $$y_l(z)$$ functions are badly behaved (i.e., they are not square integrable) at $$z=0$$, whereas the $$j_l(z)$$ functions are well behaved everywhere. It follows from our boundary condition at $$r=0$$ that the $$y_l(z)$$ are unphysical, and that the radial wavefunction $$R_{n,l}(r)$$ is thus proportional to $$j_l(k\,r)$$ only. In order to satisfy the boundary condition at $$r=a$$ [i.e., $$R_{n,l}(a)=0$$], the value of $$k$$ must be chosen such that $$z=k\,a$$ corresponds to one of the zeros of $$j_l(z)$$. Let us denote the $$n$$th zero of $$j_l(z)$$ as $$z_{n,l}$$. It follows that $k\,a = z_{n,l},$ for $$n=1,2,3,\ldots$$. Hence, from Equation ([e9.29]), the allowed energy levels are $\label{e9.39} E_{n,l} = z_{n,l}^{\,2}\,\frac{\hbar^{\,2}}{2\,m\,a^{\,2}}.$ The first few values of $$z_{n,l}$$ are listed in Table [tsph]. It can be seen that $$z_{n,l}$$ is an increasing function of both $$n$$ and $$l$$.

The first few zeros of the spherical Bessel function $$j_l(z)$$:

          $$n=1$$    $$n=2$$    $$n=3$$    $$n=4$$
$$l=0$$    3.142      6.283      9.425     12.566
$$l=1$$    4.493      7.725     10.904     14.066
$$l=2$$    5.763      9.095     12.323     15.515
$$l=3$$    6.988     10.417     13.698     16.924
$$l=4$$    8.183     11.705     15.040     18.301

We are now in a position to interpret the three quantum numbers—$$n$$, $$l$$, and $$m$$—which determine the form of the wavefunction specified in Equation ([e9.27]). As is clear from Chapter [sorb], the azimuthal quantum number $$m$$ determines the number of nodes in the wavefunction as the azimuthal angle $$\phi$$ varies between 0 and $$2\pi$$. Thus, $$m=0$$ corresponds to no nodes, $$m=1$$ to a single node, $$m=2$$ to two nodes, et cetera. Likewise, the polar quantum number $$l$$ determines the number of nodes in the wavefunction as the polar angle $$\theta$$ varies between 0 and $$\pi$$. 
Again, $$l=0$$ corresponds to no nodes, $$l=1$$ to a single node, et cetera. Finally, the radial quantum number $$n$$ determines the number of nodes in the wavefunction as the radial variable $$r$$ varies between 0 and $$a$$ (not counting any nodes at $$r=0$$ or $$r=a$$). Thus, $$n=1$$ corresponds to no nodes, $$n=2$$ to a single node, $$n=3$$ to two nodes, et cetera. Note that, for the case of an infinite potential well, the only restrictions on the values that the various quantum numbers can take are that $$n$$ must be a positive integer, $$l$$ must be a non-negative integer, and $$m$$ must be an integer lying between $$-l$$ and $$l$$. Note, further, that the allowed energy levels ([e9.39]) only depend on the values of the quantum numbers $$n$$ and $$l$$. Finally, it is easily demonstrated that the spherical Bessel functions are mutually orthogonal: that is, $\int_0^a j_l(z_{n,l}\,r/a)\,j_{l}(z_{n',l}\,r/a) \,r^{\,2}\,dr = 0$ when $$n\neq n'$$. Given that the $$Y_{l,m}(\theta,\phi)$$ are mutually orthogonal (see Chapter [sorb]), this ensures that wavefunctions ([e9.27]) corresponding to distinct sets of values of the quantum numbers $$n$$, $$l$$, and $$m$$ are mutually orthogonal.

Contributors

• Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
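The tabulated zeros, and hence the energy levels $$E_{n,l}$$, can be reproduced numerically from the closed forms of $$j_0$$ and $$j_1$$ given earlier. A minimal sketch using simple bisection (function and variable names are illustrative, not from the text):

```python
import math

def j0(z):
    # j_0(z) = sin z / z
    return math.sin(z) / z

def j1(z):
    # j_1(z) = sin z / z^2 - cos z / z
    return math.sin(z) / z**2 - math.cos(z) / z

def bisect_zero(f, a, b, tol=1e-12):
    """Bisection root finder; assumes f(a) and f(b) bracket a sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First zeros: z_{1,0} ~ 3.142 (= pi) and z_{1,1} ~ 4.493, matching the table.
z_10 = bisect_zero(j0, 3.0, 3.5)
z_11 = bisect_zero(j1, 4.0, 5.0)

# Allowed energies E_{n,l} = z_{n,l}^2 hbar^2 / (2 m a^2); here hbar = m = a = 1.
E_10 = z_10**2 / 2
```

Since $$j_0(z)=\sin z/z$$, its zeros are exactly $$n\pi$$, which is why the $$l=0$$ row of the table is just multiples of 3.142.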
https://math.stackexchange.com/questions/2314865/lim-limits-x-to-a-fx-l-and-lim-limits-x-to-l-gx-m-implies-lim
# $\lim\limits_{x \to a} f(x)=L$ and $\lim\limits_{x \to L} g(x)=M$ implies $\lim\limits_{x \to a} g(f(x))=M$? Let $f,g:\mathbb{R}\to\mathbb{R}$ and $a,L \in \mathbb{R}$ If $\lim\limits_{x \to a} f(x)=L$ and $\lim\limits_{x \to L} g(x)=M$ Then $\lim\limits_{x \to a} g(f(x))=M$? My proof: Since $\lim\limits_{x \to a} f(x)=L$, Let $\epsilon>0$,there exists a $\delta_1>0$ s.t $|x-a|<\delta_1\Rightarrow |f(x)-L|<\epsilon$ And $\lim\limits_{x \to L} g(x)=M$, Let $\epsilon>0$,there exists a $\delta_2>0$ s.t $|x-L|<\delta_2\Rightarrow |g(x)-L|<\epsilon$ Let $\delta:=min\{\delta_1,\delta_2\}$ For all $\epsilon>0$ there exists a $\delta>0$ such that $|x-a|<\delta\Rightarrow|g(f(x))-M|<\epsilon$ Is my proof true or not? • You only know that $\lvert f(x) - L\rvert < \epsilon$, but to conclude $\lvert g(f(x)) - M\rvert < \epsilon$, you need $\lvert f(x) - L\rvert < \delta_2$. – Daniel Fischer Jun 8 '17 at 15:59 • Also note that some people use a definition of $\lim_{x\to a} f(x)$ that restricts $x$ to be different from $a$ (the implication is then $0 < \lvert x-a\rvert < \delta \implies \lvert f(x) - L\rvert < \epsilon$). With that definition of limit, it would not follow that $\lim_{x\to a} g(f(x)) = M$. – Daniel Fischer Jun 8 '17 at 16:02 • @DanielFischer, could you please give an example when it would fail? – md2perpe Jun 8 '17 at 21:12 • @md2perpe $f(x) = 0$ for all $x$, and $g(x) = 0$ for $x\neq 0$, $g(0) = 1$. Then $\lim_{x\to 0} f(x) = 0$ (for both definitions of the limit), and $\lim_{x\to 0} g(x) = 0$ for the definition of limit that excludes $0$ (the limit doesn't exist in the definition that includes $0$ as an allowed argument), and $\lim_{x\to 0} g(f(x)) = 1$. Replace $f$ with $h(x) = x\sin (1/x)$ [and $h(0) = 0$ of course] and $\lim_{x\to 0} g(h(x))$ doesn't exist. – Daniel Fischer Jun 8 '17 at 21:20 • Thanks. 
If my analysis is correct, the limit of the composition would work if we also demanded that $0<|f(x)-a|<\epsilon$, but that would unfortunately make $f(x) = L$ for all $x$ not converge to $L.$ The question of what definition to use (including or excluding $0$ for $|x-a|$) is delicate; both obviously have their pros and cons. – md2perpe Jun 9 '17 at 5:06

It is incorrect. You should first choose a $\delta_1 >0$ s.t. $\lvert y - L \rvert < \delta_1 \Rightarrow \lvert g(y) - M \rvert < \epsilon$ for arbitrary $\epsilon > 0$. Then choose $\delta_2 > 0$ such that $\lvert x-a \rvert < \delta_2 \Rightarrow \lvert f(x) - L \rvert < \delta_1.$

It should be $|g(x) - M|$, not $|g(x) - L|$. You have to do this: since $\lim_{y \to L}g(y)=M$, let $\epsilon>0$; there exists a $\rho>0$ s.t. $|y-L|<\rho \Rightarrow |g(y)-M|<\epsilon.$ Then, since $\lim_{x \to a}f(x)=L$, you can take $\rho$ in the following statement: for $\rho>0,$ there exists a $\delta>0$ s.t. $|x-a|<\delta \Rightarrow |f(x)-L|< \rho$. Since $|f(x) - L| < \rho$, then $|g(f(x)) - M| < \epsilon$. Finally $|x-a|<\delta \Rightarrow |g(f(x)) - M| < \epsilon$. By definition: $\lim_{x\to a} g(f(x))=M$. The problem in your proof is that you never ensure $|f(x)-L| < \delta_2$.
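Daniel Fischer's counterexample from the comments can be checked numerically: with the punctured-limit definition, $\lim_{y\to 0} g(y) = 0$, yet the composition $g(f(x))$ is identically $1$ near $0$. A small Python sketch (added for illustration, not part of the original thread):

```python
def f(x):
    # f is identically zero, so lim_{x -> 0} f(x) = 0 under either definition.
    return 0.0

def g(y):
    # g(y) = 0 for y != 0 but g(0) = 1, so the punctured limit of g at 0 is 0.
    return 1.0 if y == 0 else 0.0

# Sample the composition at points approaching 0 (excluding 0 itself).
samples = [g(f(10.0**-k)) for k in range(1, 10)]
# Every sample equals 1.0, so lim_{x -> 0} g(f(x)) = 1, not 0.
```

The composition converges to 1 because f collapses every input onto the single point the punctured limit of g is allowed to ignore.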
https://www.researchgate.net/publication/244329024_Bohmian_mechanics_without_pilot_waves
Article

# Bohmian mechanics without pilot waves

## Abstract

In David Bohm’s causal/trajectory interpretation of quantum mechanics, a physical system is regarded as consisting of both a particle and a wavefunction, where the latter “pilots” the trajectory evolution of the former. In this paper, we show that it is possible to discard the pilot wave concept altogether, thus developing a complete mathematical formulation of time-dependent quantum mechanics directly in terms of real-valued trajectories alone. Moreover, by introducing a kinematic definition of the quantum potential, a generalized action extremization principle can be derived. The latter places very severe a priori restrictions on the set of allowable theoretical structures for a dynamical theory, though this set is shown to include both classical mechanics and quantum mechanics as members. Beneficial numerical ramifications of the above, “trajectories only” approach are also discussed, in the context of simple benchmark applications.

... Curiously, the particular choices made here can be interpreted as determining the precise form of the "interworld potential" according to the "discrete" many-interacting-worlds (MIW) interpretation of quantum mechanics that has sprung up from the trajectory-based reformulation. 39 In the original, "continuous" MIW interpretation, however, 36,38 these are merely choices for the numerical discretization. ... ... Accordingly, in this work, we instead invoke the exact factorization formalism in combination with the interacting quantum trajectories approach presented above. 36,38 The exact factorization yields nuclear dynamics under the effect of a single time-dependent classical force accounting for the excited electronic states, thereby avoiding the aforementioned technical difficulty. ... ... 
One of the simplest choices is to take C to be the initial value of a given trajectory at time t = 0, that is, C = x_0 = x(t = 0). 36 Through probability conservation [i.e., eq 15], one then obtains the following relation for the density at any time t ... Article We present a quantum dynamics method based on the propagation of interacting quantum trajectories to describe both adiabatic and nonadiabatic processes within the same formalism. The idea originates from the work of Poirier [Chem. Phys. 2010, 370, 4-14] and Schiff and Poirier [J. Chem. Phys. 2012, 136, 031102] on quantum dynamics without wavefunctions. It consists of determining the quantum force arising in the Bohmian hydrodynamic formulation of quantum dynamics using only information about quantum trajectories. The particular time-dependent propagation scheme proposed here results in very stable dynamics. Its performance is discussed by applying the method to analytical potentials in the adiabatic regime, and by combining it with the exact factorization method in the nonadiabatic regime. ... On the other hand, what if there were no need of a wavefunction at all in a quantum theory? In recent years, attempts have been made to formulate a complete standalone theory of quantum mechanics without wavefunctions [12,13,14,15,16,17,18,19,20,21,22,23]. In particular, in 2010, one of the authors (B. 
Poirier) proposed a theoretical framework in which a quantum state is represented solely by an ensemble of real-valued probabilistic trajectories [14,17]. With the notable exception of spin, this nonrelativistic version of the trajectory-based theory turns out to be formally mathematically equivalent to the standard wave-based Schrödinger equation [12,13,14,17], though it can be derived completely independently [14,17]. More recently, a discrete version of the trajectory-based theory has also been proposed [20,21,22,23], which is not consistent with the Schrödinger equation except in the continuous limit. ... Article Full-text available We present novel aspects of a trajectory-based theory of massive spin-zero relativistic quantum particles. In this approach, the quantum trajectory ensemble is the fundamental entity. It satisfies its own action principle, leading to a dynamical partial differential equation (via the Euler-Lagrange procedure), as well as to conservation laws (via Noether’s theorem). In this paper, we focus on the derivation of the latter. In addition to the usual expected energy and momentum conservation laws, there is also a third law that emerges, associated with the conditions needed to maintain global simultaneity. We also show that the nonrelativistic limits of these conservation laws match those of the earlier, nonrelativistic quantum trajectory theory [J. Chem. Phys. 136, 031102 (2012)]. ... As a promising alternative to the QTM pioneered by Wyatt, a fully wave-function-free formulation of quantum mechanics has been developed [38]-[41], and first applications to atomic scattering have only very recently been published [42]. In this approach, the time-dependent quantum mechanical problem is recast into a dynamical problem of a parameterized density. ... ... 
It is worth noticing that previous work regarding the specific parametrization of the density used in this thesis involves its application to the description of model systems only [38]-[41]. The method implemented in this thesis gives accurate results, captures well-known quantum effects and is a promising alternative to already existing, standard wave-packet methods. ... ... Together with this theoretical background, we introduced the QTM as the numerical implementation of the equation of motion of the quantum trajectories. As a promising alternative to the QTM pioneered by Wyatt, a fully wave-function-free formulation of quantum mechanics has been proposed [38]-[41]. ... Thesis In this thesis, different trajectory-based methods for the study of quantum mechanical phenomena are developed. The first approach is based on a global expansion of the hydrodynamic fields in Chebyshev polynomials. The scheme is used for the study of one-dimensional vibrational dynamics of bound wave packets in harmonic and anharmonic potentials. Furthermore, a different methodology is developed, which, starting from a parametrization previously proposed for the density, allows the construction of effective interaction potentials between the pseudo-particles representing the density. Within this approach, several model problems are studied, and important quantum mechanical effects such as zero-point energy, tunneling, barrier scattering and over-barrier reflection are found to be correctly described by the ensemble of interacting trajectories. The same approximation is used to study laser-driven atom ionization. A third approach considered in this work consists of the derivation of an approximate many-body quantum potential for cryogenic Ar and Kr matrices with an embedded Na impurity. To this end, a suitable ansatz for the ground state wave function of the solid is proposed. 
This allows one to construct an approximate quantum potential which is employed in molecular dynamics simulations to obtain the absorption spectra of the Na impurity isolated in the rare gas matrix. ... In this paper, we present a quantum trajectory capture (QTC) technique that stems directly from a recent theoretical development in the exact formulation of quantum mechanics by one of the authors. 22,23 In this theory, a trajectory ensemble, rather than the usual wavefunction, is regarded as the fundamental quantum state entity. When applied in a time-independent 1D quantum reactive scattering context, the ensemble reduces to a single quantum trajectory, which necessarily always transmits from reactants to products, no matter how large the intervening barrier is. ... ... The quantum trajectory method outlined above has previously been applied in the context of abstraction reactions. [22][23][24] Here, as a proof-of-concept benchmark, we compute adiabatic-channel QTC probabilities and cross-sections for the Li + CaH(v = 0, j = 0) → LiH + Ca reaction, which we then compare with QM finite-difference 25,26 capture calculations performed previously by Tscherbul and Buchachenko. 27 ... ... Since QTC is essentially a scattering process, we start with a short discussion of a standard 1D quantum reactive scattering problem, from the perspective of the trajectory-ensemble-based quantum theory. [22][23][24] Generalized boundary conditions as required for the capture process are introduced afterwards. ... Article Full-text available The Langevin capture model is often used to describe barrierless reactive collisions. At very low temperatures, quantum effects may alter this simple capture image and dramatically affect the reaction probability. 
In this paper, we use the trajectory-ensemble reformulation of quantum mechanics, as recently proposed by one of the authors (Poirier), to compute adiabatic-channel capture probabilities and cross-sections for the highly exothermic reaction Li + CaH(v = 0, j = 0) → LiH + Ca, at low and ultra-low temperatures. Each captured quantum trajectory takes full account of tunneling and quantum reflection along the radial collision coordinate. Our approach is found to be very fast and accurate, down to extremely low temperatures. Moreover, it provides an intuitive and practical procedure for determining the capture distance (i.e., where the capture probability is evaluated), which would otherwise be arbitrary. ... It resembles Bohmian mechanics in that quantum trajectories are indeed employed. However, unlike Bohmian mechanics, only trajectories are used, the wave being replaced with a trajectory ensemble, which thereby represents the quantum state [12][13][14][15][16][17][18][19][20]. The trajectory ensemble is continuous, with each individual member trajectory labeled by the parameter C. The time evolution of the trajectory ensemble, x(t, C), is governed by some partial differential equation (PDE) in (t, C) that replaces the usual time-dependent Schrödinger equation governing the Ψ(t, x) evolution. ... ... Note that Eq. (13) implies a specific parametrization for C, i.e., for any given trajectory, C is the initial x value, x_0 [14,21]. Likewise, our earlier specification of the natural time coordinate, λ = T, implies the following as the one remaining initial condition: ... ... From a numerical standpoint, the solution of Eq. (17) offers important advantages over both conventional, x-grid-based Crank-Nicolson propagation of Ψ, and traditional quantum trajectory methods [28]. Briefly, one has the simultaneous advantages of both a regular grid (in C) and probability-conserving trajectories (in x) [14]. 
There are nevertheless some nontrivial numerical issues, stemming from the fact that the true boundary conditions are unknown, unlike for Ψ propagation, for which Dirichlet boundary conditions are in effect. ... Article Full-text available In the context of nonrelativistic quantum mechanics, Gaussian wavepacket solutions of the time-dependent Schrödinger equation provide useful physical insight. This is not the case for relativistic quantum mechanics, however, for which both the Klein-Gordon and Dirac wave equations result in strange and counterintuitive wavepacket behaviors, even for free-particle Gaussians. These behaviors include zitterbewegung and other interference effects. As a potential remedy, this paper explores a new trajectory-based formulation of quantum mechanics, in which the wavefunction plays no role [Phys. Rev. X, 4, 040002 (2014)]. Quantum states are represented as ensembles of trajectories, whose mutual interaction is the source of all quantum effects observed in nature—suggesting a "many interacting worlds" interpretation. It is shown that the relativistic generalization of the trajectory-based formulation results in well-behaved free-particle Gaussian wavepacket solutions. In particular, probability density is positive and well-localized everywhere, and its spatial integral is conserved over time—in any inertial frame. Finally, the ensemble-averaged wavepacket motion is along a straight line path through spacetime. In this manner, the pathologies of the wave-based relativistic quantum theory, as applied to wavepacket propagation, are avoided. ... The present theory departs from the well-known Bohmian interpretation of quantum mechanics due to deBroglie and Bohm [1][2][3][4][5], as well as the class of so-called trajectory-based models to be found in the literature. 
The latter are usually realized in the context of non-relativistic quantum mechanics [6][7][8][9][10][11][12][13], their primary relevance from the practical viewpoint being essentially restricted to the development of Lagrangian numerical solution methods [14][15][16][17][18][19][20], although significant examples actually occur for relativistic quantum systems [21] too. The GLP-approach is intended to apply in principle both to 1- and N-body non-relativistic quantum systems S_N, formed by an arbitrary number N of like quantum point-particles of mass m which satisfy the N-body SE. ... ... Actually, as will be discussed in detail in the paper, these features place the new theory apart and distinguish it from the class of Bohmian-like theories usually considered in the literature [7][8][9][10][11][12][22][23][24]. The GLP-approach, in fact, is based on a new type of Lagrangian stochastic parametrization for the non-relativistic quantum wave function ψ ≡ ψ(r, t) obtained by means of the adoption of so-called generalized Lagrangian paths (GLPs) (see Ref. [25]). ... ... In the following (see in particular Sects. 7, 8, 9) we intend to show that this is indeed the case, providing for this purpose a number of examples holding for quantum N-body systems which include: (a) the free-particle case; (b) the elastic attractive/repulsive force; (c) the Coulomb-like and the power-law central force; (d) the Van der Waals force. ... Article Full-text available In this paper a new trajectory-based representation of non-relativistic quantum mechanics is formulated. This is achieved by generalizing the notion of Lagrangian path (LP), which lies at the heart of the deBroglie-Bohm "pilot-wave" interpretation. In particular, it is shown that each LP can be replaced with a statistical ensemble formed by an infinite family of stochastic curves, referred to as generalized Lagrangian paths (GLP). 
This permits the introduction of a new parametric representation of the Schrödinger equation, denoted as GLP-parametrization, and of the associated quantum hydrodynamic equations. The remarkable aspect of the GLP approach presented here is that it simultaneously realizes a new solution method for the N-body Schrödinger equation. As an application, Gaussian-like particular solutions for the quantum probability density function (PDF) are considered, which are proved to be dynamically consistent. For them, the Schrödinger equation is reduced to a single Hamilton–Jacobi evolution equation. Particular solutions of this type are explicitly constructed, which include the case of free particles occurring in 1- or N-body quantum systems as well as the dynamics in the presence of suitable potential forces. In all these cases the initial Gaussian PDFs are shown to be free of the spreading behavior usually ascribed to quantum wave-packets, in that they exhibit the characteristic feature of remaining at all times spatially localized. ... The first of two recent but entirely independent developments in the foundations of quantum theory is a number of 'trajectory-only' formulations of quantum theory [1,2,3,4,5,6,7], though also see [8], intended to recover quantum mechanics without reference to a physical wavefunction. Two related approaches, which however lead to an experimentally distinguishable theory, are the real-ensemble formulations of [9,10]. ... ... The approach of Schiff and Poirier [5,6] is different in that their formulation does not rely on the introduction of a quantum potential in the usual form. Instead, they consider higher-order time derivatives of the variables to appear in the expressions for the Lagrangian and energy of their theory. ... ... Since this is to hold for any interval [t_0, t_1], we have in general that p_i = ∂_i S. This equation can be seen as determining S up to an arbitrary additive function of time. 
5 It only restricts the momenta p_i in that these must be the gradient of some function. Of course, S turns out to have all the properties of a generator of those canonical transformations that represent the time evolution of the system. ... Article We illustrate how non-relativistic quantum mechanics may be recovered from a dynamical Weyl geometry on configuration space and an 'ensemble' of trajectories (or 'worlds'). The theory, which is free of a physical wavefunction, is presented starting from a classical 'many-systems' action to which a curvature term is added. In this manner the equations of equilibrium de Broglie-Bohm theory are recovered. However, naïvely the set of solutions precludes those with non-zero angular momentum (a version of a problem raised by Wallstrom). This is remedied by a slight extension of the action, leaving the equations of motion unchanged. ... The first question has been addressed by Holland, Poirier and Hall et al. in recent work [6,5,7]. Interestingly, these authors show that Bohm's quantum trajectories can be obtained without a guiding wave function from a reformulated theory that prescribes the equations of motion for the particles, along with a probability distribution for the resulting particle trajectories. ... ... Additional differences, for example how amenable each formulation is to numerical evaluation, will not be pursued in any detail here. The first formulation, supported by Bohm [1], Bell [3] and others [9], firmly assumes a pilot-wave, which is a solution of the Schrödinger equation, as part of reality; the two other formulations make no explicit reference to a wave function or Schrödinger equation [6,5,7]. ... ... To keep notations simple and following refs. [5,7], this section will consider quantum mechanics of a single particle without spin in one dimension. It is mostly straightforward to extend the discussion to non-relativistic quantum mechanics for multiple spin-zero particles in three space dimensions. ... 
Article After summarizing three versions of trajectory-based quantum mechanics, it is argued that only the original formulation due to Bohm, which uses the Schrödinger wave function to guide the particles, can be readily extended to particles with spin. To extend the two wave-function-free formulations, it is argued that necessarily particle trajectories not only determine location, but also spin. Since spin values are discrete, it is natural to revert to a variation of Bohm's pilot wave formulation due originally to Bell. It is shown that within this formulation with stochastic quantum trajectories, a wave-function-free formulation can be obtained. ... The theory is based on ideas initially published as a preprint draft [5], which has been completely re-worked and enhanced, in particular by adding a logical framework to properly deal with propositions about physical systems in a multiplicity of worlds, and by providing the conceptual prerequisites for treating the collection of worlds as a continuous substance. After having finished and submitted an earlier version of this manuscript, I noticed that essentially the same theory, though with a stronger focus on formal aspects and less focus on ontological and epistemological matters, has independently been put forward by Poirier and Schiff [35,39]. Although already having been aware of, and having cited, these publications, I did not fully recognize how close their theory was to mine. ... ... In particular, the relation between objective reality and subjective experience in the presence of a multiplicity of worlds is addressed. My proposal is compared to Bohmian mechanics, to Tipler's formulation of quantum mechanics [43], to the MIW approach of Hall et al. [21], to Sebens' Newtonian QM [40], and to Poirier and Schiff's approach [35,39]. I will also respond to criticisms raised by [40,48] against the idea of a continuum of worlds. ... ... 
However, Madelung could not provide a consistent physical interpretation of this mathematical fact, so the hydrodynamical interpretation of quantum mechanics was abandoned. Recently, the hydrodynamic interpretation experienced a renaissance, and it was shown that the wavefunction can be completely removed from the theory, leaving only trajectories as the physically existing objects from which all observable values can be calculated [23,35,39]. However, these approaches leave it open as to how the fluid is interpreted physically. ... Article A non-relativistic quantum mechanical theory is proposed that describes the universe as a continuum of worlds whose mutual interference gives rise to quantum phenomena. A logical framework is introduced to properly deal with propositions about objects in a multiplicity of worlds. In this logical framework, the continuum of worlds is treated similarly to the continuum of time points; both "time" and "world" are considered as mutually independent modes of existence. The theory combines elements of Bohmian mechanics and of Everett's many-worlds interpretation; it has a clear ontology and a set of precisely defined postulates from which the predictions of standard quantum mechanics can be derived. Probability as given by the Born rule emerges as a consequence of insufficient knowledge of observers about which world it is that they live in. The theory describes a continuum of worlds rather than a single world or a discrete set of worlds, so it is similar in spirit to many-worlds interpretations based on Everett's approach, without being actually reducible to these. In particular, there is no splitting of worlds, which is a typical feature of Everett-type theories. 
Altogether, the theory explains 1) the subjective occurrence of probabilities, 2) their quantitative value as given by the Born rule, 3) the identification of observables as self-adjoint operators on Hilbert space, and 4) the apparently random "collapse of the wavefunction" caused by the measurement, while still being an objectively deterministic theory. ... More recently, it has been observed by Holland [8] and by Poirier and coworkers [9][10][11] that the evolution of such quantum systems can be formulated without reference even to a momentum potential S. Instead, nonlinear Euler-Lagrange equations are used to define trajectories of a continuum of fluid elements, in an essentially hydrodynamical picture. The trajectories are labelled by a continuous parameter, such as the initial position of each element, and the equations involve partial derivatives of up to fourth order with respect to this parameter. ... ... However, it may be recovered, in a nontrivial manner, by integrating the trajectories up to any given time [8]. This has proved a useful tool for making efficient and accurate numerical calculations in quantum chemistry [9,10]. Schiff and Poirier [11], while "drawing no definite conclusions", interpret their formulation as a "kind of 'many worlds' theory", albeit they have a continuum of trajectories (i.e. ... ... Here the left-hand side is to be understood as an approximation of the right-hand side, obtained via a suitable smoothing of the empirical density P(q) in Eq. (8), analogous to the approximation of the quantum force r_N(q) by r_N(x; X) in Eq. (9). It is important to note that a good approximation of the force (which is essential to obtain QM in the large N limit) is not guaranteed by a good approximation in Eq. (19). ... 
Article Full-text available We investigate whether quantum theory can be understood as the continuum limit of a mechanical theory, in which there is a huge, but finite, number of classical 'worlds', and quantum effects arise solely from a universal interaction between these worlds, without reference to any wave function. Here a 'world' means an entire universe with well-defined properties, determined by the classical configuration of its particles and fields. In our approach each world evolves deterministically; probabilities arise due to ignorance as to which world a given observer occupies; and we argue that in the limit of infinitely many worlds the wave function can be recovered (as a secondary object) from the motion of these worlds. We introduce a simple model of such a 'many interacting worlds' approach and show that it can reproduce some generic quantum phenomena, such as Ehrenfest's theorem, wavepacket spreading, barrier tunneling and zero-point energy, as a direct consequence of mutual repulsion between worlds. Finally, we perform numerical simulations using our approach. We demonstrate, first, that it can be used to calculate quantum ground states, and second, that it is capable of reproducing, at least qualitatively, the double-slit interference phenomenon. ... Nevertheless, just such a theory was recently formulated for non-relativistic quantum mechanics. [20][21][22][23][24][25][26] For a number of reasons, it makes sense to try to extend the previous work to the relativistic case. As presented in this document, this goal is now also achieved, at least in the context of a single, spin-zero, massive, relativistic quantum particle, propagating on a flat Minkowski spacetime, with no external fields. ... ... The crucial development is the recent wavefunction-free reformulation of nonrelativistic quantum mechanics, alluded to above. [20][21][22][23][24][25][26] This approach is trajectory-based, and in that sense reminiscent of Bohmian mechanics. 
Unlike the Bohm theory, however, here, the traditional wavefunction, Ψ(t, x), is entirely done away with, in favor of the trajectory ensemble, x(t, C) (where C labels individual trajectories) as the fundamental representation of a quantum state. ... ... This is perhaps most physically meaningful if one adopts a "many worlds"-type ontological interpretation of the multiple particle paths/trajectories, according to which each trajectory worldline literally represents a different world, as has been discussed in previous work. 22,24,25 The one particle is thus comprised of many "copies," distributed across all space. Locally, the structure of the orthogonal subspace described above ensures that each particle copy agrees with its nearest neighbors as to which events occur simultaneously. ... Article In a recent paper [Bill Poirier, arXiv:1208.6260 [quant-ph]], a trajectory-based formalism has been constructed to study the relativistic dynamics of a single spin-zero quantum particle. Being a generally covariant theory, this formalism introduces a new notion of global simultaneity for accelerated quantum particles. In this talk, we present several examples based on this formalism, including the time evolution of a relativistic Gaussian wavepacket. Energy-momentum conservation relations may also be discussed. ... The QTM of this paper, though also approximate, treats barrier tunneling and reflection interference much more accurately than these other approaches (though it might be improved by applying the latter in the perpendicular directions, which are treated classically here). It stems from a broader, and quite recent, theoretical development 27-31 that regards the trajectory ensemble itself as the fundamental quantum state entity, rather than the wavefunction. The exact quantum propagation is therefore described as a partial differential equation (PDE) directly on the trajectory ensemble itself, i.e. with no reference whatsoever to Ψ. 
The present QTM is then derived by replacing the exact PDE with an approximate ODE, thus severing all intertrajectory communication, circumventing the node problem, and resulting in a much more classical-like time evolution (with all commensurate computational advantages, e.g. ... ... Alternatively, it can be shown 29-31 that the physical quantum state represented by the wavefunction Ψ(x) can also be represented exactly using a single trajectory x(t). This quantum trajectory, moreover, evolves in accordance with the following fourth-order ordinary differential equation (ODE): ... ... Note that Eq. (2) would be the usual second-order classical ODE of Newton, but for the last term, which represents the "quantum force." [11, 29-31] In general, any fourth-order ODE such as Eq. (2) admits a four-parameter family of solution trajectories, x(t), that can be specified using four appropriate initial conditions, e.g.: x_0 = x(t = 0); ẋ_0 = ẋ(t = 0); ẍ_0 = ẍ(t = 0); x⃛_0 = x⃛(t = 0). ... Article A trajectory ensemble method is introduced that enables accurate computation of microcanonical quantum reactive scattering quantities, using a classical-like simulation scheme. Individual quantum trajectories are propagated independently, using a Newton-like ODE which treats quantum dynamical effects along the reaction coordinate exactly, and preserves the phase space volume element. The sampling of initial conditions resembles a classical microcanonical simulation, but modified so as to incorporate quantization in the perpendicular mode coordinates. The method is exact for one-dimensional or separable systems, and achieves ∼1% accuracy for the coupled multidimensional benchmark applications considered here, even in the deep tunneling regime. ... In classical mechanics, physical trajectories are obtained as the subset of dynamical paths that extremize the classical action. 
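As an aside, the statement above that any fourth-order ODE admits a four-parameter family of solutions, fixed by the four initial conditions x_0, ẋ_0, ẍ_0, x⃛_0, can be illustrated with a generic integrator. This is a hedged sketch (plain RK4 on the reduced first-order system, with a placeholder right-hand side rather than the actual quantum force of Eq. (2)):

```python
import numpy as np

def integrate_fourth_order(f, y0, t_grid):
    """Integrate x'''' = f(t, x, x', x'', x''') with classical RK4.

    The fourth-order ODE is reduced to the first-order system
    y = (x, x', x'', x'''); the four entries of y0 are the four
    initial conditions (x0, xdot0, xddot0, xdddot0) mentioned in the text.
    """
    def rhs(t, y):
        x, v, a, j = y
        return np.array([v, a, j, f(t, x, v, a, j)])

    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = rhs(t0, y)
        k2 = rhs(t0 + h / 2, y + h / 2 * k1)
        k3 = rhs(t0 + h / 2, y + h / 2 * k2)
        k4 = rhs(t1, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# Toy check: x'''' = 0 must give a cubic polynomial in t,
# fully determined by the four initial conditions.
t = np.linspace(0.0, 2.0, 21)
traj = integrate_fourth_order(lambda t, x, v, a, j: 0.0,
                              [1.0, 2.0, 3.0, 4.0], t)
```

For the trivial right-hand side the four initial conditions generate exactly the cubic x(t) = x_0 + ẋ_0 t + ẍ_0 t²/2 + x⃛_0 t³/6, which is the four-parameter family in its simplest form.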
Until recently [2][3][4][5][6][7][8], it appears that no trajectory-based action extremization principle was known or even suspected in quantum mechanics, although the identification of the action S (in units of ℏ) with the phase of the wavefunction goes back to the earliest days of the quantum theory [9][10][11], and still serves as the basis of modern semiclassical trajectory-based approximation methods [12][13][14][15][16][17][18]. ... ... The purpose of this paper is to evaluate whether or not BOMCA satisfies an action principle. This is motivated by the recent, startling discovery that the real-valued quantum trajectories of Bohmian mechanics do indeed satisfy a bona fide principle of least action [2][3][4][5][6][7][8], very similar to that of classical Lagrangian mechanics. The real-valued quantum action-extremizing trajectories turn out to be equivalent to the quantum trajectories of Bohmian mechanics. ... ... in addition to Equations (4) and (5). Whether working with the PDE or the ODE hierarchy, we take the primary defining feature of standard BOMCA to be that the complex trajectory velocity field is defined via Equations (1) and (4). ... Article Full-text available In a recent paper [B. Poirier, Chem. Phys. 370, 4 (2010)], a formulation of quantum mechanics was presented for which the usual wavefunction and Schrödinger equation are replaced with an ensemble of real-valued trajectories satisfying a principle of least action. It was found that the resultant quantum trajectories are those of Bohmian mechanics. In this paper, analogous ideas are applied to Bohmian Mechanics with Complex Action (BOMCA). The standard BOMCA trajectories as previously defined are found not to satisfy an action principle. However, an alternate set of complex equations of motion is derived that does exhibit this desirable property, and an approximate numerical implementation is presented. 
Exact analytical results are also presented, for Gaussian wavepacket propagation under quadratic potentials. ... Article Full-text available Recently, a self-contained trajectory-based formulation of non-relativistic quantum mechanics was developed [Ann. Phys. 315, 505 (2005); Chem. Phys. 370, 4 (2010); J. Chem. Phys. 136, 031102 (2012)], that makes no use of wavefunctions or complex amplitudes of any kind. 
Quantum states are represented as ensembles of real-valued quantum trajectories that extremize a suitable action. Here, the trajectory-based approach is developed into a viable, generally covariant, relativistic quantum theory for single (spin-zero, massive) particles. Central to this development is the introduction of a new notion of global simultaneity for accelerated particles, together with basic postulates concerning probability conservation and causality. The latter postulate is found to be violated by the Klein-Gordon equation, leading to its well-known problems as a single-particle theory. Various examples are considered, including the time evolution of a relativistic Gaussian wavepacket. ... This is the idea behind the Bohmian formulation of quantum mechanics [166,167], where a wave guides test particles. One difficulty of this "particle position" approach is that the equation above can only be solved if Ψ(q, t) is known, i.e., if the problem is already solved [168,169]. Another approach to Bohmian mechanics is to solve both the set of Eqs. 6.11a and 6.11b on the hydrodynamical fields ρ(q, t) and the action S(q, t) along with the equations of motion of the trajectories. ... ... From this point of view, Bohmian quantum trajectories seem to be confined to illustrative purposes. Efforts were made, however, in recent years [162,168,169,201-204] to dispense with the knowledge of the density, and solve the quantum problem using trajectories only. In this framework, it is no longer necessary to refer explicitly to any auxiliary equation for the density on top of the equations of motion of the trajectories. ... Thesis The dynamics of a quantum system of interacting particles rapidly becomes impossible to describe exactly when the number of particles increases. This is one of the main difficulties in the description of atomic nuclei, which may contain several hundred nucleons. 
A simplified approach to the problem is to assume that some degrees of freedom contain more information than others. A classical approximation is to focus on one-body degrees of freedom: the dynamics of the system can be approximately described by a set of particles propagating in an effective mean field. While the mean-field approximation has allowed many advances in the theoretical understanding of the properties of nuclei, it is still unable to describe some of their properties, for example the effects of direct collisions between nucleons or the quantum fluctuations of one-body observables. The objective of the thesis is to account for these correlations beyond the mean-field approximation in order to improve the dynamical description of quantum correlated systems. One component of the thesis has been to study methods to treat collisions between particles by including the Born term beyond the mean field. This term is particularly complex because of non-local effects in time, the so-called non-Markovian effects. Possible simplifications of this term have been studied for future applications. Two simplifying approaches have been proposed: one treating the collision term with master equations, the other eliminating time integrals while keeping the non-locality in time. The second part of the thesis was devoted to the improvement of the mean-field approximation in order to describe the quantum fluctuations. Based on existing phase-space methods, a new method, called the "Hybrid Phase Space" (HPS) method, has been proposed. This method combines mean-field theory with initial fluctuations and a theory in which the two-body degrees of freedom are propagated explicitly. This new approach has been successfully tested for the description of an ensemble of fermions on a lattice, i.e.
the Fermi-Hubbard model, and has given much better results than the phase-space approaches previously used to describe correlated systems, in particular in the weak-coupling case. While this new approximation gives interesting results, it remains numerically rather heavy and empirical. This led to a detailed study of the Wigner-Weyl and Bohm formalisms in order to explore phase-space methods in a more systematic way. The notion of trajectory in quantum mechanics has been systematically investigated. The conclusion of this study, illustrated with the tunneling effect, is that the trajectories must interfere with each other over the course of time in order to reproduce quantum effects. ... We suggested that it might be possible to reproduce quantum phenomena without a universal wavefunction Ψ(q) (except to define initial conditions). In its place we postulated an enormous, but countable, ensemble X = {x_j : j} of points x_j in configuration space (similar ideas have been proposed earlier by a number of authors [32][33][34][35], but they considered a continuum of worlds, which, in our view, leads to some of the same conceptual issues that Everett's interpretation faces; see also [36]). Each point is a world-particle, just as Bohmian mechanics postulates, and the dynamics is intended to reproduce a deterministic Bohmian trajectory for each world-particle. ... ... Dealing with nodes is a problem in many quantum simulation methods based on Bohmian mechanics [41]. Nodes should not be a problem for interpretations involving a continuum of worlds [32][33][34][35], as they are formulated to be exactly equivalent to quantum mechanics. However, as remarked in Section 3, our view is that these interpretations do not solve the conceptual problems of the Everettian many-worlds interpretation. ... Article Full-text available “Locality” is a fraught word, even within the restricted context of Bell’s theorem.
As one of us has argued elsewhere, that is partly because Bell himself used the word with different meanings at different stages in his career. The original, weaker, meaning of locality was in his 1964 theorem: that the choice of setting by one party could never affect the outcome of a measurement performed by a distant second party. The epitome of a quantum theory violating this weak notion of locality (and hence exhibiting a strong form of nonlocality) is Bohmian mechanics. Recently, a new approach to quantum mechanics, inspired by Bohmian mechanics, has been proposed: Many Interacting Worlds. While it is conceptually clear how the interaction between worlds can enable this strong nonlocality, technical problems in the theory have thus far prevented a proof by simulation. Here we report significant progress in tackling one of the most basic difficulties that needs to be overcome: correctly modelling wavefunctions with nodes. ... The present proposal is based on two recent but entirely independent developments in the foundations of quantum theory. The first is a number of 'trajectory-only' formulations of quantum theory [120,56,130,20,98,117,118], though also see ref. [60], intended to recover quantum mechanics without reference to a physical wavefunction. Two related approaches which however lead to an experimentally distinguishable theory are the real-ensemble formulations of refs. ... ... The approach of Schiff and Poirier [98,117] is different in that their formulation does not rely on the introduction of a quantum potential in the usual form. Instead, they consider higher-order time derivatives of the variables to appear in the expressions for the Lagrangian and energy of their theory. ... Article In this thesis we investigate a solution to the 'problem of time' in canonical quantum gravity by splitting spacetime into surfaces of constant mean curvature parameterised by York time.
We argue that there are reasons to consider York time a viable candidate for a physically meaningful notion of time. We investigate a number of York-time Hamiltonian-reduced cosmological models and explore some technical aspects, such as the non-canonical Poisson structure. We develop York-time Hamiltonian-reduced cosmological perturbation theory by solving the Hamiltonian constraint perturbatively around a homogeneous background for the physical (non-vanishing) Hamiltonian that is the momentum conjugate to the York time parameter. We proceed to canonically quantise the cosmological models and the perturbation theory and discuss a number of conceptual and technical points, such as volume eigenfunctions and the absence of a momentum representation due to the non-standard commutator structure. We propose an alternative, wavefunction-free method of quantisation based on an ensemble of trajectories and a dynamical configuration-space geometry and discuss its application to gravity. We conclude by placing the York-time theories explored in this thesis in the wider context of a search for a satisfactory theory of quantum gravity. ... A similar proposal was made in [6][7][8][9][10][11], where the idea of many interacting classical worlds (MIW) was introduced to explain quantum mechanics. This also posits that the quantum state refers to an ensemble of real, existing systems which interact with each other, only those were posited to be near copies of our universe that all simultaneously exist. ... ... The elements could be particles or events or subsystems, and the relations could be relative position, relative distance, causal relations, etc. We proceed by defining the view of the i'th element. (Footnote: If the ontology posited by the papers [6][7][8][9][10][11] seems extravagant, their proposal had the virtue of a simple form for the inter-ensemble interactions. This inspired me to seek to use such a simple dynamics in the real-ensemble idea.) ...
Article Full-text available Quantum mechanics is derived from the principle that the universe contain as much variety as possible, in the sense of maximizing the distinctiveness of each subsystem. The quantum state of a microscopic system is defined to correspond to an ensemble of subsystems of the universe with identical constituents and similar preparations and environments. A new kind of interaction is posited amongst such similar subsystems, which acts to increase their distinctiveness by extremizing the variety. In the limit of large numbers of similar subsystems this interaction is shown to give rise to Bohm's quantum potential. As a result, the probability distribution for the ensemble is governed by the Schrödinger equation. The measurement problem is naturally and simply solved. Microscopic systems appear statistical because they are members of large ensembles of similar systems which interact non-locally. Macroscopic systems are unique, and are not members of any ensembles of similar systems. Consequently their collective coordinates may evolve deterministically. This proposal could be tested by constructing quantum devices from entangled states of a modest number of qubits which, by their combinatorial complexity, can be expected to have no natural copies. ... Numerically, the prospect of stable, synthetic quantum trajectory calculations for many-D molecular applications will be fully explored, as the benefits here could prove profound. 4,7,11 Our formalism offers flexibility for restricting action extremization to trajectory ensembles of a desired form (e.g., reduced dimensions), thereby providing useful variational approximations. Our exact TDQM equations are PDEs, not single-trajectory ODEs: the entire ensemble must be determined at once. ... ... The other two constants determine which particular TIQM state the trajectory is associated with. A one-to-one correspondence thus exists between trajectory solutions [of Eq.
(4)] and (scattering) TIQM states. 7 ... Article Full-text available We present a self-contained formulation of spin-free nonrelativistic quantum mechanics that makes no use of wavefunctions or complex amplitudes of any kind. Quantum states are represented as ensembles of real-valued quantum trajectories, obtained by extremizing an action and satisfying energy conservation. The theory applies for arbitrary configuration spaces and system dimensionalities. Various beneficial ramifications - theoretical, computational, and interpretational - are discussed. ... The first one is based on the adoption of deterministic Lagrangian trajectories g_L(s), s ∈ I, or Lagrangian Paths (LP). This is analogous to the customary literature approach previously adopted in the context of the Bohmian representation of non-relativistic QM [26][27][28][29][30][31]. ... Article Full-text available The logical structure of quantum gravity (QG) is addressed in the framework of the so-called manifestly covariant approach. This permits displaying its close analogy with the logic of quantum mechanics (QM). More precisely, in QG the conventional 2-way principle of non-contradiction (2-way PNC) holding in Classical Mechanics is shown to be replaced by a 3-way principle (3-way PNC). The third state of logical truth corresponds to quantum indeterminacy/undecidability, i.e., the occurrence of quantum observables with infinite standard deviation. The same principle coincides, incidentally, with the earlier one shown to hold in Part I, in analogous circumstances, for QM. However, this conclusion is found to apply only provided a well-defined manifestly-covariant theory of the gravitational field is adopted both at the classical and quantum levels. Such a choice is crucial. In fact, it makes possible the canonical quantization of the underlying unconstrained Hamiltonian structure of general relativity, according to an approach recently developed by Cremaschini and Tessarotto (2015–2021).
Remarkably, in the semiclassical limit of the theory, Classical Logic is proved to be correctly restored, together with the validity of the conventional 2-way principle. ... Based on Ref. [5], one can show that a stochastic-trajectory formulation of NRQM, which is ontologically equivalent to NRQM, can be achieved in the framework of the so-called Generalized Lagrangian Path (GLP) representation, i.e., the parametrization of the quantum wave-function, and hence of the SE itself, in terms of suitable stochastic GLP trajectories. The GLP representation builds on the Bohmian representation of NRQM, well known in the literature [3,[37][38][39][40][41]], i.e., based on the notion of the Lagrangian Path (LP). Hence, while the latter is based on the LP representation, namely in terms of the LP, i.e., a unique (namely deterministic) solution of the initial-value problem (13), the GLP representation relies, instead, on the notion of the so-called generalized Lagrangian path (GLP), i.e., a suitable family of stochastic configuration-space trajectories. ... Article Full-text available One of the most challenging and fascinating issues in mathematical and theoretical physics concerns the possibility of identifying the logic underlying the so-called quantum universe, i.e., Quantum Mechanics and Quantum Gravity. Besides the sheer difficulty of the problem, inherent in the actual formulation of Quantum Mechanics—and especially of Quantum Gravity—to be used for such a task, a crucial aspect lies in the identification of the appropriate axiomatic logical proposition calculus to be associated with such theories. In this paper the issue of the validity of the conventional principle of non-contradiction (PNC) is called into question and is investigated in the context of non-relativistic Quantum Mechanics.
In the same framework a modified form of the principle, denoted as 3-way PNC, is shown to apply, which relates the axioms of quantum logic to the physical requirements placed by the Heisenberg Indeterminacy Principle. ... There are some similarities between the present model and the work of Madelung [25], and also various works on many-interacting-worlds [26,27,28,29,30,31] for a single quantum particle. ... Preprint Full-text available A local theory of relativistic quantum physics in space-time, which makes all of the same empirical predictions as the conventional delocalized theory in configuration space, is presented and interpreted. Each physical system is characterized by a set of indexed piece-wise single-particle wavefunctions in space-time, each with its own coefficient, and these 'wave-fields' replace entangled states in higher-dimensional spaces. Each wavefunction of a fundamental system describes the motion of a portion of a conserved fluid in space-time, with the fluid decomposing into many classical point particles, each following a world-line and recording a local memory. Local interactions between two systems take the form of local boundary conditions between the differently indexed pieces of those systems' wave-fields, with new indexes encoding each orthogonal outcome of the interaction. The general machinery is introduced, including the local mechanisms for entanglement and interference. The experience of collapse, Born-rule probability, and environmental decoherence are discussed. A number of illustrative examples are given, including a von Neumann measurement and a test of Bell's theorem. ... The second, "quantum mechanics without wavefunctions" approach is also useful, as it enables the quantum wave to be discarded entirely [23][24][25][26][27][28][29][30][31][32]. Instead, the ensemble of quantum trajectories is used to propagate all quantum information on its own, exactly. ...
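As a minimal concrete illustration of such trajectory propagation (a sketch of the general idea only, not code from any of the works cited here), the snippet below integrates the de Broglie-Bohm velocity field for a free 1D Gaussian wavepacket, with ℏ = m = 1. For this state both the velocity field and the exact trajectories are known in closed form: the width spreads as σ(t) = σ₀√(1 + (t/2σ₀²)²), and each trajectory simply scales with it, x(t) = x(0) σ(t)/σ₀.

```python
import math

# Free 1D Gaussian wavepacket, hbar = m = 1, centered at 0, zero mean momentum.
SIGMA0 = 1.0
A = 1.0 / (2.0 * SIGMA0**2)  # spreading-rate parameter

def sigma(t):
    """Analytic width sigma(t) = sigma0 * sqrt(1 + (A t)^2)."""
    return SIGMA0 * math.sqrt(1.0 + (A * t)**2)

def velocity(x, t):
    """Bohmian velocity field v(x,t) = x * d(log sigma)/dt = x A^2 t / (1 + A^2 t^2)."""
    return x * A**2 * t / (1.0 + (A * t)**2)

def trajectory(x0, t_final, dt=1e-3):
    """Integrate a single quantum trajectory dx/dt = v(x,t) with classical RK4."""
    x, t = x0, 0.0
    for _ in range(int(round(t_final / dt))):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

if __name__ == "__main__":
    for x0 in (0.5, 1.0, 2.0):
        x_num = trajectory(x0, t_final=4.0)
        x_exact = x0 * sigma(4.0) / SIGMA0  # analytic scaling result
        print(f"x0={x0}: numerical={x_num:.6f}  exact={x_exact:.6f}")
```

Each trajectory rides the spreading width of the packet; an ensemble of such trajectories, weighted by the initial |Ψ|², carries the same information as the evolving wavefunction for this state.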
Article We re-examine the (inverse) Fermi accelerator problem by resorting to a quantum trajectory description of the dynamics. Quantum trajectories are generated from the time-independent Schrödinger equation solutions, using a unipolar treatment for the (light) confined particle and a bipolar treatment for the (heavy) movable wall. Analytic results are presented for the exact coupled two-dimensional problem, as well as for the adiabatic and mixed quantum-classical approximations. ... In recent years, apart from our own model, several approaches to a quantum mechanics without wavefunctions have been proposed [1][2][3][4][5]. These refer to "many classical worlds" that provide Bohm-type trajectories with certain repulsion effects. ... Article Full-text available In the quest for an understanding of nonlocality with respect to an appropriate ontology, we propose a "cosmological solution". We assume that from the beginning of the universe each point in space has been the location of a scalar field representing a zero-point vacuum energy that nonlocally vibrates at a vast range of different frequencies across the whole universe. A quantum, then, is a nonequilibrium steady state in the form of a "bouncer" coupled resonantly to one of those (particle-type-dependent) frequencies, in remote analogy to the bouncing oil drops on an oscillating oil bath in Couder's experiments. A major difference from the latter analogy is the nonlocal nature of the vacuum oscillations. We show with the examples of double- and $n$-slit interference that the assumed nonlocality of the distribution functions alone suffices to derive the de Broglie-Bohm guiding equation for $N$ particles with otherwise purely classical means. In our model, no influences from configuration space are required, as everything can be described in 3-space.
Importantly, the setting up of an experimental arrangement limits and shapes the forward and osmotic contributions and is described as vacuum landscaping. ... For the tasks indicated above, in close similarity with non-relativistic quantum mechanics (see [22,23]), two choices are in principle available. The first one is based on the introduction of deterministic Lagrangian trajectories {g(s), s ∈ I}, or Lagrangian Paths (LP), analogous to those adopted in the context of the Bohmian representation of non-relativistic quantum mechanics [24][25][26][27][28][29][30][31]. ... Article Full-text available A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as the GLP representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions for the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of the theory achieved by implementing these analytical solutions, the existence of an emergent-gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories. ... Several discrete and continuous versions of this approach have been proposed in the literature [6][7][8][9][10][11][12][13]. Due to the possible interpretation of Q_t^(i), i = 1, …
... Article Recently the Many-Interacting-Worlds (MIW) approach to a quantum theory without wave functions was proposed. This approach leads quite naturally to numerical integrators of the Schrödinger equation. It has been suggested that such integrators may feature advantages over fixed-grid methods for higher numbers of degrees of freedom. However, as yet, little is known about concrete MIW models for more than one spatial dimension and/or more than one particle. In this work we develop the MIW approach further to treat arbitrary degrees of freedom, and provide a systematic study of a corresponding numerical implementation for computing one-particle ground and excited states in one dimension, and ground states in two spatial dimensions. With this step towards the treatment of higher degrees of freedom we hope to stimulate their further study. ... Here it is used to provide alternative ways to inspect or probe quantum systems. Also, the alternative formulation might lead to possibly more efficient methods to perform computer simulations of quantum systems [15,16]. However, Bohm's formulation has not gained general acceptance (yet), and the Copenhagen interpretation remains favored by the majority ... addressing both perturbative and non-perturbative phenomena. ... Article Full-text available The formulation of quantum mechanics developed by Bohm, which can generate well-defined trajectories for the underlying particles in the theory, can equally well be applied to relativistic Quantum Field Theories to generate dynamics for the underlying fields. However, it does not produce trajectories for the particles associated with these fields. Bell has shown that an extension of Bohm's approach can be used to provide dynamics for the fermionic occupation numbers in a relativistic Quantum Field Theory.
In the present paper, Bell's formulation is adopted and elaborated on, with a full account of all the technical detail required to apply his approach to a bosonic quantum field theory on a lattice. This allows an explicit computation of (stochastic) trajectories for massive and massless particles in this theory. Particle creation and annihilation, and their impact on particle propagation, are also illustrated using this model. ... Similarly, the Gaussian-based time-dependent variational principle [46,47] yields classical-like equations of motion. Alternatively, Schiff and Poirier [48] build an effective Lagrangian method that contains higher-order derivatives, which in turn yields classical-looking equations with extra degrees of freedom [49]. Quantum Statistical Potentials (QSPs) [50][51][52] and empirical potentials for molecular systems [53] are purely classical in their form, with effective potentials; many of these methods have been reviewed elsewhere [54]. ... Article Full-text available Effective classical dynamics provide a potentially powerful avenue for modeling large-scale dynamical quantum systems. We have examined the accuracy of a Hamiltonian-based approach that employs effective momentum-dependent potentials (MDPs) within a molecular-dynamics framework through studies of atomic ground states, excited states, ionization energies, and scattering properties of continuum states. Working exclusively with the Kirschbaum-Wilets (KW) formulation with empirical MDPs [C. L. Kirschbaum and L. Wilets, Phys. Rev. A 21, 834 (1980)], optimization leads to very accurate ground-state energies for several elements (e.g., N, F, Ne, Al, S, Ar, and Ca) relative to Hartree-Fock values. The KW MDP parameters obtained are found to be correlated, thereby revealing some degree of transferability in the empirically determined parameters. We have studied excited-state orbits of electron-ion pairs to analyze the consequences of the MDP on the classical Coulomb catastrophe.
From the optimized ground-state energies, we find that the experimental first- and second-ionization energies are fairly well predicted. Finally, electron-ion scattering was examined by comparing the predicted momentum-transfer cross section to a semiclassical phase-shift calculation; optimizing the MDP parameters for the scattering process yielded rather poor results, suggesting a limitation of the use of the KW MDPs for plasmas. ... Here I examine the problem of explaining the symmetry dichotomy within two interpretations of quantum mechanics which clarify the connection between particles and the wave function by including particles following definite trajectories through space in addition to, or in lieu of, the wave function: (1) Bohmian mechanics and (2) a hydrodynamic interpretation that posits a multitude of quantum worlds interacting with one another, which I have called "Newtonian quantum mechanics" (Hall et al., 2014 have called this kind of approach "many interacting worlds"). Versions of this second interpretation have recently been put forward by Tipler (2006); Poirier (2010); Schiff & Poirier (2012); Boström (2012); Boström (2015); Hall et al. (2014); Sebens (2015); it builds on the hydrodynamic approach to quantum mechanics (see Madelung, 1927; Wyatt, 2005; Holland, 2005). Bohmian mechanics and Newtonian quantum mechanics are often called "interpretations" of quantum mechanics, but should really be thought of as distinct physical theories which seek to explain the same body of data (those experiments whose statistics are successfully predicted by the standard methods of non-relativistic quantum mechanics). ...
Article I address the problem of explaining why wave functions for identical particles must be either symmetric or antisymmetric (the symmetry dichotomy) within two interpretations of quantum mechanics which include particles following definite trajectories in addition to, or in lieu of, the wave function: Bohmian mechanics and Newtonian quantum mechanics (a.k.a. many interacting worlds). In both cases I argue that, if the interpretation is formulated properly, the symmetry dichotomy can be derived and need not be postulated. ... It follows the Pauli exclusion principle [13][14][15][16][17]. Even though the Pauli exclusion principle is mainly for wavefunctions [18][19][20][21][22][23][24][25][26][27][28][29][30], the physical mass waves are used as above to discuss the principle. ... Article Full-text available The paper "Unified Field Theory and the Configuration of Particles" opened a new chapter of physics. One of the predictions of the paper is that a proton has an octahedron shape. As physics progresses, it focuses more on invisible particles and the unreachable grand universe, as visible matter is studied theoretically and experimentally. The shape of the invisible proton has a great impact on the topology of the atom. Electron orbits, electron binding energy, the Madelung rules, and Zeeman splitting are associated with the proton's octahedron shape and three nuclear structural axes. An element will be chemically stable if the outermost s and p clouds have eight electrons, which makes the atom a symmetrical cube. ... Also, PWT could be compatible with modifications such as nonlinear Schrödinger equations, should they ever become necessary. Interestingly, in recent decades, the benefits of PWT for numerical implementation and visualisation of quantum processes have also been realised [51], [17], [38], [34]. ... Article Pilot wave theory (PWT), also called de Broglie-Bohm theory or Bohmian Mechanics, is a deterministic nonlocal hidden-variables quantum theory without fundamental uncertainty. It is in agreement with all experimental facts about nonrelativistic quantum mechanics (QM) and furthermore explains its mathematical structure. But in general, PWT describes a nonequilibrium state, admitting new physics beyond standard QM. This essay is concerned with the problem of how to generalise the PWT approach to quantum field theories (QFTs). First, we briefly state the formulation of nonrelativistic PWT and review its major results. We work out the parts of its structure that it shares with the QFT case. Next, we come to the main part: we show how PW QFTs can be constructed both for field and particle ontologies. In this context, we discuss some of the existing models as well as general issues, most importantly the status of Lorentz invariance in the context of quantum nonlocality. The essay concludes with a more speculative outlook in which the potential of PWT for open QFT questions as well as quantum nonequilibrium physics is considered. ...
Our interest in studying the explicit model (1.1) is that rigorous investigation of its limiting behavior becomes feasible. Both Hall et al. (2014) and Sebens (2014) noted the ontological difficulty of a continuum of worlds, a feature of an earlier but closely related hydrodynamical approach due to Holland (2005), Poirier (2010), and Schiff and Poirier (2012). ... Article From its beginning, there have been attempts by physicists to formulate quantum mechanics without requiring the use of wave functions. An interesting recent approach takes the point of view that quantum effects arise solely from the interaction of finitely many classical "worlds." The wave function is then recovered (as a secondary object) from observations of particles in these worlds, without knowing the world from which any particular observation originates. Hall, Deckert and Wiseman [Physical Review X 4 (2014) 041013] have introduced an explicit many-interacting-worlds harmonic oscillator model to provide support for this approach. In this note we provide a proof of their claim that the particle configuration is asymptotically Gaussian, thus matching the ground-state solution of Schrödinger's equation when the number of worlds goes to infinity. ... Newtonian QM is somewhat similar to Boström's (2012) metaworld theory and the proposal in Tipler (2006). Other ideas about how to remove the wave function are explored in Poirier (2010) and Schiff & Poirier (2012), including an intimation of many worlds. ... Article Here I explore a novel no-collapse interpretation of quantum mechanics that combines aspects of two familiar and well-developed alternatives, Bohmian mechanics and the many-worlds interpretation. Despite reproducing the empirical predictions of quantum mechanics, the theory looks surprisingly classical. All there is at the fundamental level are particles interacting via Newtonian forces. There is no wave function. However, there are many worlds. © 2015 by the Philosophy of Science Association. All rights reserved. Preprint Full-text available Since the 1950s mathematical physicists have been working on the construction of a formal mathematical foundation for relativistic quantum theory. In the literature, the view that the axiomatization of the subject is primarily a mathematical problem has been prevalent. This view, however, implicitly asserts that said axiomatization can be achieved without readdressing the basic concepts of quantum theory--an assertion that becomes more implausible the longer the debate on the conceptual foundations of quantum mechanics itself continues. In this work we suggest a new approach to the above problem, which views the non-relativistic theory from a purely statistical perspective: to generalize the quantum-mechanical Born rule for particle position probability to the general-relativistic setting.
The advantages of this approach are that one obtains a statistical theory from the outset and that it is independent of any particular dynamical models and of the symmetries of Minkowski spacetime. Here we develop the smooth 1-body generalization, based on prior contributions mainly due to C. Eckart and J. Ehlers. This generalization respects the general principle of relativity and exposes the assumptions of spacelikeness of the hypersurface and global hyperbolicity of the spacetime as obsolete. We discuss two distinct formulations of the theory, which, borrowing terminology from the non-relativistic analog, we term the Lagrangian and Eulerian pictures. Though the development of the former is the main contribution of this work, under these general conditions neither of the two has received such a comprehensive treatment in the literature before. The Lagrangian picture also opens up a potentially viable path towards the many-body generalization. We further provide a simple example in which the number of bodies is not conserved. Readers interested in the theory of the general-relativistic continuity equation will also find this work to be of value.

Article
Among the numerous concepts of time in quantum scattering, Smith's dwell time (Smith, 1960 [7]) and Eisenbud & Wigner's time delay (Wigner, 1955 [12]) are the best established. The dwell time represents the amount of time spent by the particle inside a given coordinate range (typically a potential-barrier interaction region), while the time delay measures the excess time spent in the interaction region because of the potential. In this paper, we use the exact trajectory-ensemble reformulation of quantum mechanics, recently proposed by one of the authors (Poirier), to study how tunneling and reflection unfold over time in a one-dimensional rectangular potential barrier.
Among other dynamical details, the quantum trajectory approach provides an extremely robust, accurate, and straightforward method for directly computing the dwell time and time delay from a single quantum trajectory. The resultant numerical method is highly efficient and, in the case of the time delay, completely obviates the traditional need to energy-differentiate the scattering phase shift. In particular, the trajectory variables provide a simple expression for the time delay that disentangles the contribution of the self-interference delay. More generally, quantum trajectories provide interesting physical insight into the tunneling process.

Article
Bohmian mechanics is an alternative to standard quantum mechanics that does not suffer from the measurement problem. While it agrees with standard quantum mechanics concerning its experimental predictions, it offers novel types of approximations not suggested by the latter. Of particular interest are semi-classical approximations, where part of the system is treated classically. Bohmian semi-classical approximations have been explored before for systems without electromagnetic interactions. Here, the Rabi model is considered as a simple model involving light-matter interaction. This model describes a single-mode electromagnetic field interacting with a two-level atom. As is well known, the quantum treatment and the semi-classical treatment (where the field is treated classically rather than quantum mechanically) give qualitatively different results. We analyze the Rabi model using a different semi-classical approximation based on Bohmian mechanics. In this approximation, the back-reaction from the two-level atom onto the classical field is mediated by the Bohmian configuration of the two-level atom. We find that the Bohmian semi-classical approximation gives results comparable to the usual mean-field one for the transition between the ground and first excited state.
Both semi-classical approximations tend to reproduce the collapse of the population inversion, but fail to reproduce the revival, which is characteristic of the full quantum description. An example of a higher excited state is also presented, for which the Bohmian approximation does not perform as well.

Preprint
A recent article has treated the question of how to generalize the Born rule from non-relativistic quantum theory to curved spacetimes (Lienert and Tumulka, Lett. Math. Phys. 110, 753 (2019)). The supposed generalization originated in prior works on 'hypersurface Bohm-Dirac models' as well as approaches to relativistic quantum theory developed by Bohm and Hiley. In this comment, we raise three objections to the rule and the broader theory in which it is embedded. In particular, to address the underlying assertion that the Born rule is naturally formulated on a spacelike hypersurface, we provide an analytic example showing that a spacelike hypersurface need not remain spacelike under proper time evolution -- even in the absence of curvature. We finish by proposing an alternative 'curved Born rule' for the one-body case, which overcomes these objections, and in this instance indeed generalizes the one Lienert and Tumulka attempted to justify. Our approach can be generalized to the many-body case, and we expect it to be also of relevance for the general case of a varying number of bodies.

Article
The quantum dynamics of vibrational predissociation of the Ar⋯Br2 triatomic molecule is described within a trajectory-based framework. The Br2 stretching mode is mapped into a set of classical (coupled) harmonic oscillators, associated to each vibrational state of the diatomic molecule. The time evolution of the molecular wave packet along the dissociation coordinate is described within the hydrodynamical formulation of quantum mechanics, specifically using the interacting trajectory representation.
The relatively small number of interacting trajectories required to attain numerical convergence (N = 100) makes the present model very appealing in comparison with other trajectory-based methods. The underlying parameterisation of the density was found to represent accurately the evolution of the projection of the molecular wave packet along the van der Waals mode, from the ground vibrational state into the continuum. The computed lifetime of the predissociating level and the population dynamics are in very good agreement with those observed experimentally.

Article
Confined systems often exhibit unusual behavior regarding their structure, stability, reactivity, bonding, interactions, and dynamics. Quantization is a direct consequence of confinement. Confinement modifies the electronic energy levels, orbitals, electronic shell filling, etc. of a system, thereby affecting its reactivity as well as various response properties as compared to the corresponding unconfined systems. Confinement may force two rare-gas atoms to form a partly covalent bond. Gas storage is facilitated through confinement, and unprecedented optoelectronic properties are observed in certain cases. Some slow reactions get highly accelerated in an appropriate confined environment. In the current Feature Article we analyze these aspects with a special emphasis on the work done by our research group.

Article
A methodology of quantum dynamics based on interacting trajectories, without reference to any wave function, is applied to ultrashort laser ionization of a model hydrogen atom. The pulses are chosen to be so short that the relative phase between the carrier wave and the pulse envelope becomes important. As main results, we show that the trajectory-only approach is capable of correctly describing the large-amplitude motion and energetics of the laser-driven electron and of reproducing carrier-envelope effects on the photoelectron spectra.
It also provides an intuitive picture of the dynamical quantum processes involved.

Article
In this paper a trajectory-based relativistic quantum wave equation is established for extended charged spinless particles subject to the action of the electromagnetic (EM) radiation-reaction (RR) interaction. The quantization pertains to the particle dynamics, in which both the external and self EM fields are treated classically. The new equation proposed here is referred to as the RR quantum wave equation. This is shown to be an evolution equation for a complex scalar quantum wave function and to be realized by a first-order PDE with respect to a quantum proper time s. The latter is uniquely prescribed by representing the RR quantum wave equation in terms of the corresponding quantum hydrodynamic equations and introducing a parametrization in terms of Lagrangian paths associated with the quantum fluid velocity. Besides the explicit proper-time dependence, the theory developed here exhibits a number of additional notable features. First, the wave equation is variational and is consistent with the principle of manifest covariance. Second, it permits the definition of a strictly positive 4-scalar quantum probability density on the Minkowski space-time, in terms of which a flow-invariant probability measure is established. Third, the wave equation is non-local, due to the characteristic EM RR retarded interaction. Fourth, the RR wave equation recovers the Schrödinger equation in the non-relativistic limit and the customary Klein-Gordon wave equation when the EM RR is negligible or null. Finally, the consistency with the classical RR Hamilton-Jacobi equation is established in the semi-classical limit. © 2015, Società Italiana di Fisica and Springer-Verlag Berlin Heidelberg.

Article
The complex quantum Hamilton–Jacobi equation for the complex action is approximately solved by propagating individual Bohmian trajectories in real space.
Equations of motion for the complex action and its spatial derivatives are derived through use of the derivative propagation method. We transform these equations into the arbitrary Lagrangian–Eulerian version with the grid velocity matching the flow velocity of the probability fluid. Setting higher-order derivatives equal to zero, we obtain a truncated system of equations of motion describing the rate of change in the complex action and its spatial derivatives transported along approximate Bohmian trajectories. A set of test trajectories is propagated to determine appropriate initial positions for transmitted trajectories. Computational results for transmitted wave packets and transmission probabilities are presented and analyzed for a one-dimensional Eckart barrier and a two-dimensional system involving either a thick or thin Eckart barrier along the reaction coordinate coupled to a harmonic oscillator.

Article Full-text available
The paper "Unified field theory" (UFT) unified the four fundamental forces with the help of the Torque model. UFT gives a new definition of Physics: "A natural science that involves the study of motion of space-time-energy-force to explain and predict the motion, interaction and configuration of matter." One of the important pieces of matter is the atom. Unfortunately, the configuration of an atom cannot be visually observed. Two of the important accepted theories are the Pauli exclusion principle and the Schrödinger equations. In these two theories, the electron configuration is studied. Contrary to the top-down approach, UFT theory starts from the structure of the proton and neutron, using a bottom-up approach instead. Interestingly, electron orbits, electron binding energy, the Madelung rules, Zeeman splitting and the crystal structure of the metals are associated with the proton's octahedron shape and three nuclear structural axes. An element will be chemically stable if the outermost s and p orbits have eight electrons, which make the atom a symmetrical cubic.
Most importantly, the predictions of atomic configurations in this paper can be validated by the characteristics of the chemical elements, which makes the UFT claims credible. UFT comes a long way from space-time-energy-force to the atom. The conclusions of UFT are more precise and clearer than the existing theories, which have no proper explanation for many rules, such as why eight outer electrons make an element chemically stable, or the exceptions to Madelung's rules. Regardless of the imperfections of the existing atomic theories, many particle-physics theories have no choice but to build on top of atomic theories, mainly the Pauli exclusion principle and the Schrödinger equations. Physics starts to look for answers via ambiguous mathematical equations when the proper clues are missing. Physics issues are different from mathematical issues, as they are physical. Pauli exclusion works well for electron configuration under specific physical conditions, and it is not a general physical principle. Schrödinger's mathematical equations are interpreted differently in UFT. UFT is more physical, as it is built mainly on the concepts of Space, Time, Energy and Force; in other words, UFT is Physics itself. Theory of Everything (ToE), the final theory of Physics, can be simply another name for UFT. This paper connects an additional dot to draw UFT closer to ToE.

Article
A fast and robust time-independent method to calculate thermal rate constants in the deep resonant tunneling regime for scattering reactions is presented. The method is based on the calculation of the cumulative reaction probability which, once integrated, gives the thermal rate constant. We tested our method with both continuous (single and double Eckart barriers) and discontinuous first-derivative potentials (single and double rectangular barriers). Our results show that the presented method is robust enough to deal with extreme resonating conditions such as multiple-barrier potentials.
Finally, the calculation of the thermal rate constant for double Eckart potentials with several quasi-bound states and the comparison with the time-independent log-derivative method are reported. An implementation of the method using the Mathematica Suite is included in the Supporting Information. © 2013 Wiley Periodicals, Inc.

Article Full-text available
If the classical structure of space-time is assumed to define an a priori scenario for the formulation of quantum theory (QT), the coordinate representation of the solutions $\psi(\vec x,t)$ ($\psi(\vec x_1,\ldots,\vec x_N,t)$) of the Schrödinger equation of a quantum system containing one (N) massive scalar particle has a preferred status. Let us consider all of the solutions admitting a multipolar expansion of the probability density function $\rho(\vec x,t)=|\psi(\vec x,t)|^2$ (and more generally of the Wigner function) around a space-time trajectory $\vec x_c(t)$ to be properly selected. For every normalized solution ($\int d^3x\,\rho(\vec x,t)=1$) there is a privileged trajectory implying the vanishing of the dipole moment of the multipolar expansion: it is given by the expectation value of the position operator, $\langle\psi(t)|\hat{\vec x}|\psi(t)\rangle=\vec x_c(t)$. Then, the special subset of solutions $\psi(\vec x,t)$ which satisfy Ehrenfest's theorem (named thereby Ehrenfest monopole wave functions (EMWF)) have the important property that this privileged classical trajectory $\vec x_c(t)$ is determined by a closed Newtonian equation of motion, where the effective force is the Newtonian force plus non-Newtonian terms (of order $\hbar^2$ or higher) depending on the higher multipoles of the probability distribution $\rho$. Note that the superposition of two EMWFs is not an EMWF, a result to be strongly hoped for, given the possible unwanted implications concerning classical spatial perception.
These results can be extended to N-particle systems in such a way that, when N classical trajectories with all the dipole moments vanishing and satisfying Ehrenfest's theorem are associated with the normalized wave functions of the N-body system, we get a natural transition from the 3N-dimensional configuration space to space-time. Moreover, these results can be extended to relativistic quantum mechanics. Consequently, in suitable states of N quantum particles which are EMWF, we get the "emergence" of corresponding "classical particles" following Newton-like trajectories in space-time. Note that all this holds true in the standard framework of quantum mechanics, i.e. assuming, in particular, the validity of Born's rule and the individual-system interpretation of the wave function (no ensemble interpretation). These results are valid without any approximation (like ħ → 0, big quantum numbers, etc.). Moreover, we do not commit ourselves to any specific ontological interpretation of quantum theory (such as, e.g., the Bohmian one). We will argue that, in substantial agreement with Bohr's viewpoint, the macroscopic description of the preparation, certain intermediate steps and the detection of the final outcome of experiments involving massive particles are dominated by these classical "effective" trajectories. This approach can be applied to the point of view of decoherence in the case of a diagonal reduced density matrix $\rho_{\rm red}$ (an improper mixture) depending on the position variables of a massive particle and of a pointer. When both the particle and the pointer wave functions appearing in $\rho_{\rm red}$ are EMWF, the expectation value of the particle and pointer position variables becomes a statistical average on a classical ensemble. In these cases an improper quantum mixture becomes a classical statistical one, thus providing a particular answer to an open problem of decoherence about the emergence of classicality.
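The Ehrenfest property invoked in the abstract above is straightforward to check numerically: for a harmonic potential the non-Newtonian correction terms vanish identically, so the expectation value ⟨x⟩(t) follows the classical trajectory exactly. The following minimal split-operator sketch (in units ħ = m = ω = 1; the grid sizes and the displacement x0 are illustrative choices, not taken from any of the papers listed here) evolves a displaced Gaussian and compares ⟨x⟩(t) with x0·cos(t):

```python
import numpy as np

# Split-operator (Strang splitting) check of Ehrenfest's theorem for a
# displaced Gaussian in a harmonic well, with hbar = m = omega = 1.
# For this potential <x>(t) should follow the classical path x0*cos(t).

N, L = 512, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)        # momentum grid for the FFT

x0 = 1.0                                   # initial displacement
psi = np.exp(-(x - x0)**2/2)               # ground-state-width Gaussian, shifted
psi = psi/np.sqrt(np.sum(np.abs(psi)**2)*dx)

V = 0.5*x**2
dt, steps = 0.01, 200                      # evolve to t = 2.0

expV = np.exp(-0.5j*V*dt)                  # half-step in the potential
expT = np.exp(-0.5j*k**2*dt)               # full kinetic step in k-space
for _ in range(steps):
    psi = expV*psi
    psi = np.fft.ifft(expT*np.fft.fft(psi))
    psi = expV*psi

t = steps*dt
mean_x = np.sum(x*np.abs(psi)**2)*dx
print(abs(mean_x - x0*np.cos(t)))          # small: <x> tracks the classical path
```

The agreement is exact up to the O(dt²) splitting error, which is the content of Ehrenfest's theorem for at-most-quadratic potentials.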
Article Full-text available
This chapter provides a comprehensive overview of the Bohmian formulation of quantum mechanics. It starts with a historical review of the difficulties found by Louis de Broglie, David Bohm, and John S. Bell in convincing the scientific community of the validity and utility of Bohmian mechanics. Then, a formal explanation of Bohmian mechanics for nonrelativistic, single-particle quantum systems is presented. The generalization to many-particle systems, where the exchange interaction and the spin play an important role, is also presented. After that, the measurement process in Bohmian mechanics is discussed. It is emphasized that Bohmian mechanics exactly reproduces the mean values and the temporal and spatial correlations obtained from the standard, that is, the Copenhagen or orthodox, formulation. The ontological characteristics of Bohmian mechanics provide a description of measurements as another type of interaction, without the need for introducing the wave function collapse. Several solved problems are presented at the end of the chapter, giving additional mathematical support to some particular issues. A detailed description of computational algorithms to obtain Bohmian trajectories from the numerical solution of the Schrödinger or the Hamilton-Jacobi equations is presented in an appendix. The motivation of this chapter is twofold: first, as a didactic introduction to the Bohmian formalism, which is used in the subsequent chapters, and second, as a self-contained summary for any newcomer interested in using Bohmian mechanics in his or her daily research activity.

Article Full-text available
We analyze the attosecond electron dynamics in the hydrogen molecular ion driven by an external intense laser field using Bohmian trajectories. To this end, we employ a one-dimensional model of the molecular ion in which the motion of the protons is frozen.
The Bohmian trajectories clearly visualize the electron transfer between the two protons in the field and, in particular, confirm the recently predicted attosecond transient localization of the electron at one of the protons and the related multiple bunches of the ionization current within a half cycle of the laser field. Further analysis based on the quantum trajectories shows that the electron dynamics in the molecular ion can be understood via the phase difference accumulated between the Coulomb wells at the two protons.

Article Full-text available
This paper explores the quantum fluid dynamical (QFD) representation of the time-dependent Schrödinger equation for the motion of a wave packet in a high-dimensional space. A novel alternating-direction technique is utilized to single out each of the many dimensions in the QFD equations. This technique is used to solve the continuity equation for the density and the equation for the convection of the flux for the quantum particle. The ability of the present scheme to efficiently and accurately describe the dynamics of a quantum particle is demonstrated in four dimensions, where analytical results are known. We also apply the technique to the photodissociation of NOCl and NO2, where the systems are reduced to two coordinates by freezing the angular variable at its equilibrium value.

Article Full-text available
In this paper we establish three variational principles that provide new foundations for Nelson's stochastic mechanics in the case of nonrelativistic particles without spin. The resulting variational picture is much richer and of a different nature with respect to the one previously considered in the literature. We first develop two stochastic variational principles whose Hamilton–Jacobi-like equations are precisely the two coupled partial differential equations that are obtained from the Schrödinger equation (Madelung equations).
The two problems are zero-sum, noncooperative, stochastic differential games that are familiar in the control-theory literature. They are solved here by means of a new, absolutely elementary method based on Lagrange functionals. For both games the saddle-point equilibrium solution is given by Nelson's process, and the optimal controls for the two competing players are precisely Nelson's current velocity v and osmotic velocity u, respectively. The first variational principle includes as special cases both the Guerra–Morato variational principle [Phys. Rev. D 27, 1774 (1983)] and Schrödinger's original variational derivation of the time-independent equation.

Article Full-text available
Diffraction and interference of matter waves are key phenomena in quantum mechanics. Here we present some results on particle diffraction in a wide variety of situations, ranging from simple slit experiments to more complicated cases such as atom scattering by corrugated metal surfaces and metal surfaces with simple and isolated adsorbates. The principal novelty of our study is the use of the so-called Bohmian formalism of quantum trajectories. These trajectories are able to satisfactorily reproduce the main features of the experimental results and, more importantly, they provide a causal, intuitive interpretation of the underlying dynamics. In particular, we will focus our attention on: (a) a revision of the concepts of near and far field in undulatory optics; (b) the transition to the classical limit, where it is found that although the quantum and classical diffraction patterns tend to be quite similar, some quantum features are maintained even when the quantum potential goes to zero; and (c) a qualitative description of the scattering of atoms by metal surfaces in the presence of a single adsorbate.

Article Full-text available
Nine formulations of nonrelativistic quantum mechanics are reviewed.
These are the wavefunction, matrix, path-integral, phase-space, density-matrix, second-quantization, variational, pilot-wave, and Hamilton–Jacobi formulations. Also mentioned are the many-worlds and transactional interpretations. The various formulations differ dramatically in mathematical and conceptual overview, yet each one makes identical predictions for all experimental results. © 2002 American Association of Physics Teachers.

Article Full-text available
The method of quantum trajectories proposed by de Broglie and Bohm is applied to the study of atom diffraction by surfaces. As an example, a realistic model for the scattering of He off corrugated Cu is considered. In this way, the final angular distribution of trajectories is obtained by box counting, which is in excellent agreement with the results calculated by standard S-matrix methods of scattering theory. More interestingly, the accumulation of quantum trajectories at the different diffraction peaks is explained in terms of the corresponding quantum potential. This nonlocal potential "guides" the trajectories, causing a transition from a distribution near the surface, which reproduces its shape, to the final diffraction pattern observed in the asymptotic region, far from the diffracting object. These two regimes are homologous to the Fresnel and Fraunhofer regions described in undulatory optics. Finally, the turning points of the quantum trajectories provide a better description of the surface electronic density than the corresponding classical ones, usually employed for this task.

Article Full-text available
This is the first part of what will be a two-part review of distribution functions in physics. Here we deal with fundamentals; the second part will deal with applications. We discuss in detail the properties of the distribution function defined earlier by one of us (EPW) and we derive some new results. Next, we treat various other distribution functions.
Among the latter we emphasize the so-called P distribution, as well as the generalized P distribution, because of their importance in quantum optics.

Article Full-text available
In previous articles [J. Chem. Phys. 121, 4501 (2004); J. Chem. Phys. 124, 034115 (2006); J. Chem. Phys. 124, 034116 (2006); J. Phys. Chem. A 111, 10400 (2007); J. Chem. Phys. 128, 164115 (2008)] an exact quantum, bipolar wave decomposition, $\psi=\psi_+ + \psi_-$, was presented for one-dimensional stationary-state and time-dependent wavepacket dynamics calculations, such that the components $\psi_\pm$ approach their semiclassical WKB analogs in the large-action limit. The corresponding bipolar quantum trajectories are classical-like and well behaved, even when $\psi$ has many nodes or is wildly oscillatory. In this paper, both the stationary-state and wavepacket dynamics theories are generalized for multidimensional systems and applied to several benchmark problems, including collinear H+H2.

Article
The usual interpretation of the quantum theory is self-consistent, but it involves an assumption that cannot be tested experimentally, viz., that the most complete possible specification of an individual system is in terms of a wave function that determines only probable results of actual measurement processes. The only way of investigating the truth of this assumption is by trying to find some other interpretation of the quantum theory in terms of at present "hidden" variables, which in principle determine the precise behavior of an individual system, but which are in practice averaged over in measurements of the types that can now be carried out. In this paper and in a subsequent paper, an interpretation of the quantum theory in terms of just such "hidden" variables is suggested. It is shown that as long as the mathematical theory retains its present general form, this suggested interpretation leads to precisely the same results for all physical processes as does the usual interpretation.
Nevertheless, the suggested interpretation provides a broader conceptual framework than the usual interpretation, because it makes possible a precise and continuous description of all processes, even at the quantum level. This broader conceptual framework allows more general mathematical formulations of the theory than those allowed by the usual interpretation. Now, the usual mathematical formulation seems to lead to insoluble difficulties when it is extrapolated into the domain of distances of the order of $10^{-13}$ cm or less. It is therefore entirely possible that the interpretation suggested here may be needed for the resolution of these difficulties. In any case, the mere possibility of such an interpretation proves that it is not necessary for us to give up a precise, rational, and objective description of individual systems at a quantum level of accuracy.

Article
A potential barrier of the kind studied by Fowler and others may be represented by the analytic function V (Eq. (1)). The Schrödinger equation associated with this potential is soluble in terms of hypergeometric functions, and the coefficient of reflection for electrons approaching the barrier with energy W is calculable (Eq. (15)). The approximate formula $1-\rho=\exp\{-\int \frac{4\pi}{h}[2m(V-W)]^{1/2}\,dx\}$ is shown to agree very well with the exact formula when the width of the barrier is great compared to the de Broglie wave-length of the incident electron, and $W<V_{\max}$.

Article
Despite its enormous practical success, quantum theory is so contrary to intuition that, even after 45 years, the experts themselves still do not all agree what to make of it. The area of disagreement centers primarily around the problem of describing observations. Formally, the result of a measurement is a superposition of vectors, each representing the quantity being observed as having one of its possible values.
The question that has to be answered is how this superposition can be reconciled with the fact that in practice we only observe one value. How is the measuring instrument prodded into making up its mind which value it has observed? Could the solution to the dilemma of indeterminism be a universe in which all possible outcomes of an experiment actually occur?

Article
DOI: https://doi.org/10.1103/RevModPhys.29.454

Article
The Statistical Interpretation of quantum theory is formulated for the purpose of providing a sound interpretation using a minimum of assumptions. Several arguments are advanced in favor of considering the quantum state description to apply only to an ensemble of similarly prepared systems, rather than supposing, as is often done, that it exhaustively represents an individual physical system. Most of the problems associated with the quantum theory of measurement are artifacts of the attempt to maintain the latter interpretation. The introduction of hidden variables to determine the outcome of individual events is fully compatible with the statistical predictions of quantum theory. However, a theorem due to Bell seems to require that any such hidden-variable theory which reproduces all of quantum mechanics exactly (i.e., not merely in some limiting case) must possess a rather pathological character with respect to correlated, but spatially separated, systems.

Article
An adaptive grid approach to a computational study of the scattering of a wavepacket from a repulsive Eckart barrier is described. The grids move in an arbitrary Lagrangian–Eulerian (ALE) framework, and a hybrid of the moving path transform of the Schrödinger equation and the hydrodynamic equations is used for the equations of motion. Boundary grid points follow Lagrangian trajectories and interior grid points follow non-Lagrangian paths. For the hydrodynamic equations, the interior grid points are equally spaced between the evolving Lagrangian boundaries.
For the moving path transform of the Schrödinger equation, the interior grid distribution is determined by the principle of equidistribution, and by using a grid-smoothing technique these grid points trace a path that continuously adapts to reflect the dynamics of the wavepacket. The moving grid technique is robust and allows accurate computations to be obtained with a small number of grid points for wavepacket propagation times exceeding 5 ps.

Article
Although Bohmian mechanics has attracted considerable interest as a causal interpretation of quantum mechanics, it also possesses intrinsic heuristic value, arising from calculational tools and physical insights that are unavailable in "standard" quantum mechanics. We illustrate by examining the behavior of Gaussian harmonic-oscillator wave packets from the Bohmian perspective. By utilizing familiar classical concepts and techniques, we obtain a physically transparent picture of packet behavior. This example provides, at a level accessible to students, a concrete illustration of Bohmian mechanics as a heuristic device that can enhance both understanding and discovery.

Article
A novel method for integrating the time-dependent Schrödinger equation is presented. Hydrodynamic quantum trajectories are used to adaptively define the boundaries and boundary conditions of a fixed grid. The result is a significant reduction in the number of grid points needed to perform accurate calculations. The Eckart barrier, along with uphill and downhill ramp potentials, was used to evaluate the method. Excellent agreement with fixed-boundary grids was obtained for each example. By moving only the boundary points, stability was increased to the level of the full fixed grid.

Article
The de Broglie-Bohm causal (hydrodynamic) formulation of quantum mechanics is computationally implemented in the Lagrangian (moving with the fluid) viewpoint. The quantum potential and force are accurately evaluated with a moving weighted least-squares algorithm.
The quantum trajectory method is then applied to barrier tunneling on smooth potential surfaces. Analysis of the tunneling mechanism leads to a novel and accurate approximation: shortly after the wave packet is launched, completely neglect all quantum terms in the dynamical equations for motion along the tunneling coordinate. Article The demonstrations of von Neumann and others, that quantum mechanics does not permit a hidden variable interpretation, are reconsidered. It is shown that their essential axioms are unreasonable. It is urged that in further examination of this problem an interesting axiom would be that mutually distant systems are independent of one another. Article The quantum hydrodynamic equations associated with the de Broglie–Bohm formulation of quantum mechanics are solved using a meshless method based on a moving least squares approach. An arbitrary Lagrangian–Eulerian frame of reference is used which significantly improves the accuracy and stability of the method when compared to an approach based on a purely Lagrangian frame of reference. A regridding algorithm is implemented which adds and deletes points when necessary in order to maintain accurate and stable calculations. It is shown that unitarity in the time evolution of the quantum wave packet is significantly improved by propagating using averaged fields. As nodes in the reflected wave packet start to form, the quantum potential and force become very large and numerical instabilities occur. By introducing artificial viscosity into the equations of motion, these instabilities can be avoided and the stable propagation of the wave packet for very long times becomes possible. Results are presented for the scattering of a wave packet from a repulsive Eckart barrier. © 2003 American Institute of Physics. Article Rather general expressions are derived which represent the semiclassical time‐dependent propagator as an integral over initial conditions for classical trajectories. 
These allow one to propagate time‐dependent wave functions without searching for special trajectories that satisfy two‐time boundary conditions. In many circumstances, the integral expressions are free of singularities and provide globally valid uniform asymptotic approximations. In special cases, the expressions for the propagators are related to existing semiclassical wave function propagation techniques. More generally, the present expressions suggest a large class of other, potentially useful methods. The behavior of the integral expressions in certain limiting cases is analyzed to obtain simple formulas for the Maslov index that may be used to compute the Van Vleck propagator in a variety of representations. Article The origin of quantum interference characteristic of bound nonlinear systems is investigated within the Bohmian formulation of time-dependent quantum mechanics. By contrast to time-dependent semiclassical theory, whereby interference is a consequence of phase mismatch between distinct classical trajectories, the Bohmian, fully quantum mechanical expression for expectation values has a quasiclassical appearance that does not involve phase factors or cross terms. Numerical calculations reveal that quantum interference in the Bohmian formulation manifests itself directly as sharp spatial/temporal variations of the density surrounding kinky trajectories. These effects are most dramatic in regions where the underlying classical motion exhibits focal points or caustics, and crossing of the Bohmian trajectories is prevented through extremely strong and rapidly varying quantum mechanical forces. These features of Bohmian dynamics, which constitute the hallmark of quantum interference and are ubiquitous in bound nonlinear systems, represent a major source of instability, making the integration of the Bohmian equations extremely demanding in such situations. © 2003 American Institute of Physics. 
Article The quantum trajectory method (QTM) was recently developed to solve the hydrodynamic equations of motion in the Lagrangian, moving-with-the-fluid, picture. In this approach, trajectories are integrated for N fluid elements (particles) moving under the influence of both the force from the potential surface and from the quantum potential. In this study, distributed approximating functionals (DAFs) are used on a uniform grid to compute the necessary derivatives in the equations of motion. Transformations between the physical grid where the particle coordinates are defined and the uniform grid are handled through a Jacobian, which is also computed using DAFs. A difficult problem associated with computing derivatives on finite grids is the edge problem. This is handled effectively by using DAFs within a least squares approach to extrapolate from the known function region into the neighboring regions. The QTM–DAF is then applied to wave packet transmission through a one-dimensional Eckart potential. Emphasis is placed upon computation of the transmitted density and wave function. A problem that develops when part of the wave packet reflects back into the reactant region is avoided in this study by introducing a potential ramp to sweep the reflected particles away from the barrier region. © 2000 American Institute of Physics. Article Numerical solutions of the quantum time-dependent integro-differential Schrödinger equation in a coherent state Husimi representation are investigated. Discretization leads to propagation on a grid of nonorthogonal coherent states without the need to invert an overlap matrix, with the further advantage of a sparse Hamiltonian matrix. Applications are made to the evolution of a Gaussian wave packet in a Morse potential. Propagation on a static rectangular grid is fast and accurate. 
Results are also presented for a moving rectangular grid, guided at its center by a mean classical path, and for a classically guided moving grid of individual coherent states taken from a Monte Carlo ensemble. © 2000 American Institute of Physics. Article A hydrodynamic approach is developed to describe nonadiabatic nuclear dynamics. We derive a hierarchy of hydrodynamic equations which are equivalent to the exact quantum Liouville equation for coupled electronic states. It is shown how the interplay between electronic populations and coherences translates into the coupled dynamics of the corresponding hydrodynamic fields. For the particular case of pure quantum states, the hydrodynamic hierarchy terminates such that the dynamics may be described in terms of the local densities and momentum fields associated with each of the electronic states. © 2001 American Institute of Physics. Article A new method is proposed for computing the time evolution of quantum mechanical wave packets. Equations of motion for the real-valued functions C and S in the complex action = C(r,t)+iS(r,t)/ℏ, with ψ(r,t) = exp(), involve gradients and curvatures of C and S. In previous implementations of the hydrodynamic formulation, various time-consuming fitting techniques of limited accuracy were used to evaluate these derivatives around each fluid element in an evolving ensemble. In this study, equations of motion are developed for the spatial derivatives themselves and a small set of these are integrated along quantum trajectories concurrently with the equations for C and S. Significantly, quantum effects can be included at various orders of approximation, no spatial fitting is involved, there are no basis set expansions, and single quantum trajectories (rather than correlated ensembles) may be propagated, one at a time. Excellent results are obtained when the derivative propagation method is applied to anharmonic potentials involving barrier transmission. 
© 2003 American Institute of Physics. Article Recently, the quantum trajectory method (QTM) has been utilized in solving several quantum mechanical wave packet scattering problems including barrier transmission and electronic nonadiabatic dynamics. By propagating the real-valued action and amplitude functions in the Lagrangian frame, only a fraction of the grid points needed for Eulerian fixed-grid methods are used while still obtaining accurate solutions. Difficulties arise, however, near wave functionnodes and in regions of sharp oscillatory features, and because of this many quantum mechanical problems have not yet been amenable to solution with the QTM. This study proposes a hybrid of both the Lagrangian and Eulerian techniques in what is termed the arbitrary Lagrangian–Eulerian method (ALE). In the ALE method, an additional equation of motion governing the momentum of the grid points is coupled into the quantum hydrodynamicequations. These new “quasi-” Bohmian trajectories can be dynamically adapted to the emergent features of the time evolving hydrodynamic fields and are non-Lagrangian. In this study it is shown that the ALE method applied to an uphill ramp potential that was previously unsolvable by the current Lagrangian QTM not only yields stable transmission probabilities with accuracies comparable to that of a high resolution Eulerian method, but does so with a small number of grid points and for extremely long propagation times. To determine the grid point positions at each new time, an equidistribution method is used that is constructed similar to the stiffness matrix of a classical spring system in equilibrium. Each “smart” spring is dependent on a local function M(x) called the monitor function which can sense gradients or curvatures of the fields surrounding its position. 
To constrain grid points from having zero separation and possible overlap, a new system of equations is derived that includes a minimum separation parameter which prevents this from occurring. Article It is shown that the quantum force in the Bohmian formulation of quantum mechanics can be related to the stability properties of the given trajectory. In turn, the evolution of the stability properties is governed by higher order derivatives of the quantum potential, leading to an infinite hierarchy of coupled differential equations whose solution specifies completely all aspects of the dynamics. Neglecting derivatives of the quantum potential beyond a certain order allows truncation of the hierarchy, leading to approximate Bohmian trajectories. Use of the method in conjunction with Bohmian initial value formulations [J. Chem. Phys. 2003, 119, 60] gives rise to simple position-space representations of observables or time correlation functions. These are analogous to approximate quasiclassical expressions based on the Wigner or Husimi phase space density but involve lower dimensional integrals with smoother integrands and avoid the costly evaluation of phase space transforms. The lowest-order version of the truncated hierarchy can capture large corrections to classical mechanical treatments and yields (with fewer trajectories) results that are somewhat more accurate than those based on quasiclassical phase space treatments. Article The semiclassical (SC) initial value representation (IVR) provides a potentially practical way for adding quantum mechanical effects to classical molecular dynamics (MD) simulations of the dynamics of complex molecular systems (i.e., those with many degrees of freedom). It does this by replacing the nonlinear boundary value problem of semiclassical theory by an average over the initial conditions of classical trajectories. 
This paper reviews the background and rebirth of interest in such approaches and surveys a variety of their recent applications. Special focus is on the ability to treat the dynamics of complex systems, and in this regard, the forward−backward (FB) version of the approach is especially promising. Several examples of the FB-IVR applied to model problems of many degrees of freedom show it to be capable of describing quantum effects quite well and also how these effects are quenched when some of the degrees of freedom are averaged over (“decoherence”). Article A new procedure for developing a coarse-grained representation of the free particle propagator in Cartesian, cylindrical, and spherical polar coordinates is presented. The approach departs from a standard basis representation of the propagator and the state function to which it is applied. Instead, distributed approximating functions (DAFs), developed recently in the context of propagating wave packets in 1-D on an infinite line, are used to create a coarse-grained, highly banded matrix which produces arbitrarily accurate results for the free propagation of wave packets. The new DAF formalism can be used with nonuniform grid spacings. The banded, discretized matrix DAF representation of <x\exp(-iK-tau/h)\x'> can be employed in any wave packet propagation scheme which makes use of the free propagator. A major feature of the DAF expression for the effective free propagator is that the modulus of the x(j),x(j), element is proportional to the Gaussian exp(-sigma(2)(0)(x(j) - x(f)2/2(sigma(4)(0) + h2-tau(2)/m2)). The occurrence of a tau-dependent width is a manifestation of the fundamental spreading of a wave packet as it evolves through time, and it is the minimum possible because the DAF representation of the free propagator is based on evolving the Gaussian generator of the Hermite polynomials. This suggests that the DAFs yield the most highly banded effective free propagator possible. 
The second major feature of the DAF representation of the free propagator is that it can be used for real time dynamics based on Feynman path integrals. This holds the possibility that the real time dynamics for multidimensional systems could be done by Monte Carlo methods with a Gaussian as the importance sampling function. Article We review various methods of deriving expressions for quantum-mechanical quantities in the limit when hslash is small (in comparison with the relevant classical action functions). To start with we treat one-dimensional problems and discuss the derivation of WKB connection formulae (and their reversibility), reflection coefficients, phase shifts, bound state criteria and resonance formulae, employing first the complex method in which the classical turning points are avoided, and secondly the method of comparison equations with the aid of which uniform approximations are derived, which are valid right through the turningpoint regions. The special problems associated with radial equations are also considered. Next we examine semiclassical potential scattering, both for its own sake and also as an example of the three-stage approximation method which must generally be employed when dealing with eigenfunction expansions under semiclassical conditions, when they converge very slowly. Finally, we discuss the derivation of semiclassical expressions for Green functions and energy level densities in very general cases, employing Feynman's path-integral technique and emphasizing the limitations of the results obtained. Throughout the article we stress the fact that all the expressions obtained involve quantities characterizing the families of orbits in the corresponding purely classical problems, while the analytic forms of the quantal expressions depend on the topological properties of these families. This review was completed in February 1972. 
Article Correlations of linear polarizations of pairs of photons have been measured with time-varying analyzers. The analyzer in each leg of the apparatus is an acousto-optical switch followed by two linear polarizers. The switches operate at incommensurate frequencies near 50 MHz. Each analyzer amounts to a polarizer which jumps between two orientations in a time short compared with the photon transit time. The results are in good agreement with quantum mechanical predictions but violate Bell's inequalities by 5 standard deviations. Article In this article, we develop a series of hierarchical mode-coupling equations for the momentum cumulants and moments of the density matrix for a mixed quantum system. Working in the Lagrange representation, we show how these can be used to compute quantum trajectories for dissipative and nondissipative systems. This approach is complementary to the de Broglie–Bohm approach in that the moments evolve along hydrodynamic/Lagrangian paths. In the limit of no dissipation, the paths are the Bohmian paths. However, the “quantum force” in our case is represented in terms of momentum fluctuations and an osmotic pressure. Representative calculations for the relaxation of a harmonic system are presented to illustrate the rapid convergence of the cumulant expansion in the presence of a dissipative environment. © 2002 Wiley Periodicals, Inc. Int J Quantum Chem, 2002 Article The Van Vleck formula is an approximate, semiclassical expression for the quantum propagator. It is the starting point for the derivation of the Gutzwiller trace formula, and through this, a variety of other expansions representing eigenvalues, wave functions, and matrix elements in terms of classical periodic orbits. These are currently among the best and most promising theoretical tools for understanding the asymptotic behavior of quantum systems whose classical analogs are chaotic. 
Nevertheless, there are currently several questions remaining about the meaning and validity of the Van Vleck formula, such as those involving its behavior for long times. This article surveys an important aspect of the Van Vleck formula, namely, the relationship between it and phase space geometry, as revealed by Maslov's theory of wave asymptotics. The geometrical constructions involved are developed with a minimum of mathematical formalism. Article Es wird gezeigt, da man die Schrdingersche Gleichung des Einelektronen-problems in die Form der hydrodynamischen Gleichungen transformieren kann. Article The purpose of this paper was to justify the fact that deterministic corpuscular description of a free particle can be made reconciled with its dual probabilistic wave description in complex space. It is found that the known wave-particle duality can be best manifested in complex space by showing that the wave motion associated with a material particle is just the phenomenon of projection of its complex motion into real space. To verify this new interpretation of matter wave, the equation of motion for a particle moving in complex space is derived first, then it is solved to reveal how the interaction between the real and imaginary motion can produce the particle’s wave motion observed in real space. The derived complex equation of motion for a “free” particle indicates that a so-called free particle is only free from classical potential, but not free from the complex quantum potential. Due to the action of this complex quantum potential, a free particle may move either right or left in a classical way retaining its corpuscular property, or may oscillate between the two directions producing a non-local wave motion. A propagation criterion is derived in this paper to determine when a particle follows a classical corpuscular motion and when it follows a quantum wave motion. 
Based on this new interpretation, the internal mechanism producing polarization of matter wave and the formation of interference fringes can all be understood from the particle’s motion in complex space, and the reason why wave function can be served as a probability density function also becomes clear. Article A method is presented for the construction of asymptotic formulas for the large eigenvalues and the corresponding eigenfunctions of boundary value problems for partial differential equations. It is an adaptation to bounded domains of the method previously devised to deduce the corrected Bohr-Sommerfeld quantum conditions.When applied to the reduced wave equation in various domains for which the exact solutions are known, it yields precisely the asymptotic forms of those solutions. In addition it has been applied to an arbitrary convex plane domain for which the exact solutions are not known. Two types of solutions have been found, called the “whispering gallery” and “bouncing ball” modes. Applications have also been made to the Schrödinger equation. Article A discussion of aspects of probability relevant to the differing interpretations of quantum theory is given, followed by an account of so-called orthodox interpretations of quantum theory that stresses their flexibility and subtlety as well as their problems. An account of ensemble interpretations is then presented, with discussion of the approaches of Einstein and Ballentine, and of later developments, including those interpretations usually called “stochastic”. A general study of ensemble interpretations follows, including investigation of PIV (premeasurement initial values) and minimal interpretations, an account of recent developments, and an introduction to unsharp measurements. Finally, application is made to particular problems, EPR, Schrödinger's cat, the quantum Zeno “paradox”, and Bell's theorem. 
Article A basic aspect of the recently proposed approach to quantum mechanics is that no use of any axiomatic interpretation of the wave function is made. In particular, the quantum potential turns out to be an intrinsic potential energy of the particle, which, similarly to the relativistic rest energy, is never vanishing. This is related to the tunnel effect, a consequence of the fact that the conjugate momentum field is real even in the classically forbidden regions. The quantum stationary Hamilton–Jacobi equation is defined only if the ratio ψD/ψ of two real linearly independent solutions of the Schrödinger equation, and therefore of the trivializing map, is a local homeomorphism of the extended real line into itself, a consequence of the Möbius symmetry of the Schwarzian derivative. In this respect we prove a basic theorem relating the request of continuity at spatial infinity of ψD/ψ, a consequence of the q↔q−1 duality of the Schwarzian derivative, to the existence of solutions of the corresponding Schrödinger equation. As a result, while in the conventional approach one needs the Schrödinger equation with the condition, consequence of the axiomatic interpretation of the wave function, the equivalence principle by itself implies a dynamical equation that does not need any assumption and reproduces both the tunnel effect and energy quantization. Article A justification is given for the use of non-spreading or frozen gaussian packets in dynamics calculations. In this work an initial wavefunction or quantum density operator is expanded in a complete set of grussian wavepackets. It is demonstrated that the time evolution of this wavepacket expansion for the quantum wavefunction or density is correctly given within the approximations employed by the classical propagation of the avarage position and momentum of each gaussian packet, holding the shape of these individual gaussians fixed. 
The semiclassical approximation is employed for the quantum propagator and the stationary phase approximation for certain integrals is utilized in this derivation. This analysis demonstrates that the divergence of the classical trajectories associated with the individual gaussian packets accounts for the changes in shape of the quantum wavefunction or density, as has been suggested on intuitive grounds by Heller. The method should be exact for quadratic potentials and this is verified by explicitly applying it for the harmonic oscillator example.
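Several of the abstracts above build on the Bohmian quantum potential Q = −(ℏ²/2m)(∇²R)/R, where R is the wavefunction amplitude. As a hedged illustration (not taken from any of the cited papers; the units ℏ = m = 1 and the stationary Gaussian amplitude are assumptions of the sketch), a finite-difference evaluation of Q can be checked against the analytic result Q(x) = 1/(4σ²) − x²/(8σ⁴) for R(x) = exp(−x²/(4σ²)):

```python
import numpy as np

# Quantum potential Q = -(hbar^2 / 2m) * R''(x) / R(x) for a Gaussian
# amplitude R(x) = exp(-x^2 / (4 sigma^2)); units hbar = m = 1 (assumption).
hbar = m = 1.0
sigma = 1.0

x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]
R = np.exp(-x**2 / (4.0 * sigma**2))

# Central finite-difference Laplacian of R (interior points only).
lap = (R[2:] - 2.0 * R[1:-1] + R[:-2]) / dx**2
Q_num = -(hbar**2 / (2.0 * m)) * lap / R[1:-1]

# Analytic result for this amplitude: Q(x) = 1/(4 sigma^2) - x^2/(8 sigma^4).
Q_exact = 1.0 / (4.0 * sigma**2) - x[1:-1]**2 / (8.0 * sigma**4)

err = np.max(np.abs(Q_num - Q_exact))
```

For a Gaussian packet the quantum potential is an inverted parabola, which is why the hydrodynamic force pushes trajectories outward and the packet spreads.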
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541344404220581, "perplexity": 643.5711374595111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00470.warc.gz"}
http://mathhelpforum.com/pre-calculus/9285-need-help-writing-math-problem-expression.html
# Math Help - Need Help Writing Math Problem As Expression

1. ## Need Help Writing Math Problem As Expression

Hello, I need help with the following math problem. It needs to be written as an expression.

Problem: An experiment is underway to test the effect of extreme temperatures on a newly developed liquid. Two hours into the experiment the temperature of the liquid is measured to be -17 degrees Celsius. After eight hours of the experiment, the temperature of the liquid is -47 degrees Celsius. Assume that the temperature has been changing at a constant rate throughout the experiment and will continue to do so.

Any help with this problem will be greatly appreciated. Thanks!

David

2. Hello, David!

An experiment tests the effect of extreme temperatures on a liquid. Two hours into the experiment, the temperature of the liquid is -17° C. After eight hours of the experiment, the temperature of the liquid is -47° C. Assume that the temperature has been changing at a constant rate throughout the experiment and will continue to do so. Write an expression for this experiment.

Since the change is constant, we have a linear function: y = ax + b

When x = 2, y = -17: -17 = 2a + b
When x = 8, y = -47: -47 = 8a + b

We have a system of equations:
[1] 2a + b = -17
[2] 8a + b = -47

Subtract [1] from [2]: 6a = -30, so a = -5.
Substitute into [1]: 2(-5) + b = -17, so b = -7.

Therefore, the function is: y = -5x - 7

3. Originally Posted by David7299
Hello, I need help with the following math problem. It needs to be written as an expression.

Problem: An experiment is underway to test the effect of extreme temperatures on a newly developed liquid. Two hours into the experiment the temperature of the liquid is measured to be -17 degrees Celsius. After eight hours of the experiment, the temperature of the liquid is -47 degrees Celsius.
Assume that the temperature has been changing at a constant rate throughout the experiment and will continue to do so....

Hello David,

I'm not quite certain whether I understand this problem right or not. If the rate of change is constant, then you are dealing with an exponential function:

Let T(t) be the temperature at the time t, and T_0 the temperature at the time t = 0:

T(t) = T_0 * e^(k * t)

Be careful not to mix up the different T's. You know: T(2) = -17°C and T(8) = -47°C. Plug these values into the equation:

-17 = T_0 * e^(k * 2)
-47 = T_0 * e^(k * 8)

From the first equation you can calculate T_0 = (-17)/(e^(2k)). Plug this result into the 2nd equation and you'll get k = (ln(47) - ln(17))/6 ≈ 0.16945, and T_0 ≈ -12.112°C.

Now you can assemble all these results into an equation:

T(t) = -12.112 * e^(0.16945 * t)

EB

4. ## Thanks for the help

I want to thank both of you so much. It helped me out a lot. Thanks for taking the time out and helping.

Sincerely,

David
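The linear answer in the thread can be checked mechanically. A small sketch (the thread itself solves the system by hand; using NumPy here is my own choice) that solves the same 2×2 system:

```python
import numpy as np

# Sanity check of the thread's linear solution y = -5x - 7:
# equations 2a + b = -17 and 8a + b = -47 as a 2x2 linear system.
A = np.array([[2.0, 1.0],
              [8.0, 1.0]])
rhs = np.array([-17.0, -47.0])
a, b = np.linalg.solve(A, rhs)
# a comes out as -5 and b as -7, matching y = -5x - 7.
```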
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540274143218994, "perplexity": 797.5974526096184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
https://txcorp.com/images/docs/vsim/latest/VSimReferenceManual/vsimComposerParameters.html
Parameters

The Parameters element is a location for evaluated, user-defined variables that can be used in other elements of the simulation. A parameter is a mathematical combination of constants and other parameters. You may add a new parameter using the Add button under the Elements Tree.

kind (not editable): The kind of constant; a User Defined kind.

description: A descriptive name of the parameter.

expression: This is the user-supplied expression that will be calculated to determine the value of the parameter. It can include any pre-defined Constants as well as real numbers and some functions. Use a "^" to raise numbers to a power. Available functions include:

• abs(x): takes the absolute value of "x".
• rint(x): rounds "x" to an integer.
• sqrt(x): take the square root of "x".
• sin(x): take the sine of "x", where "x" is in radians.
• cos(x): take the cosine of "x", where "x" is in radians.
• tan(x): take the tangent of "x", where "x" is in radians.
• asin(x): take the arcsine of "x"; the result is in radians.
• acos(x): take the arccosine of "x"; the result is in radians.
• atan(x): take the arctangent of "x"; the result is in radians.
• sinh(x): take the hyperbolic sine of "x".
• cosh(x): take the hyperbolic cosine of "x".
• tanh(x): take the hyperbolic tangent of "x".
• log(x): take the natural log of "x".
• log10(x): take the base 10 log of "x".
• exp(x): raise e (Euler's number) to the power "x".

value: The VSim calculated value of the expression.
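The expression field described above maps naturally onto a small whitelist-based evaluator. The following Python sketch is illustrative only: it is not VSim's actual parser, and the `evaluate` function and its `constants` argument are hypothetical names of my own. It does, however, mirror the documented behavior: "^" raises to a power, and only the listed functions are available.

```python
import math

# Illustrative evaluator in the spirit of the 'expression' field above.
# NOT VSim's parser; 'evaluate' and 'constants' are invented names.
ALLOWED = {name: getattr(math, name) for name in
           ("sqrt", "sin", "cos", "tan", "asin", "acos", "atan",
            "sinh", "cosh", "tanh", "log", "log10", "exp")}
ALLOWED["abs"] = abs
ALLOWED["rint"] = lambda x: float(round(x))  # round to nearest integer

def evaluate(expression, constants=None):
    """Evaluate a parameter expression against predefined constants."""
    names = dict(ALLOWED)
    names.update(constants or {})
    # Map '^' to Python's power operator; expose only whitelisted names.
    return eval(expression.replace("^", "**"), {"__builtins__": {}}, names)

value = evaluate("2^3 + sqrt(PI)", constants={"PI": math.pi})
```

A production evaluator would use a real expression parser rather than `eval`; the whitelisted-namespace trick is only a compact way to show the documented function set.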
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618039727210999, "perplexity": 2683.414800997266}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00538.warc.gz"}
https://pypi.org/project/scikit-gpuppy/0.9.1/
Gaussian Process Uncertainty Propagation with PYthon

Project Description

https://github.com/snphbaum/scikit-gpuppy

This package provides means for modeling functions and simulations using Gaussian processes (aka Kriging, Gaussian random fields, Gaussian random functions). Additionally, uncertainty can be propagated through the Gaussian processes.

.. note:: The Gaussian process regression and uncertainty propagation are based on Girard's thesis [#]_. An extension to speed up GP regression is based on Snelson's thesis [#]_.

.. warning:: The extension based on Snelson's work is already usable but not as fast as it should be. Additionally, the uncertainty propagation does not yet work with this extension.

An additional extension for Inverse Uncertainty Propagation is based on my paper (and upcoming PhD thesis) [#]_.

A simulation is seen as a function :math:`f(x)+\epsilon` (:math:`x \in \mathbb{R}^n`) with additional random error :math:`\epsilon \sim \mathcal{N}(0,v)`. This optional error is due to the stochastic nature of most simulations.

The *GaussianProcess* module uses regression to model the simulation as a Gaussian process. The *UncertaintyPropagation* module allows for propagating uncertainty :math:`x \sim \mathcal{N}(\mu,\Sigma)` through the Gaussian process to estimate the output uncertainty of the simulation. The *FFNI* and *TaylorPropagation* modules provide classes for propagating uncertainty through deterministic functions. The *InverseUncertaintyPropagation* module allows for propagating the desired output uncertainty of the simulation backwards through the Gaussian process. This assumes that the components of the input :math:`x` are estimated from samples using maximum likelihood estimators.
Then, the inverse uncertainty propagation calculates the optimal sample sizes for estimating :math:x that lead to the desired output uncertainty of the simulation. .. [#] Girard, A. Approximate Methods for Propagation of Uncertainty with Gaussian Process Models, University of Glasgow, 2004 .. [#] Snelson, E. L. Flexible and efficient Gaussian process models for machine learning, Gatsby Computational Neuroscience Unit, University College London, 2007 .. [#] Baumgaertel, P.; Endler, G.; Wahl, A. M. & Lenz, R. Inverse Uncertainty Propagation for Demand Driven Data Acquisition, Proceedings of the 2014 Winter Simulation Conference, IEEE Press, 2014, 710-721 Release History 0.9.3 0.9.2 This version 0.9.1
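The core idea (fit a GP surrogate to a simulation, then push an uncertain input through it) can be sketched without scikit-gpuppy itself. Below is a minimal Monte Carlo illustration using scikit-learn's ``GaussianProcessRegressor``; it is our own sketch, not scikit-gpuppy's API, and a sampling approximation rather than Girard's analytic propagation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Fit a GP surrogate to a noisy "simulation" f(x) = sin(x) + eps
X = np.linspace(0, 6, 40).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.05, X.shape[0])
gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.05**2).fit(X, y)

# Propagate an uncertain input x ~ N(mu, sigma^2) by sampling
mu, sigma = 2.0, 0.3
xs = rng.normal(mu, sigma, 2000).reshape(-1, 1)
mean_pred, std_pred = gp.predict(xs, return_std=True)

# Output mean E[f(x)]; output variance = Var[posterior mean] + E[posterior var]
out_mean = mean_pred.mean()
out_var = mean_pred.var() + (std_pred**2).mean()
print(out_mean, out_var)
```

The analytic methods in the package avoid the sampling error of this Monte Carlo estimate, but the quantities being computed are the same.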
https://www.computer.org/csdl/proceedings/pci/2010/4172/00/4172a153-abs.html
2012 16th Panhellenic Conference on Informatics (2010), Tripoli, Greece, Sept. 10-12, 2010. ISBN: 978-0-7695-4172-3, pp. 153-157

ABSTRACT The target of this paper is to study the introduced generalized entropy type measures of information and the γ-order generalized Gaussian (or hyper multivariate normal) distribution relative to it. This three-parameter distribution plays an important role in the new generalized information measure and extends the well-known normal distribution. For the γ-order generalized Gaussian, the Kullback-Leibler information is evaluated; it reduces to the well-known expression for the normal distribution when the generalized Gaussian reduces to the typical Gaussian distribution.

INDEX TERMS Entropy power, Information measures, Kullback-Leibler information

CITATION Thomas L. Toulias, Christos P. Kitsos, "Evaluating Information Measures for the Gamma-order Multivariate Gaussian: Entropy Type Information Measures", 2012 16th Panhellenic Conference on Informatics, vol. 00, pp. 153-157, 2010, doi:10.1109/PCI.2010.10
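As a point of reference, the "well known" Kullback-Leibler divergence between two multivariate normals, the special case the paper's γ-order result reduces to, has a standard closed form. The snippet below implements that textbook formula and is not taken from the paper:

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL( N(mu0,S0) || N(mu1,S1) ) for k-dimensional Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + d @ S1_inv @ d
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical distributions have zero divergence
I = np.eye(2)
print(kl_mvn(np.zeros(2), I, np.zeros(2), I))  # → 0.0
```

For two unit-variance 1-D Gaussians a unit mean shift gives KL = 1/2, a handy sanity check.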
http://math.stackexchange.com/questions/6645/cube-root-inequality
# Cube Root Inequality

How do you prove the inequality \begin{equation*} |\sqrt[3]{x} - \sqrt[3]{y}| \leq \sqrt[3]{|x-y|}? \end{equation*}

- I corrected your TeX so that the minus sign was inside dollar signs. –  Mariano Suárez-Alvarez Oct 13 '10 at 2:39
This is false. $x=8$, $y=-1$. –  Aryabhata Oct 13 '10 at 2:49
Do you require the cube roots to be real as well, i.e. $x,y \geq 0$? –  WWright Oct 13 '10 at 2:52
Following up on Moron's comment, I guess you are missing the condition that $x$ and $y$ have the same sign (in which case you may as well assume that they are both positive). In this case the inequality is true, and the now-deleted comment gave one good method of solution: cube both sides, and compare them to get the desired inequality. (It will help to assume that $x > y$, as you may; otherwise switch them, and nothing changes. Also, for psychological purposes, it may help to write $x^{1/3} = a$ and $y^{1/3} = b$, as the deleted comment suggested.) –  Matt E Oct 13 '10 at 2:55
It's not true. Try $x=1$ and $y=-1$. If you assume $x$ and $y$ have the same sign, and you might as well assume $x\gt y\gt 0$, then it reduces to showing $(x-y)^3\leq x^3-y^3$ (WLOG replacing the variables in the original inequality with cubes and cubing both sides). This is true because $x\gt y\gt 0$ implies $3xy^2\lt 3x^2y$. Moron's and Matt E's comments were posted within a few minutes of my answer. Moron noticed what happens when $x$ and $y$ have opposite signs before I posted, and Matt E gave a better-written explanation for the same-sign case. –  Jonas Meyer Oct 13 '10 at 3:16
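A quick numeric sanity check of both observations in the thread (failure for mixed signs, validity for same signs); this is illustration, not proof:

```python
# Numeric check: the inequality holds for same-sign x, y,
# but fails when the signs differ (e.g. x = 8, y = -1).
def cbrt(t):
    # real cube root, defined for negative inputs too
    return t ** (1/3) if t >= 0 else -((-t) ** (1/3))

def holds(x, y):
    return abs(cbrt(x) - cbrt(y)) <= cbrt(abs(x - y)) + 1e-12

assert holds(8.0, 1.0)          # same sign: holds
assert holds(0.3, 5.7)          # same sign: holds
assert not holds(8.0, -1.0)     # mixed signs: |2 - (-1)| = 3 > 9^(1/3)
```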
https://mechanicalland.com/newtons-law-of-cooling-calculator/
# Newton’s Law of Cooling Calculator for Convection

Convection is a very important mechanism of heat transfer, so engineers need to make many convection calculations: a great number of systems depend on convection heat transfer. In its most basic form, convection heat transfer is calculated with Newton’s law of cooling. We prepared a Newton’s law of cooling calculator for you.

## How to Use Newton’s Law of Cooling Calculator?

You will use this equation many times in convection heat transfer calculations, and the calculator below makes those calculations easy. Using it is very simple. First, enter the values for convection:

• h: Heat transfer coefficient between the surface of the body and the environment. The unit is W/(m²·°C) or Btu/(hr·ft²·°F).
• Sa: Surface area where convection heat transfer takes place. The unit is m² or ft².
• Ts: Temperature of the surface where heat transfer takes place. The unit is °C or °F.
• Te: Temperature of the environment. The unit is °C or °F.
• Qconvection: Heat transfer rate between the surface of the body and the environment. The unit is W or Btu/hr.

Then click the ‘Calculate!’ button to see the convection heat transfer between the surface and the environment. If you want to make further calculations, click the ‘Reset’ button and re-enter the values.

## What is Convection Heat Transfer?

Convection heat transfer is one of the three heat transfer mechanisms; the other two are conduction and radiation. In convection, heat is transferred by the movement of fluid molecules, which transport the heat energy.

## Equation

We use Newton’s law of cooling in convection heat transfer calculations. The equation is:

Qconvection = h · Sa · (Ts − Te)

According to this formula:

• With an increasing heat transfer coefficient, the total convection heat transfer increases.
• The convection heat transfer is directly proportional to the surface area of the body.
• If the surface temperature increases, the convection between the surface and the environment increases.
• With increasing environment temperature, the total convection heat transfer decreases.
• In general, the larger the temperature difference between the body and the environment, the higher the convection heat transfer.

## Conclusion

These are the basic aspects of the Newton’s law of cooling calculator. It is very easy to use in many kinds of engineering calculations; for example, you can use it for the heat transfer between the surface of a thermos and the environment.

Above all, Mechanicalland does not accept any responsibility for calculations made by users in its calculators. A good engineer must check calculations again and again. You can find many more calculators like this in Mechanicalland! Finally, do not forget to leave your comments and questions below about the Newton’s law of cooling calculator. Your precious feedback is very important to us.
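The calculator's arithmetic reduces to a one-line formula. A minimal Python sketch (the function and argument names are our own, not part of any Mechanicalland tool):

```python
def convection_heat_transfer(h, surface_area, t_surface, t_env):
    """Newton's law of cooling: Q = h * Sa * (Ts - Te).

    h            heat transfer coefficient, W/(m^2 * degC)
    surface_area area exposed to the fluid, m^2
    t_surface    surface temperature, degC
    t_env        environment temperature, degC
    Returns the heat transfer rate Q in watts (negative Q means the
    environment heats the surface).
    """
    return h * surface_area * (t_surface - t_env)

# A 2 m^2 plate at 80 degC in 20 degC air with h = 25 W/(m^2 * degC)
print(convection_heat_transfer(25, 2.0, 80, 20))  # → 3000.0
```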
https://neurips.cc/Conferences/2018/ScheduleMultitrack?event=11785
Poster

Statistical mechanics of low-rank tensor decomposition

Jonathan Kadmon · Surya Ganguli

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 210 #60

Often, large, high-dimensional datasets collected across multiple modalities can be organized as a higher-order tensor. Low-rank tensor decomposition then arises as a powerful and widely used tool to discover simple low-dimensional structures underlying such data. However, we currently lack a theoretical understanding of the algorithmic behavior of low-rank tensor decompositions. We derive Bayesian approximate message passing (AMP) algorithms for recovering arbitrarily shaped low-rank tensors buried within noise, and we employ dynamic mean field theory to precisely characterize their performance. Our theory reveals the existence of phase transitions between easy, hard and impossible inference regimes, and displays an excellent match with simulations. Moreover, it reveals several qualitative surprises compared to the behavior of symmetric, cubic tensor decomposition. Finally, we compare our AMP algorithm to the most commonly used algorithm, alternating least squares (ALS), and demonstrate that AMP significantly outperforms ALS in the presence of noise.
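Alternating least squares, the baseline algorithm mentioned in the abstract, is easy to sketch in the rank-1 case: hold two factors fixed, solve for the third in closed form, repeat. A minimal NumPy illustration on a noiseless synthetic tensor (our own sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth rank-1 tensor T = a (outer) b (outer) c
a, b, c = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
T = np.einsum('i,j,k->ijk', a, b, c)

# Rank-1 ALS: each update is the least-squares solution for one factor
ah, bh, ch = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
for _ in range(50):
    ah = np.einsum('ijk,j,k->i', T, bh, ch) / ((bh @ bh) * (ch @ ch))
    bh = np.einsum('ijk,i,k->j', T, ah, ch) / ((ah @ ah) * (ch @ ch))
    ch = np.einsum('ijk,i,j->k', T, ah, bh) / ((ah @ ah) * (bh @ bh))

err = np.linalg.norm(T - np.einsum('i,j,k->ijk', ah, bh, ch))
print(err)  # close to 0 for this noiseless example
```

In the noiseless rank-1 setting a single sweep already recovers the tensor exactly (up to rescaling of the factors); the hard regimes the paper studies appear once noise is added.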
https://crypto.stackexchange.com/questions/53597/how-did-someone-discover-n-order-of-g-for-secp256k1
# How did someone discover N, order of G for SECP256k1?

Could someone please explain, in simple and easy terms, how the creators derived (or should have derived) N, the order of G for SECP256k1?

It's my understanding it's derived from

p = FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE FFFFFC2F

aka p = 115792089237316195423570985008687907853269984665640564039457584007908834671663

and that the value itself of N is

N = FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE BAAEDCE6 AF48A03B BFD25E8C D0364141

aka N = 115792089237316195423570985008687907852837564279074904382605163141518161494337

I understand that (N-1) is the total number of valid points on the curve, but how was that determined without going through and trying to count them? I have seen similar questions, but the answers either use terminology I don't understand and/or don't contain a "simple" and "easy" explanation.

I understand that (N-1) is the total number of valid points on the curve

Actually no. $N$ is the number of points on the curve; $N-1$ is the number of non-trivial points, where the point at infinity $\mathcal O$ is the trivial point (because it is essentially the $0$ for curves).

Could someone please explain, in simple and easy terms, how the creators did (or should have) derived the N, order of G for SECP256k1?

First, let's assume we already know the order of the curve, i.e. the number of points on it. Let's call this number $n$. It turns out that for secp256k1, $n$ is a prime. Now we know by Lagrange's Theorem that the order of any subgroup (like the subgroup generated by $G$) of secp256k1 must divide $n$, so it must have either $1$ or $n$ elements. But the only element that generates a subgroup with one element is $\mathcal O$ (because $\mathcal O+\mathcal O=\mathcal O$), and thus $G$ must have order $n$.

But how do we find the order of the curve?
It turns out that mathematicians have already solved this problem and found an algorithm to efficiently count the number of points on a curve without actually visiting every single one; it's called Schoof's algorithm. Now, the details of this algorithm are very complex, so I won't go into them here. If you really want to know, you can read the linked article and the references given at the bottom of the page.

• I think that the above answer is not correct. The infinite point "O" is not a point on the curve. It is just an abstract, or imaginary, point; just a mathematical construct supposed to act like an identity element in the addition group. Hey, can you give us the X and Y coordinates of this so-called "trivial" point O? – Yanghwan Lim Feb 7 '18 at 19:33
• the (projective) coordinates of O = [0:0:1] – 111 Feb 7 '18 at 20:25
• @Yangwhan_Lim I think I get what you are trying to say, but there was much more (useful) information in his answer. – Mine Feb 7 '18 at 21:34
• The ‘infinite point “O”’ is a point on the curve. It just doesn't have a representation in affine coordinates, which is part of why affine coordinates are a pain to work with and most sensible algebraic geometers work in some projective coordinate system. – Squeamish Ossifrage Feb 7 '18 at 22:54
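Both facts used in the answer (that $G$ lies on the curve and that $nG=\mathcal O$) can be checked directly in a few lines of Python. The toy double-and-add implementation below uses the published secp256k1 parameters and represents $\mathcal O$ as `None`; it is for illustration only, since real implementations use constant-time projective-coordinate code:

```python
# Toy affine-coordinate arithmetic on secp256k1: y^2 = x^3 + 7 over F_p
p = 2**256 - 2**32 - 977                      # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                            # P + (-P) = O
    if P == Q:
        m = 3 * x1 * x1 * pow(2 * y1, -1, p)   # tangent slope (a = 0)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p)    # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

print((G[1]**2 - G[0]**3 - 7) % p == 0)   # → True: G is on the curve
print(mul(n, G) is None)                  # → True: G has order n
```

Requires Python 3.8+ for the modular inverse `pow(x, -1, p)`.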
http://openstudy.com/updates/50e5e94de4b058681f3f1f1b
## ERoseM

A projectile is launched upward from the ground at 60 m/s. a. How long will it take to reach its highest point? b.) How high does it go? c.) How long does it take to hit the ground?

1. Yahoo!: Time = u sin 60 /g, H = u^2 sin^2 60 /2g, T = 2u sin 60/g

2. ERoseM: Thank you, but how did you get time, and what does the u stand for?

3. ERoseM: I'm sorry, I don't understand the u?

4. Yahoo!: u = initial velocity

5. ERoseM: Thank you, but I'm still confused.

6. wio: Okay, we start out with the fundamental idea that everything falls at constant acceleration. We'll call this constant acceleration (with respect to time) $$a$$. We'll plug in the real value later.

7. wio: The velocity is going to be the antiderivative: $v(t) = \int a\, dt = at+C$ We need to find the constant of integration. Suppose the initial velocity is $$v_0$$ (we'll plug it in later). $v(0) = v_0 = a(0) +C = C$ So our constant of integration is just the initial velocity $$v_0$$. We now have the following equation: $v(t) = at + v_0$ which is enough to answer part a.

8. wio: a. How long will it take to reach its highest point? <-- Well, the highest point is going to be the maximum position. Remember, to find the maximum of a function, you need to find the critical number (where the derivative equals 0). Since velocity is the derivative of position with respect to time, we just need to set it to 0 to find critical numbers: $0 = at + v_0 \implies at = -v_0 \implies t = -v_0/a$ So the amount of time it will take is just the initial velocity divided by acceleration. Don't worry about that negative sign: since the initial velocity is upward and gravity is downward, the negatives will cancel out, and we will get a positive time value.

9. wio: b.) How high does it go? <-- For this we need our position function. Again we find the antiderivative. $x(t) = \int v(t)\,dt = \int (at+v_0)\,dt = \frac{1}{2}at^2+v_0t+C$ We have another constant of integration. Again, suppose an initial position of $$x_0$$. $x(0) = x_0 = \frac{1}{2}a(0)^2 + v_0(0) + C = C$ So once again, our constant of integration is just the initial value. $x(t) = \frac{1}{2}at^2+v_0t+x_0$ We know it will reach its highest value at $$t = -v_0/a$$, so let's plug that in: $\begin{split} x(-v_0/a) &= \frac{1}{2}a(-v_0/a)^2+v_0(-v_0/a)+x_0 \\ &= \frac{1}{2}\frac{v_0^2}{a}-\frac{v_0^2}{a}+x_0 \\ &= -\frac{1}{2}\frac{v_0^2}{a}+x_0 \\ &= -\frac{v_0^2}{2a}+x_0 \end{split}$ So we have our maximum height: $x_{max} = -\frac{v_0^2}{2a}+x_0$

10. ERoseM: Okay, I understand that.

11. ERoseM: Now how do I apply real numbers?

12. ERoseM: Like where does the sixty actually belong?

13. wio: c.) How long does it take to hit the ground? <-- So we will call the position of the ground $$x_g$$. We need to solve for $$t$$ given $$x(t) = x_g$$: $x_g = \frac{1}{2}at^2 + v_0t +x_0 \implies 0 = \frac{1}{2}at^2 + v_0t +x_0 -x_g$ This is just the root of a quadratic equation. $\begin{split} t &= \frac{-v_0\pm \sqrt{v_0^2-4(\frac{1}{2}a)(x_0-x_g)}}{2(\frac{1}{2}a)} \\ &= \frac{-v_0\pm \sqrt{v_0^2-2a(x_0-x_g)}}{a} \end{split}$ This gives us two times. The earlier time is just when the projectile left the ground; we want the later time, when it hits the ground. Since the $$\sqrt{\ }$$ is going to yield a positive value, we set the $$\pm$$ to $$+$$ to get the later time. $t=\frac{-v_0 + \sqrt{v_0^2-2a(x_0-x_g)}}{a}$

14. wio: Now we can plug in actual numbers. What we need are $a, v_0, x_0, x_g$.

15. wio: Let's say that upward means 'positive' position, and downward means 'negative' position. $$a$$ is gravitational acceleration downward: $$-9.8\,m/s^2$$. $$v_0$$ is our initial velocity: $$60\,m/s$$. $$x_0$$ is our initial position: $$0\,m$$. $$x_g$$ is the position of the ground: $$0\,m$$.

16. wio: Taking a second look... I think we should try both signs of $$\pm$$ rather than just the $$+$$; my assumption that the $$+$$ version would be higher was incorrect.

17. wio: Recap: $\begin{array}{rcl} t_{top} &=& -v_0/a \\ x_{top} &=& -\frac{v_0^2}{2a}+x_0 \\ t_{ground} &=& \max\left(\frac{-v_0+ \sqrt{v_0^2-2a(x_0-x_g)}}{a}, \frac{-v_0- \sqrt{v_0^2-2a(x_0-x_g)}}{a} \right) \\ a &=& -9.8\,m/s^2 \\ v_0 &=& 60\,m/s \\ x_0 &=& 0\,m \\ x_g &=& 0\,m \end{array}$
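Plugging the recap's numbers into wio's formulas (our own script; a = -9.8 m/s², v0 = 60 m/s, x0 = xg = 0) gives the concrete answers ERoseM asked for:

```python
import math

a, v0, x0, xg = -9.8, 60.0, 0.0, 0.0   # m/s^2, m/s, m, m

t_top = -v0 / a                          # time to the highest point
x_top = -v0**2 / (2 * a) + x0            # maximum height
disc = math.sqrt(v0**2 - 2 * a * (x0 - xg))
t_ground = max((-v0 + disc) / a, (-v0 - disc) / a)

print(round(t_top, 2), round(x_top, 2), round(t_ground, 2))
# → 6.12 183.67 12.24
```

So the projectile peaks after about 6.12 s at about 183.7 m, and returns to the ground after about 12.24 s (exactly twice the rise time, as expected for launch and landing at the same height).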
https://math.stackexchange.com/questions/1195967/endomorphism-ring-of-a-vector-space
# Endomorphism ring of a vector space

I have a proof of this, but I'm not following it; please help me.

If $End_{K}(V)$ is a simple ring, then $V$ is a finite-dimensional vector space.

Proof: Suppose $V$ is not a finite-dimensional vector space over the field $K$. Define $I=\{f \in End_{K}(V)\mid\dim_{K}f(V)<\infty\}$. Then $0 \in I$ and $1\notin I$, so $(0)\subseteq I \neq End_{K}(V)$. Now, by showing that $I$ is a two-sided ideal, we get a contradiction. I am not getting this step from my textbook.

If the images of $f$ and $g$ are finite dimensional, show that the images of $-f$ and $f+g$ are finite dimensional also. For any $h\in End_K(V)$, give a reason why $fh$ has finite dimensional image. Finally, why should the image of $hf$ be finite dimensional?

• How can we prove that if $V$ is finite dimensional, then $End(V)$ is a simple ring? – user197636 Mar 18 '15 at 20:11
• @user197636 You can show it's isomorphic to the full ring of square matrices $M_n(F)$ for some $n$, and then it's simple for this reason. – rschwieb Mar 19 '15 at 1:07
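Filling in the hint with a short sketch (ours, not from the original thread): for $f,g\in I$ and $h\in \mathrm{End}_K(V)$,

```latex
% Why I = { f : dim f(V) < infinity } is a proper two-sided ideal:
\begin{align*}
(f+g)(V) &\subseteq f(V) + g(V)
  &&\Rightarrow\ \dim (f+g)(V) \le \dim f(V) + \dim g(V) < \infty,\\
(fh)(V) &= f(h(V)) \subseteq f(V)
  &&\Rightarrow\ \dim (fh)(V) \le \dim f(V) < \infty,\\
(hf)(V) &= h(f(V))
  &&\Rightarrow\ \dim (hf)(V) \le \dim f(V) < \infty,
\end{align*}
% since a linear map cannot increase dimension. Hence I is a two-sided
% ideal containing 0 but not 1 (as dim V is infinite), so End_K(V) has a
% proper nonzero ideal, contradicting simplicity.
```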
https://www.helsinki.fi/en/news/science-news/finnish-researchers-have-discovered-a-new-type-of-matter-inside-neutron-stars
# Finnish researchers have discovered a new type of matter inside neutron stars 1.6.2020 A Finnish research group has found strong evidence for the presence of exotic quark matter inside the cores of the largest neutron stars in existence. The conclusion was reached by combining recent results from theoretical particle and nuclear physics to measurements of gravitational waves from neutron star collisions. All normal matter surrounding us is composed of atoms, whose dense nuclei, comprising protons and neutrons, are surrounded by negatively charged electrons. Inside what are called neutron stars, atomic matter is, however, known to collapse into immensely dense nuclear matter, in which the neutrons and protons are packed together so tightly that the entire star can be considered one single enormous nucleus. Up until now, it has remained unclear whether inside the cores of the most massive neutron stars nuclear matter collapses into an even more exotic state called quark matter, in which the nuclei themselves no longer exist. Researchers from the University of Helsinki now claim that the answer to this question is yes. The new results were published in the prestigious journal Nature Physics. “Confirming the existence of quark cores inside neutron stars has been one of the most important goals of neutron star physics ever since this possibility was first entertained roughly 40 years ago,” says Associate Professor Aleksi Vuorinen from the University of Helsinki’s Department of Physics and Helsinki Institute of Physics. ## Existence very likely With even large-scale simulations run on supercomputers unable to determine the fate of nuclear matter inside neutron stars, the Finnish research group proposed a new approach to the problem. They realised that by combining recent findings from theoretical particle and nuclear physics with astrophysical measurements, it might be possible to deduce the characteristics and identity of matter residing inside neutron stars. 
In addition to Vuorinen, the group includes doctoral student Eemeli Annala from Helsinki, as well as their colleagues Tyler Gorda from the University of Virginia, Aleksi Kurkela from CERN, and Joonas Nättilä from Columbia University. According to the study, matter residing inside the cores of the most massive stable neutron stars bears a much closer resemblance to quark matter than to ordinary nuclear matter. The calculations indicate that in these stars the diameter of the core identified as quark matter can exceed half of that of the entire neutron star. However, Vuorinen points out that there are still many uncertainties associated with the exact structure of neutron stars. What does it mean to claim that quark matter has almost certainly been discovered? “There is still a small but nonzero chance that all neutron stars are composed of nuclear matter alone. What we have been able to do, however, is quantify what this scenario would require. In short, the behaviour of dense nuclear matter would then need to be truly peculiar. For instance, the speed of sound would need to reach almost that of light,” Vuorinen explains. ## Radius determination from gravitational wave observations A key factor contributing to the new findings was the emergence of two recent results in observational astrophysics: the measurement of gravitational waves from a neutron star merger and the detection of very massive neutron stars, with masses close to two solar masses. In the autumn of 2017, the LIGO and Virgo observatories detected, for the first time, gravitational waves generated by two merging neutron stars. This observation set a rigorous upper limit for a quantity called tidal deformability, which measures the susceptibility of an orbiting star’s structure to the gravitational field of its companion. This result was subsequently used to derive an upper limit for the radii of the colliding neutron stars, which turned out to be roughly 13 km. 
Similarly, while the first observation of a neutron star dates back all the way to 1967, accurate mass measurements of these stars have only been possible for the past 20 years or so. Most stars with accurately known masses fall inside a window of between 1 and 1.7 stellar masses, but the past decade has witnessed the detection of three stars either reaching or possibly even slightly exceeding the two-solar-mass limit. ## Further observations expected Somewhat counterintuitively, information about neutron star radii and masses has already considerably reduced the uncertainties associated with the thermodynamic properties of neutron star matter. This has also enabled completing the analysis presented by the Finnish research group in their Nature Physics article. In the new analysis, the astrophysical observations were combined with state-of-the-art theoretical results from particle and nuclear physics. This enabled deriving an accurate prediction for what is known as the equation of state of neutron star matter, which refers to the relation between its pressure and energy density. An integral component in this process was a well-known result from general relativity, which relates the equation of state to a relation between the possible values of neutron star radii and masses. Since the autumn of 2017, a number of new neutron star mergers have been observed, and LIGO and Virgo have quickly become an integral part of neutron star research. It is precisely this rapid accumulation of new observational information that plays a key role in improving the accuracy of the new findings of the Finnish research group, and in confirming the existence of quark matter inside neutron stars. With further observations expected in the near future, the uncertainties associated with the new results will also automatically decrease. 
“There is reason to believe that the golden age of gravitational wave astrophysics is just beginning, and that we will shortly witness many more leaps like this in our understanding of nature,” Vuorinen rejoices.

Reference: Eemeli Annala, Tyler Gorda, Aleksi Kurkela, Joonas Nättilä and Aleksi Vuorinen, "Evidence for quark-matter cores in massive neutron stars", Nature Physics, June 1, 2020. DOI: 10.1038/s41567-020-0914-9. https://www.nature.com/articles/s41567-020-0914-9

### Further information

Aleksi Vuorinen, Associate Professor, University of Helsinki
+358 50 338 6725
[email protected]
http://physics.stackexchange.com/questions/131988/question-about-the-exclusion-principle
# Question about the exclusion principle

I understand the Pauli exclusion principle like this:

1. For two electrons to occupy the same state their spins must be opposite.
2. If the two electrons are in different states (different spatial wave-functions) their spins are allowed to be parallel if this is otherwise energetically favorable.

The question is this: how different must their wave-functions be for them to allow parallel spins? Or, posing the question differently: if we start to "deform" one of the wave-functions so that it becomes increasingly similar to the other wave-function, at which point will the spins be forced to become anti-parallel? At the point where the wave-functions become identical? I hope this makes sense. Anyway, thanks in advance!

- This is a discrete symmetry, so "continuous parts of the wavefunction" are identical and are not deformed into one another; it is a discrete symmetry under permutations of particles. – Nikos M. Aug 21 at 23:41
- The whole state of the $2$ electrons has to be antisymmetric. If the spin part is symmetric, then the spatial part has to be antisymmetric, that is $\psi(x_1,x_2) \sim \psi_1(x_1) \psi_2(x_2) - \psi_2(x_1) \psi_1(x_2)$. –  Trimok Aug 22 at 10:20

The wave function isn't something that can be "deformed" in the way that you are thinking. The possible states of an electron in the vicinity of a proton can be found by solving Schrödinger's equation. This gives a discrete set of bound-state solutions (energy < 0), labelled by the quantum numbers n and l (and also s, j, etc., once various spin effects have been included that split all the degeneracies in the naive solution). There is also a continuum of positive-energy solutions, which correspond to an unbound electron scattered by a nucleus. An actual state of an electron is some superposition of these different energy eigenstates.
So, the meaning of a "deformation" of the wave function is actually that the electron is in a superposition of two different states (say two bound states, or a bound state and a free state). The states of the two electrons in this (presumably helium or a hydrogen ion) atom must be entangled. So, if one electron is in a superposition of two states (i.e. with a "deformed" wave function), its state is something like $$\alpha \left| n_1, l_1, s_1 \right> + \beta \left| n_2, l_2, s_2 \right>$$ where $\alpha \gg \beta$, so that the electron is predominantly in the first state. This is an incomplete description of the state of the atom, however. Accounting for the other electron, the state is something like $$\alpha \left| n_1, l_1, s_1 \right> \otimes \left| \text{other electron state 1} \right> + \beta \left| n_2, l_2, s_2 \right> \otimes \left| \text{other electron state 2} \right>$$ Here's the thing to remember: Pauli's exclusion principle applies separately to each entangled term in this superposition. The situation pertinent to your question is where the other electron, in the first state, shares the same orbital as the first electron. By the exclusion principle, it must have opposite spin: $$\alpha \left| n_1, l_1, \text{up} \right> \otimes \left| n_1, l_1, \text{down} \right> + \beta \left| n_2, l_2, s_2 \right> \otimes \left| \text{other electron state 2} \right>$$ Now, Pauli's exclusion principle would say that if "other electron state 2" is in the orbital n2, l2, then its spin must be the opposite of s2. I don't think this is the scenario you have in mind. I believe you are imagining that the "other electron state 2" is the same as "other electron state 1", i.e. that only the first electron's wave function is in a superposition (i.e. deformed). 
So, the state of the system is $$\alpha \left| n_1, l_1, \text{up} \right> \otimes \left| n_1, l_1, \text{down} \right> + \beta \left| n_2, l_2, s_2 \right> \otimes \left| n_1, l_1, \text{down} \right>$$ What does the exclusion principle say in this case? Nothing extra. By definition n1/l1 and n2/l2 are different orbitals, so the exclusion principle does not apply to the second component in the superposition. s2 could be spin up or spin down. There is no "threshold" at which the spins are forced to be parallel. There is only the probability $$\frac{\alpha^2}{\alpha^2 + \beta^2}$$ that, when measured, the system is in the first state and the spins are opposite.

- Think about the two fermions as two traveling wave-packets, more or less overlapping. At which point (where the packets are very similar and very "overlapping") are the spins forced to become anti-parallel? –  Carl Nilsson Aug 21 at 19:40
- If the fermions are charged, then they repel/attract and the above applies. If they are electrically neutral (e.g. neutrons) there should still be interactions (strong or weak). I don't think these can be ignored in reality. In theory, what about two non-interacting fermions, travelling as plane waves? Plane-wave states represent a continuum, making them tricky. Still, exclusion only applies between exactly equal momentum vectors. The two-particle wavefunction must satisfy f(k1,k2) = 0 for k1 = k2. There are no real requirements regarding k1 approaching k2, except continuity. –  jwimberley Aug 21 at 22:02

You might want to look into the bonding and antibonding states of the hydrogen molecule (scroll down to page 14 here, just my first hit on a Google search: http://www4.ncsu.edu/~franzen/public_html/CH431/lecture/lec_10.pdf). The antibonding state is the one with the spins parallel. You can see the energy difference as you bring the two atoms together. At zero temperature the electrons are always going to want to go into the lower-energy bonding state.
(Even if the two atoms are separated by, say, 10 times the atomic radius.) But at some finite temperature you'll get the atoms sometimes in the antibonding state. Does that help?

- Thanks George, but I think not, after having a look. Let me rephrase the question: two fermions (let's also imagine they are electrically neutral) are emitted practically simultaneously from an apparatus with virtually the same velocity, direction, etc., so that their wave-packet "descriptions" are almost the same. How similar can the wave-packets be while still allowing parallel spins? At which "point" of similarity are the spins forced to become anti-parallel? Does this clarify? –  Carl Nilsson Aug 21 at 20:13
- Not at all. To be in a localized state the particles need to be somehow localized, wrapped up in an atom or quantum well. In free space you can use plane-wave states... they go on forever. –  George Herold Aug 22 at 1:44
- George, I think traveling wave-packets can be fairly localized? At least for some time. Sharply localized wave-packets will disperse quickly, though. But this will be the same for both particles and should not matter. Actually, for this discussion it should not matter whether the particles are in plane-wave states or in localized packets(?) According to jwimberley's comment above (if correct), in plane-wave states the fermions can have parallel spins unless the wave-vectors are exactly the same. From this I guess the fermions can have parallel spins even if their states are "nearly" the same. –  Carl Nilsson Aug 22 at 13:54
- @CarlNilsson, OK I like the hydrogen molecule picture for thinking about this. I think the short answer is wavefunction overlap. If the two electron wavefunctions overlap there will be some energy associated with the parallel-spin case. You could also do the case of two 1-D square well potentials with a barrier in between and look at what happens as the barrier is made lower (or thinner).
–  George Herold Aug 22 at 17:24

The exclusion principle is only a feature of fermion statistics, not something that can be dynamically forced on a system. Fermion statistics is mathematically taken into account by considering antisymmetric wavefunctions for fermions. The Hilbert space of fermion particles is the antisymmetric Fock space. Let $\mathscr{H}_1$ be the one-particle space and $\mathscr{H}_0=\mathbb{C}$ the vacuum. Then define $$\mathscr{H}_n=\underbrace{\mathscr{H}_1\otimes_a\dotsc\otimes_a \mathscr{H}_1}_n\; ;$$ where $\otimes_a$ is the antisymmetric tensor product. The (antisymmetric) Fock space $\Gamma_a(\mathscr{H}_1)$ over $\mathscr{H}_1$ is defined as $$\Gamma_a(\mathscr{H}_1)=\bigoplus_{n=0}^\infty \mathscr{H}_n\; ,$$ where $\oplus$ stands for the direct sum of Hilbert spaces. $\mathscr{H}_n$ is the $n$-particle subspace. As an example, here is the formula for the tensor product of two vectors $\psi$ and $\phi$: $$\psi\otimes_a \phi = (\psi\otimes\phi-\phi\otimes\psi)/2\; .$$ It is then clear that an antisymmetric tensor product of $n$ nonzero vectors $\{\psi_i\}_{i=1}^n$ is zero if and only if $\psi_i=\alpha\psi_j$ for some $i\neq j$ in $\{1,\dotsc,n\}$ and $\mathbb{C}\ni\alpha\neq 0$. The condition $\psi_i=\alpha\psi_j$ is the mathematical equivalent of saying that there are two particles in the same state, and yields zero for the total fermionic wavefunction. Thus, we can say (naïvely) that as long as the state of any fermion particle is different from the others, the wavefunction is not zero.

But the quantum dynamics is implemented by a unitary operator on the Hilbert space, in this case $\Gamma_a(\mathscr{H}_1)$. Let's call this group $(U(t))_{t\in\mathbb{R}}$. Unitary operators preserve the Hilbert space norm, so given any nonzero vector $\Gamma_a(\mathscr{H}_1)\ni \Phi\neq 0$, then $U(t)\Phi\neq 0$ for all $t\in\mathbb{R}$.
Therefore there will be no physical way of "deforming" the wavefunction to become zero, or of "forcing" the spins to move in order to obey the exclusion principle. It is simply a feature of the antisymmetric Fock space, and any meaningful (non-zero) vector will remain non-zero as the dynamics evolves it.

- Thanks! As I understand your answer: as long as there is an infinitesimal difference between the two states the spins can remain parallel? –  Carl Nilsson Aug 21 at 23:31
- @CarlNilsson It depends what you mean by "infinitesimal difference". Think of finite-dimensional vector spaces (for simplicity): let $(v_i)_{i\in\mathbb{N}}$ be a sequence of vectors converging to $v$ as $i\to\infty$. You may have, for all $i,j\in\mathbb{N}$, $v_i/\lVert v_i\rVert=v_j/\lVert v_j\rVert=v/\lVert v\rVert$; or $v_i/\lVert v_i\rVert\neq v_j/\lVert v_j\rVert\neq v/\lVert v\rVert$ for all $i,j\in\mathbb{N}$. But in the first case the wavefunction $v_i\otimes_a v=0$ for all $i\in\mathbb{N}$; in the second $v_i\otimes_a v\neq 0$ for all $i\in\mathbb{N}$. –  yuggib Aug 22 at 8:44
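The "zero if and only if the two states are proportional" property of the antisymmetric product described in this answer is easy to verify numerically; a sketch with NumPy, where the dimension and the particular vectors are arbitrary choices:

```python
import numpy as np

def antisym(psi, phi):
    """Antisymmetric tensor product (psi ⊗ phi - phi ⊗ psi) / 2."""
    return (np.outer(psi, phi) - np.outer(phi, psi)) / 2

psi = np.array([1.0, 2.0, 0.0])
phi = np.array([0.0, 1.0, 1.0])

# Distinct (non-proportional) states: the two-fermion wavefunction survives.
assert np.linalg.norm(antisym(psi, phi)) > 0

# Proportional states ("the same state"): the wavefunction vanishes
# identically -- the exclusion principle at the level of the state space.
assert np.allclose(antisym(psi, 3.0 * psi), 0.0)

# Antisymmetry under particle exchange.
assert np.allclose(antisym(psi, phi), -antisym(phi, psi))
```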
https://eap.bl.uk/collection/EAP329-1?f%5B0%5D=sm_languages%3AArabic
# Teungku Mukhlis Collection [17th - 20th century]

Digital images of 118 manuscripts owned by Teungku Mukhlis of Calue, Pidie Regency. The manuscripts cover a range of subjects including Islamic law, Sufism, theology and fiction, in prose and poetic form; they range in date from the 17th to the 20th century. Many of the manuscripts suffer from humidity and insect damage, and most of them are incomplete, missing covers and pages.

Showing 1 to 15 of 179 results

• ### Teungku Ainal Mardhiah Collection [17th century-20th century] Collection Ref: EAP329/10 The manuscripts contain Islamic law, Sufism, theology, history and fictive stories. All of the manuscripts have been handwritten. The manuscripts are in bad condition. Some of them have complete text and some others have not because their first and las ...
• ### Arabic grammar [19th century] File Ref: EAP329/1/1 The paper used in the manuscript is European paper without watermark. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manuscript consists ...
• ### Text on Qur'anic exegesis; al-Khafi surah [19th century] File Ref: EAP329/1/2 The paper used in the manuscript is European paper without watermark. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black ink. The manuscript consists 1 text, ...
• ### Text concerning Islamic law [19th century] File Ref: EAP329/1/3 The paper used in the manuscript is European paper with chain and laid lines. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manuscript ...
• ### Sufic texts [18th century] File Ref: EAP329/1/4 The paper used in the manuscript is European paper with watermark of three crescents with moonface. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. It is mentioned in the colophon that the author ...
• ### Hud Huda [19th century] File Ref: EAP329/1/5 The paper used in the manuscript is European paper without watermark. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manuscript consists ...
• ### Bidayat al-Mubtadi [18th century] File Ref: EAP329/1/6 The paper used in the manuscript is European paper with watermark of three crescents. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manusc ...
• ### Kasyf al-Kiram [18th century] File Ref: EAP329/1/8 The paper used in the manuscript is European paper with watermark of eagle. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manuscript consists ...
• ### Text concerning Islamic law [18th century] File Ref: EAP329/1/9 The paper used in the manuscript is European paper with watermark of eagle. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manuscript con ...
• ### Arabic grammar [18th century] File Ref: EAP329/1/10 The paper used in the manuscript is European paper with watermark of three crescents, and chain and laid lines. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using ...
• ### Mau'izah and Adab al-Muta'allim [18th century] File Ref: EAP329/1/11 The paper used in the manuscript is European paper with watermark of picador. There is no page number, but it has catchword under the text. There is a cover attached to the front of the manuscript. The text was written in prose using black and red ink. The ...
• ### Interpretation on the prophet tradition [18th century] File Ref: EAP329/1/12 The paper used in the manuscript is European paper with chain and laid lines. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manusc ...
• ### Text concerning Islamic law [18th century] File Ref: EAP329/1/13 The paper used in the manuscript is European paper with chain and laid lines. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The manusc ...
• ### Sufic text [18th century] File Ref: EAP329/1/14 The paper used in the manuscript is European paper with watermark of fleur-de-lis. There is no page number, but it has catchword under the text. There is a cover attached to the manuscript. The text was written in poetry form using black ink. The manuscrip ...
• ### Arabic grammar [18th century] File Ref: EAP329/1/15 The paper used in the manuscript is European paper with countermark of ALL INGLESH. There is no page number, but it has catchword under the text. There is no cover attached to the manuscript. The text was written in prose using black and red ink. The first ...
http://www.khanacademy.org/math/differential-calculus/taking-derivatives/derivative_intro
Introduction to derivatives (11 videos, 2 skills)

Discover what magic we can derive when we take a derivative, which is the slope of the tangent line at any point on a curve.

- Derivative as slope of a tangent line (video, 15:43): understanding that the derivative is just the slope of a curve at a point (or the slope of the tangent line)
- Tangent slope as limiting value of secant slope example 1 (video, 5:25)
- Tangent slope as limiting value of secant slope example 2 (video, 6:05)
- Tangent slope as limiting value of secant slope example 3 (video, 3:46)
- Tangent slope is limiting value of secant slope (practice problems)
- Calculating slope of tangent line using derivative definition (video, 8:28): finding the slope (or derivative) of a curve at a particular point
- The derivative of f(x)=x^2 for any x (video, 11:05): finding the derivative of y=x^2
- Formal and alternate form of the derivative (video, 4:53)
- Formal and alternate form of the derivative for ln x (video, 5:46)
- Formal and alternate form of the derivative example 1 (video, 5:17)
- The formal and alternate form of the derivative (practice problems)
- Calculus: Derivatives 1 (video, 9:24): finding the slope of a tangent line to a curve (the derivative); introduction to calculus
- Calculus: Derivatives 2 (video, 9:30): more intuition of what a derivative is; using the derivative to find the slope at any point along f(x)=x^2
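The "tangent slope as limiting value of secant slope" idea running through these videos can be illustrated in a few lines; a sketch for f(x) = x^2 at x = 3, where the tangent slope (the derivative) is 2x = 6:

```python
# Secant slopes of f(x) = x**2 at x = 3 approach the tangent slope
# (the derivative, 2 * 3 = 6) as the step h shrinks.
f = lambda x: x ** 2
a = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    secant = (f(a + h) - f(a)) / h   # algebraically equals 6 + h here
    print(f"h = {h}: secant slope = {secant}")
```

Each halving of h moves the secant slope that much closer to 6, which is exactly the limit the formal definition of the derivative captures.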
https://pub.uni-bielefeld.de/publication/1608418
# Generalized cross-validation for bandwidth selection of backfitting estimates in generalized additive models

Kauermann G, Opsomer JD (2004). JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS 13(1): 66-89. doi:10.1198/1061860043056

No fulltext has been uploaded. References only!

Journal Article | Original Article | Published | English

Abstract

This article presents a modified Newton method for multidimensional bandwidth selection for estimation in generalized additive models. The method is based on the generalized cross-validation criterion applied to backfitting estimates. The approach is in particular applicable to higher-dimensional problems and provides a computationally efficient alternative to full grid search in such cases. The implementation of the proposed method requires the estimation of a number of auxiliary quantities, and simple estimators are suggested. Extensions to semiparametric models and other bandwidth selections are discussed.
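As a rough illustration of the criterion the abstract refers to: for a linear smoother y_hat = S_h y, generalized cross-validation selects the bandwidth h minimizing GCV(h) = n * ||y - S_h y||^2 / (n - tr(S_h))^2. A minimal one-dimensional sketch with a Nadaraya-Watson smoother on simulated data (a simplification; the article's backfitting/GAM setting and Newton minimization are not reproduced here, and the grid search below is exactly what the paper offers an alternative to):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def smoother_matrix(x, h):
    """Nadaraya-Watson smoother matrix with a Gaussian kernel."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def gcv(h):
    """GCV(h) = n * RSS / (n - tr(S_h))^2 for the linear smoother S_h."""
    S = smoother_matrix(x, h)
    resid = y - S @ y
    return n * (resid @ resid) / (n - np.trace(S)) ** 2

bandwidths = np.linspace(0.01, 0.3, 50)
h_best = min(bandwidths, key=gcv)
print(f"GCV-selected bandwidth: {h_best:.3f}")
```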
https://betterlesson.com/lesson/446759/dividing-signed-decimals
# Dividing Signed Decimals

## Objective

SWBAT divide signed decimals using a procedure.

#### Big Idea

Students verify that the signs of decimal quotients follow the same pattern as integer quotients. Students practice dividing signed decimals, including in the context of rates and equations.

## Introduction

10 minutes

I open with the essential question: Do the signs of decimal quotients follow the same rules as integer quotients? I'll then ask students to take a minute to fill in the sign of the quotient for a dividend and divisor with the same sign and with different signs.

Next, I want to remind students that the rules for signs are not arbitrary or magical; they are based on mathematical properties. I say remind because this work was done with integer quotients. Students take two multiplication facts and write related division facts. Of course, we have already explored the signs of products. The resulting division facts confirm that division of decimals is just like the division of integers in terms of the resulting sign. This exercise speaks to MP3, as students take some of the math they already know and relate it to a new task.

Now that we have confirmed the signs of quotients, I want to go to a more fundamental check. The remainder of the lesson focuses on fluency with decimal long division. Therefore, I present students with 8 various division problems and ask them to set up the long division problem without actually solving for the quotient. I want to make sure students know how to handle decimals in both the divisor and the dividend.

## Guided Practice

15 minutes

Students will now solve 6 problems with my guidance and the help of their partners. Some students may be more successful with long division if they are given graph or grid paper. In this case, they must be instructed to write only 1 digit per square. This grid paper makes it much easier for students to correctly place and align values.
So many of the mistakes in long division come simply from misaligned work!

Throughout this section (and in the later sections) I have tried to include problems that are not overly tedious. I believe only 1 of the problems requires students to compute values beyond the thousandths place. There is also one answer that has a repeating decimal value.

## Independent Practice and Extension

20 minutes

Now students work alone. The first 6 problems mirror the guided practice problems. The last 4 problems are rate problems. Students are asked to find the unit rate. Two of these four problems use very friendly numbers just to remind students that unit rates can be found through division. I include these problems as a subtle way to review and perhaps extend some of their unit rate work from a previous grade, while getting them prepared for the next unit on ratios and proportional reasoning.

The extension consists of several one-step rational number multiplication equations. Consequently, students now get to practice solving equations using division. This too is a way to prepare the students for a later unit on expressions and equations while practicing rational number division.

Note: I have not given a lot of room to show work in the resource. I may have my students do the work on whiteboards or notebook paper.

## Exit Ticket

5 minutes

The exit ticket has 4 problems that (again) are similar to problems students have just finished completing. I may consider giving two points per problem: 1 for the correct sign of the quotient; 1 for the correct value. This way I assess that students understand that when dividing values with the same sign the quotient is positive, otherwise the quotient is negative. Yet I also assess their ability to do long division calculations.
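The sign pattern the lesson has students verify, same signs give a positive quotient and different signs a negative one, can be checked mechanically; a quick sketch (the decimal values are made up for illustration):

```python
from itertools import product

# Check every sign combination of a signed-decimal division.
for a, b in product([3.6, -3.6], [-1.2, 1.2]):
    q = a / b
    # The quotient is positive exactly when dividend and divisor share a sign.
    same_sign = (a > 0) == (b > 0)
    assert (q > 0) == same_sign
    print(f"{a} / {b} = {q}")
```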
https://quant.stackexchange.com/questions/22092/estimate-beta-of-capm-from-implied-volatility
# Estimate Beta of CAPM from Implied Volatility?

In the CAPM theory the beta of asset $i$ is estimated in this way: $\beta_i = \frac{\sigma_{im}}{\sigma^2_m}$ where $\sigma_{im} = \rho_{im} \sigma_i \sigma_m$.

But all these data are historical data. So, I'm wondering what happens if I use:

• $\sigma_m$ <- implied volatility of the S&P 500 (VIX)
• $\sigma_i$ <- implied volatility for asset $i$, using the at-the-money call option with a 1-month maturity
• $\rho_{im}$ <- statistically estimated from historical data

Would this give a better estimate of $\beta_{i}$ for the next month?

• There's a number of papers on using option-implied betas to explain stock returns. – John Dec 3 '15 at 23:08
• There are methods of calculating option-implied correlation as well for certain equities. See here: cboe.com/micro/impliedcorrelation/… – Kevin Pei Dec 3 '15 at 23:24
• @sparkle, it would be very good to get your feedback on the answer below. – phdstudent Feb 2 '16 at 15:13
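Concretely, substituting the implied volatilities into $\beta_i = \rho_{im}\sigma_i\sigma_m/\sigma_m^2 = \rho_{im}\,\sigma_i/\sigma_m$ gives a one-line estimator; a sketch where all numbers are invented placeholders, not market data:

```python
# Option-implied beta sketch: beta_i = rho_im * sigma_i / sigma_m,
# with implied vols in place of historical ones. Numbers are placeholders.
sigma_m = 0.16   # e.g. VIX / 100 as annualized implied vol of the S&P 500
sigma_i = 0.28   # ATM 1-month implied vol of asset i
rho_im = 0.6     # correlation, still estimated from historical returns

beta_i = rho_im * sigma_i / sigma_m
print(f"implied beta: {beta_i:.3f}")
```

Note that the correlation remains a historical estimate here, so the estimator is only partly forward-looking; replacing it with an option-implied correlation (as in the CBOE index mentioned in the comments) would make it fully implied.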
https://www.physicsforums.com/threads/a-change-of-basis-problem.243694/
# A Change of Basis Problem 1. Jul 6, 2008 ### e(ho0n3 The problem statement, all variables and given/known data In $\mathcal{P}_3$ with basis $B = \langle 1 + x, 1 - x, x^2 + x^3, x^2 - x^3 \rangle$ we have this representation. $$\text{Rep}_B(1 - x + 3x^2 - x^3) = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 2 \end{pmatrix}_B$$ Find a basis $D$ giving this different representation for the same polynomial. $$\text{Rep}_D(1 - x + 3x^2 - x^3) = \begin{pmatrix} 1 \\ 0 \\ 2 \\ 0 \end{pmatrix}_D$$ The attempt at a solution I've noticed that $$1 - x + 3x^2 - x^3 = 1 - x + x^2 + x^3 + 2(x^2 - x^3)$$ so the first and third components of $D$ could be $1 - x + x^2 + x^3$ and $x^2 - x^3$ respectively. I can guess a possible second and fourth component and then check $D$ to determine if it is a basis. Is there an easier way of accomplishing this?
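Since the check in the attempt is purely mechanical, it can be scripted. Below is a pure-Python verification, representing each polynomial in $\mathcal{P}_3$ by its coefficient list; the second and fourth vectors of $D$ are an arbitrary completion (any choice that keeps $D$ linearly independent works).

```python
# Polynomials c0 + c1*x + c2*x^2 + c3*x^3 as coefficient lists [c0, c1, c2, c3].
def combine(vectors, coords):
    """Linear combination sum_k coords[k] * vectors[k], coefficient-wise."""
    return [sum(c * v[i] for c, v in zip(coords, vectors)) for i in range(4)]

B = [[1, 1, 0, 0],    # 1 + x
     [1, -1, 0, 0],   # 1 - x
     [0, 0, 1, 1],    # x^2 + x^3
     [0, 0, 1, -1]]   # x^2 - x^3

target = [1, -1, 3, -1]  # 1 - x + 3x^2 - x^3
assert combine(B, [0, 1, 1, 2]) == target  # the given Rep_B checks out

# Candidate D: the two vectors from the attempt in slots 1 and 3,
# padded with 1 and x (an arbitrary choice) in slots 2 and 4.
D = [[1, -1, 1, 1],   # 1 - x + x^2 + x^3
     [1, 0, 0, 0],    # 1
     [0, 0, 1, -1],   # x^2 - x^3
     [0, 1, 0, 0]]    # x

assert combine(D, [1, 0, 2, 0]) == target  # Rep_D as required
print("both representations verified")
```

One still has to confirm that $D$ is linearly independent (a short hand check or a determinant does it), which is the step the poster asked about shortcutting.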
http://math.stackexchange.com/questions/52856/is-noetherian-condition-always-needed-when-speaking-of-a-coherent-sheaf
# Is Noetherian condition always needed when speaking of a coherent sheaf? To be precise, I want to strengthen the second part of Proposition 5.4, Chapter 2 in Hartshorne GTM 52 as follows: Let $X$ be a scheme; then an $\mathcal{O}_X$-module $\mathscr{F}$ is coherent if and only if for every open affine subset $U=\operatorname{Spec} A$ of $X$, there is an $A$-module $M$ such that $\mathscr{F}\mid_U\cong\widetilde{M}$, with $M$ a finitely generated $A$-module. The definition of coherent sheaf (Hartshorne p. 111) only claims the existence of a cover of $X$ satisfying this property (i.e. the "if" part comes for free). If the Noetherian condition can be dropped in this proposition, it can also be dropped in Corollary 5.5, Proposition 5.5(b), 5.11(c), etc. - I quote Hartshorne (p. 111): "Although we have just defined the notion of quasi-coherent and coherent sheaves on an arbitrary scheme, we will normally not mention coherent sheaves unless the scheme is noetherian. This is because the notion of coherence is not at all well-behaved on a nonnoetherian scheme." I think that answers the question in your title. As for your actual question, I'm afraid I don't have a clue. – Zhen Lin Jul 21 '11 at 10:37 Georges points out that the equivalence of categories stated for coherent $\mathscr O_X$-modules/f.g. $A$-modules in 5.5 is false without the noeth. assumption on $A$. But moreover your claim that it can be dropped in the second statement of (5.4) is probably wrong, since the proof makes a detour through noetherian modules, which requires being f.g. over a noetherian ring. The same is true of (5.11c); I sketch a proof in my question math.stackexchange.com/questions/1717367/….
– Owen Barrett Mar 28 at 23:53 Given a scheme $(X,\mathcal O_X)$ and a sheaf $\mathcal F$ of $\mathcal O_X$-Modules, the following are equivalent: a) There exists a covering $\mathcal U=(U_i)$ of $X$ by open subsets $U_i\subset X$ and $\mathcal O_{U_i}$-isomorphisms $\mathcal F|U_i \simeq \tilde M_i$ for some family of $\mathcal O(U_i)$-modules $M_i$. b) For every affine open subset $U\subset X$ there exists an $\mathcal O(U)$-module $M$ (namely $M=\mathcal F (U)$) and an $\mathcal O_{U}$-isomorphism $\mathcal F|U \simeq \tilde M.$ This equivalence is a theorem, proved for example in Mumford's Red Book, at the very beginning of Chapter III, in §1 (along with other equivalent characterizations). This has nothing to do with noetherian hypotheses. And now on to coherent sheaves. Recall that a sheaf $\mathcal F$ of $\mathcal O_X$-Modules is said to be finitely generated if for every $x\in X$ there exists an open neighbourhood $U$ of $x$ and a surjective sheaf homomorphism $\mathcal O_{U}^r \to \mathcal F|U \to 0$ for some integer $r$. The sheaf $\mathcal F$ is then said to be coherent if it is finitely generated and if for every open subset $V\subset X$ and every (not necessarily surjective!) morphism $\mathcal O_{V}^N \to \mathcal F|V$, the kernel is also finitely generated. Again, no noetherian hypothesis in sight. End of story? Not at all! The problem is that coherence is very difficult to check in general, and actually for some schemes, even affine ones, the structure sheaf $\mathcal O_X$ is not coherent; in that sad case the concept of coherence is essentially worthless. In particular, and this is one of your questions, the equivalence of categories mentioned in Corollary (5.5) is FALSE without the noetherian hypothesis. However, all troubles evaporate if you assume that $X$ is locally noetherian.
You then have the wonderful equivalence (implying of course that the structure sheaf $\mathcal O_X$ is coherent) $$\mathcal F \;\text {coherent} \stackrel {X \text {loc.noeth.}}{\iff} \mathcal F \; \text {finitely generated and quasi-coherent }$$ Edit: I have tried to evade the issue, but since Li explicitly asks: Yes, Hartshorne's definition is incorrect. Here is what I mean. The notion of coherent sheaf was introduced by Henri Cartan in the theory of holomorphic functions of several variables around 1944. In 1946 Oka proved that $\mathcal O_{\mathbb C^n}$ is coherent, and this is a very difficult theorem, not following at all from Cartan's definition, the one I reproduced above. In 1955, as is well known, Serre introduced coherent sheaves into Algebraic Geometry in his famous article Faisceaux Algébriques Cohérents and used the exact same definition as Cartan, as acknowledged in his Introduction. Coherent sheaves were then defined in EGA for schemes and ringed spaces, always with Cartan's definition above. Ditto for the generalized analytic spaces (with nilpotents) introduced by Grauert (influenced by Grothendieck) around 1960. And that definition is also the one used in de Jong and collaborators' recent monumental online Stacks Project. So the definition I reproduced above is the one adopted by the founders and in the foundational documents. To change it would be, in my opinion, very misleading and might for example induce one to believe that very profound theorems are trivial. Or worse, induce mistakes by inappropriately applying results from texts using the standard definition of "coherent sheaf". Incidentally, Mumford very elegantly solves the definition problem: he only defines "coherent" in the noetherian case, since he only uses the notion in that case!
- I guess your definition of coherent sheaf is different from Hartshorne's: X is a scheme; a sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ is coherent if X can be covered by open affines $U_i=\operatorname{Spec} A_i$ such that for each $i$ there is a finitely generated $A_i$-module $M_i$ with $\mathcal{F}\mid U_i \cong \tilde{M_i}$. I am pretty sure that at the present time, the Noetherian condition can be dropped if using Hartshorne's definition. Vakil wrote in his notes: it is common in the later literature to incorrectly define coherent as finitely generated. So is Hartshorne's definition the *incorrect* one? – Li Zhan Jul 22 '11 at 12:40 Dear Li, to put it bluntly: Yes, Hartshorne's definition is incorrect. I have written an edit. – Georges Elencwajg Jul 23 '11 at 13:39 Thank you for the comprehensive explanation! I appreciate it! – Li Zhan Jul 24 '11 at 3:09 This is not really an answer as much as a reference. Ravi Vakil's notes treat the notions of coherence, finite presentation, and finite generation in more general cases than just the Noetherian case. The important fact (as mentioned in Georges' post above) is that all these conditions are equivalent on an affine Noetherian neighborhood. Here is a link to the notes: http://math.stanford.edu/~vakil/216blog/FOAGjun2711publicimperfect.pdf Take a look at chapter 14. - +1 for the link, Lalit. I keep forgetting that there are users who don't know these notes yet: they are in for a very, very nice surprise! – Georges Elencwajg Jul 21 '11 at 18:18
http://tex.stackexchange.com/questions/49072/is-changing-lineskiplimit-to-some-negative-value-a-good-idea-and-what-the-valu
# Is changing \lineskiplimit to some negative value a good idea, and what might the value be?

I'm writing a (large) document using Linux Libertine for text and Asana for math. I have an 11pt font with `\baselineskip=14pt`. However, it happens quite often that the default settings of `\lineskip=1pt` and `\lineskiplimit=0pt` cause lines (with some math and sub/superscripts, of course) to be further apart than usual (and I don't like it, especially since I want to have grid typesetting, which I achieve in my set of macros by carefully redefining `\section`s etc.). My question is: assuming that I carefully proofread the whole thing (which I do), is it possible that I break something else somewhere else by setting `\lineskiplimit` to some negative value? (I mean some non-trivial interactions between various parts of LaTeX.) And if you consider this a good idea, what value would you recommend? I know this question is a bit vague, so if you have an idea to make it more TeX.SE-conforming ;), feel free to edit it/suggest something in the comments.

- Good question! While I can't comment on LaTeX (hence a comment and not an answer), I have played around with those in plain-tex. In my experience the situations where, uh, “interesting” things start to happen, are places where there are `\vcenter`s in use under the hood, for example with `\cases`, `\eqalign`, etc. And then with places which use `\openup` (i.e., increase `\lineskip`, `\baselineskip`, but most importantly, `\lineskiplimit`!). Also, it becomes fuzzy (to me, at least) how does TeX choose between `\(base)lineskip` and `\normal(base)lineskip`s when `\lineskiplimit` is negative. –  morbusg Mar 23 '12 at 7:59 I meant between `\lineskip` and `\baselineskip` but I couldn't edit the comment any longer (they've changed that, haven't they?) –  morbusg Mar 23 '12 at 8:06

`\lineskiplimit=-\maxdimen`

This makes TeX think that no lines are too close. Therefore the line spacing defined by `\baselineskip` will be preserved under all circumstances.
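A minimal preamble sketch of this approach (illustrative only; the grid macros and font setup from the question are omitted). Setting both the live register and its `\normal...` companion is the safer route, since anything that calls `\normalbaselines` (plain-TeX-style resets) copies `\normallineskiplimit` back into `\lineskiplimit`:

```latex
\documentclass[11pt]{article}

% Keep every baseline exactly \baselineskip apart, even when tall
% sub/superscripts would normally push lines apart.
\normallineskiplimit=-\maxdimen
\AtBeginDocument{\lineskiplimit=-\maxdimen}

\begin{document}
These lines stay exactly \verb|\baselineskip| apart, even with tall
material such as $x^{2^{2^{2}}}$ in the running text.
\end{document}
```

As the comments warn, constructs built on `\vcenter` or `\openup` (e.g. `\cases`, `\eqalign`) may still behave unexpectedly, so careful proofreading remains necessary.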
http://mathhelpforum.com/geometry/42315-calculating-surface-area.html
# Math Help - Calculating surface area 1. ## Calculating surface area An ingot 80x10x300mm long is cast into a cylinder 120mm diameter. Calculate its length and total surface area. an open top cylinder diameter 84mm x 150mm. Any help would be great Cheers Luke 2. An ingot 80x10x300mm long is cast into a cylinder 120mm diameter. Calculate its length and total surface area. The first step is to calculate the volume of the ingot. This will be the same as the volume of the cylinder. You will need to know that for a cylinder, $V=\pi r^2 h$. I assume the other information in your post is related to some other question; I can't make head nor tail of it.
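Carrying the hint through numerically (a sketch for the first part only; the 84mm x 150mm open-top cylinder reads like a separate question):

```python
import math

# Ingot volume is conserved when the metal is recast.
ingot_volume = 80 * 10 * 300            # 240,000 mm^3
r = 120 / 2                             # cylinder radius, mm

# V = pi * r^2 * h  =>  h = V / (pi * r^2)
length = ingot_volume / (math.pi * r**2)               # ≈ 21.22 mm

# Total surface area of the closed cylinder: two circular ends + curved side.
area = 2 * math.pi * r**2 + 2 * math.pi * r * length   # ≈ 30,619 mm^2

print(round(length, 2), round(area))
```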
https://courses.energyexcursions.com/courses/energize-the-future/lessons/what-is-energy/topic/how-do-we-define-and-measure-energy/
Energy Excursions # How Do We Define and Measure Energy? ## What is energy and why is it important? By definition, energy is the capacity to do work. Energy, derived from various natural resources, can be converted into heat that generates power to heat our homes and turn the lights on. Without energy our quality of life would not be how we know it today. Energy consumption is dramatically increasing around the world every day as more people desire that same quality of life. Think about common household appliances such as a refrigerator or television. Also, think about your daily commute to school, whether in a car or school bus. Without energy, none of that would be possible! ## Measuring Energy We use several different units of energy measurement for energy supply and consumption, such as: • British Thermal Unit (BTU) • Barrel of Oil Equivalent (BOE) • Joule (J) • kilowatt-hour (kWh) ### British Thermal Unit (BTU) and oilfield units Natural resources such as oil and gas are a chemical energy source. In addition to their portability, these resources are valued for their dense energy or heat content, primarily measured in British Thermal Units (BTUs), often referred to as oilfield units. A single (1) BTU will heat 1 pound of water by 1℉. In oilfield units, we commonly use MBTU to represent 1,000 BTU and MMBTU to represent 1,000,000 BTU. When referring to measurements of "energy consumption" we often use "quads." For example, the United States consumes 90 quads of energy annually, and one quad of energy is equivalent to 1 quadrillion BTU ($10^{15}$ BTU). ### Barrel of Oil Equivalent (BOE) Another unit of measurement is barrel of oil equivalent (BOE). This unit allows comparison of various primary energy sources by relating them to oil, which is measured in barrels (= 42 gallons). For example, the energy produced or consumed by a country from coal may be reported in BOE, allowing an easy comparison to oil production and consumption in terms of energy content.
The BOE unit is frequently used to report how much oil or gas ("reserves") can be extracted from a field during the exploration and production phase. A single BOE is equivalent to $5.8 \times 10^{6}$ BTU. 1 Barrel of Crude Oil = 42 gallons = 5.8 million BTUs ### Joule (J) and the metric/SI system In the International System of Units (SI), otherwise known as the metric system, we measure energy in joules. By definition, "one joule equals the work done (or energy expended) by a force of one newton (N) acting over a distance of one meter (m)" (Wikipedia, "Joule (unit)," retrieved January 23, 2022, https://en.wikipedia.org/wiki/Joule). Therefore, 1 joule (J) equals 1 N * m. Here are some handy conversions and approximations to the SI system from other units previously presented: 1 BTU equals 1055 J. Therefore, 1 quad equals 1.055 exajoules (1 EJ = $10^{18}$ J). In the spirit of 'rule-of-thumb' engineering analysis, we can recognize that 1 quad is a close approximation to 1 exajoule (EJ) and we can talk about the two as equivalent when we are not worried about precision, for example when talking about energy consumption of countries. ### kilowatt-hour (kWh) and the power industry Energy can be converted into heat to generate power. When measuring energy consumption for power we use kilowatt-hours (kWh) as the unit of measurement: 1 kWh equals 1 kilowatt of power delivered over a period of 1 hour. We can convert kWh into joules and BTUs. To convert kWh into J we use several conversion factors to change the unit of kWh to an equivalent quantity expressed with J. This mathematical approach is termed dimensional analysis and is shown in the equation below: $1\ \text{kWh} \times \frac{1000\ \text{Wh} }{\text{kWh} } \times \frac{1\frac{\text{J} }{\text{s} } }{\text{W} } \times \frac{3600\ \text{s} }{\text{h} } =3{,}600{,}000\ \text{J}$ Therefore 1 kWh equals 3,600,000 J.
In a second step, we can then convert 3,600,000 J into BTU to determine the equivalent BTU per kWh. The conversion factor needed is that a single BTU is equivalent to 1055 J. Using dimensional analysis, we solve the following equation: $3{,} 600{,} 000\ \text{J} \times \frac{1\ \text{Btu} }{1055\ \text{J} } =3{,} 412\ \text{Btu}$ Thus, we now have some additional conversion factors we can apply to other energy conversion problems we may need to solve in the future: $\begin{array}{c}1\ \text{kWh} =3{,} 600{,} 000\ \text{J} \\ \\ 1\ \text{kWh} =3{,} 412\ \text{Btu} \end{array}$ ## The cost of energy Now that we understand measurements of energy, how much does a BTU of energy cost? On average, a single kWh of energy, equivalent to about 3,400 BTU, costs 10 cents ($0.10). In terms of BOE, one barrel (bbl) of oil costs roughly $75.00. If you had one dollar, how many BTUs of energy could you purchase?

- Natural gas: 270,000 BTU per $1.00
- Oil: 77,000 BTU per $1.00
- Gasoline: 40,000 BTU per $1.00
- Electricity: 34,000 BTU per $1.00

Power and energy are not the same thing. Power is the rate at which energy is used or work is done. For example, we burn fossil fuels such as natural gas to generate steam (an example of a kinetic energy source) that turns a turbine and, as a result, powers an electricity generator (U.S. Energy Information Administration, "How electricity is generated," https://www.eia.gov/energyexplained/electricity/how-electricity-is-generated.php). The electricity from the generator is then delivered to consumers' homes through high-voltage transmission lines.
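The chain of conversions above can be reproduced as explicit dimensional analysis in a few lines (the BOE figure uses the 5.8 million BTU per barrel value given earlier):

```python
# Conversion constants from the text.
J_PER_BTU = 1055          # 1 BTU ≈ 1055 J
S_PER_HOUR = 3600
W_PER_KW = 1000
BTU_PER_BOE = 5.8e6       # 1 barrel of oil equivalent

# 1 kWh -> J: kilowatts to watts, then hours to seconds (W·s = J).
joules_per_kwh = 1 * W_PER_KW * S_PER_HOUR     # 3,600,000 J

# J -> BTU
btu_per_kwh = joules_per_kwh / J_PER_BTU       # ≈ 3,412 BTU

# For scale: one barrel of oil equivalent expressed in kWh.
kwh_per_boe = BTU_PER_BOE / btu_per_kwh        # ≈ 1,700 kWh

print(joules_per_kwh, round(btu_per_kwh), round(kwh_per_boe))
```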
https://learn.careers360.com/ncert/question-a-straight-horizontal-conducting-rod-of-length-045-m-and-mass-60-g-is-suspended-by-two-vertical-wires-at-its-ends-a-current-of-50-a-is-set-up-in-the-rod-through-the-wireswhat-will-be-the-total-tension-in-the-wires-if-the-direction-of-current-is-revers/
21.(b) A straight horizontal conducting rod of length 0.45 m and mass 60 g is suspended by two vertical wires at its ends. A current of 5.0 A is set up in the rod through the wires. What will be the total tension in the wires if the direction of current is reversed, keeping the magnetic field the same as before? (Ignore the mass of the wires.) $g = 9.8\ m s ^{-2}$

If the direction of the current is reversed, the magnetic force acts in the same direction as gravity.

Total tension in the wires (T) = gravitational force on the rod + magnetic force on the rod:

$$T=mg+BIl=0.06\times 9.8+0.261\times 5\times 0.45=1.176\ \text{N}$$

The total tension in the wires will be 1.176 N.
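The arithmetic can be checked in a couple of lines. The field value $B \approx 0.261$ T comes from the earlier part of the problem; assuming (as the 0.261 figure suggests) it was chosen there so that the magnetic force exactly balanced gravity, reversing the current simply doubles the weight:

```python
m, g = 0.060, 9.8      # rod mass (kg), gravitational acceleration (m/s^2)
I, l = 5.0, 0.45       # current (A), rod length (m)

# Assumption: in part (a) the field balanced gravity, so B = mg / (I*l).
B = m * g / (I * l)    # ≈ 0.2613 T, consistent with the 0.261 T used above

# Reversed current: the magnetic force now adds to the weight.
T = m * g + B * I * l  # = 2*m*g

print(round(B, 3), round(T, 3))  # 0.261 1.176
```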
https://authorea.com/users/339754/articles/465906-on-boundary-exact-controllability-of-one-dimensional-wave-equations-with-weak-and-strong-interior-degeneration?commit=48a485ee20a4ff6a4b0957f835e4eda10edcb189
ON BOUNDARY EXACT CONTROLLABILITY OF ONE-DIMENSIONAL WAVE EQUATIONS WITH WEAK AND STRONG INTERIOR DEGENERATION

• Günter Leugering (University Erlangen-Nuremberg)
• Peter Kogut (Dnipropetrovsk National University)
• Olga Kupenko (National Technical University "Dnipro Polytechnics")

#### Peer review status: POSTED

26 Jun 2020: Submitted to Mathematical Methods in the Applied Sciences
04 Jul 2020: Assigned to Editor
04 Jul 2020: Submission Checks Completed

## Abstract

In this paper we study exact boundary controllability for a linear wave equation with strong and weak interior degeneration of the coefficients in the principal part of the elliptic operator. The objective is to provide a well-posedness analysis of the corresponding system and derive conditions for its controllability through boundary actions. Passing to a relaxed version of the original problem, we discuss existence and uniqueness of solutions, and using the HUM method we derive conditions on the rate of degeneracy for both exact boundary controllability and the lack thereof.
https://www.physicsforums.com/threads/evaluating-a-surface-integral.360167/
# Homework Help: Evaluating a surface integral 1. Dec 3, 2009 ### epyfathom Find and evaluate numerically $\iint_S (x^{10} + y^{10} + z^{10})\, dS$ where $S$ is the sphere $x^2 + y^2 + z^2 = 4$. It says you're supposed to use Gauss' divergence theorem to convert the surface integral to a volume integral, then integrate the volume integral by converting to spherical coordinates... I can do the second part, but how do I use Gauss' theorem...? My prof was really bad at explaining this. Thanks. 2. Dec 3, 2009 ### HallsofIvy Well, I would think that the first thing you would do is look up "Gauss' theorem" (perhaps better known as the "divergence theorem"). According to Wikipedia, Gauss' theorem says that $$\int\int\int (\nabla\cdot \vec{F})\, dV= \oint\int \vec{F}\cdot\vec{n}\,dS$$ where $\vec{n}$ is the normal vector to the surface at each point. Here, you are not given a vector function but, fortunately, Wikipedia also notes that "Applying the divergence theorem to the product of a scalar function, $f$, and a non-zero constant vector, the following theorem can be proven: $$\int\int\int \nabla f\, dV= \oint\int f\,\vec{n}\, dS$$" So, since you are asked to use Gauss' theorem to evaluate a surface integral, you are intended to find $\nabla f$ and integrate that over the region, the ball of radius 2. Then, first step: what is $\nabla (x^{10}+ y^{10}+ z^{10})$?
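For a cross-check of the two routes, note that on the sphere of radius 2 the outward normal is $\vec n=(x,y,z)/2$, so the field $\vec F = 2(x^9, y^9, z^9)$ (one convenient choice, not necessarily the one the course intended) has $\vec F\cdot\vec n = x^{10}+y^{10}+z^{10}$ and $\nabla\cdot\vec F = 18(x^8+y^8+z^8)$. Both sides then reduce by symmetry to one-dimensional integrals via $\int_0^\pi \cos^n\theta\,\sin\theta\,d\theta = 2/(n+1)$ for even $n$:

```python
import math

a = 2.0  # sphere radius

def cosn_sin(n):
    # Closed form for even n: int_0^pi cos(t)^n sin(t) dt = 2/(n+1)
    return 2 / (n + 1)

# Surface side: by symmetry the integral is 3x the surface integral of z^10,
# with z = a*cos(theta) and dS = a^2 sin(theta) dtheta dphi.
surface = 3 * 2 * math.pi * a**12 * cosn_sin(10)        # = 49152*pi/11

# Volume side (divergence theorem with F = 2*(x^9, y^9, z^9)):
# div F = 18*(x^8+y^8+z^8) -> 54x the integral of z^8 over the ball,
# i.e. 54 * 2*pi * (a^11/11) * int cos^8 sin.
volume = 54 * 2 * math.pi * (a**11 / 11) * cosn_sin(8)

print(round(surface, 2))  # ≈ 14037.78
```

The two sides agree, and both equal $49152\pi/11 \approx 14037.78$, which is the numeric value the problem asks for.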
http://tex.stackexchange.com/questions/28781/texlive-how-to-determine-which-package-a-font-is-contained-in?answertab=votes
# texlive: How to determine which package a font is contained in? I've written a script that repeatedly runs lualatex file.tex to install missing packages, but I'm having a hard time doing the same with missing fonts. When an error message like `! Font \OT1/ppl/m/n/9=pplr7t at 9pt not loadable: metric data not found or bad.` comes up, I would like to determine the package containing ppl. Is there a way to do this algorithmically? (For example, when there were missing x.sty or x.cls files, I was able to run `tlmgr search --file x.sty` and parse out the package it was in. If there is a ppl.some_extension file that every font has and that tlmgr could search for, that would be perfect.) The script can be found here. - Well, your message says that pplr7t.tfm is missing, so if tlmgr is able to find single files in packages, `tlmgr search --file pplr7t.tfm` should work (I have miktex so I can't know). - I think this may be it (at least for the above class of error). Can anyone confirm that all fonts have a .tfm file like the above? – scallops Sep 18 '11 at 14:56 All fonts which you can use with pdflatex must have a tfm. With luatex/xetex they are no longer needed: both engines are able to extract the tfm information on-the-fly from various font types. But with font names like the above (a short mix of numbers and chars) you can safely assume that a real tfm file is needed. – Ulrike Fischer Sep 18 '11 at 15:31 Thanks! I've added this algorithm to the script. – scallops Sep 19 '11 at 1:03 The message relates to a particular make, style, face and size of font for which the font metric information is not available. The name (in this case pplr7t) is a short form of the longer name and is referred to as a 'Berry name'. You can find how the name is constructed in the file fontname.pdf (`texdoc fontname` on a TeX Live system). In this case the request is for an Adobe Palatino font ('p' = Adobe, 'pl' = Palatino, 'r' = regular roman).
The Berry name is formed by the following basic scheme (quoting from fontname.pdf; the spaces here are merely for readability):

    S TT W [V...] [N] [E] [DD]

where

- S represents the supplier of the font.
- TT represents the typeface name.
- W represents the weight.
- V... represents the variant(s), and is omitted if both it and the width are normal. Many fonts have more than one variant.
- N represents the encoding, and is omitted if the encoding is nonstandard. Encodings are subsumed in the section on variants (see Section 2.4 [Variants], page 20).
- E represents the width ("expansion"), and is omitted if it is normal.
- DD represents the design size (in decimal), and is omitted if the font is linearly scaled.

The lack of the font can be because:

- it is not installed on your system but is requested (either by a package or by your own commands); or
- the font is installed but the particular combination of face, weight, size, etc. is not available (e.g. many fonts do not support semi-bold and hardly any support italic small capitals); or
- what is looked for is installed but there is a problem with the TeX font metrics file (.tfm) or the font map file (.map); or
- the commands mktexlsr (texhash) and updmap-sys were not run, or not run correctly, when the font was installed (these are needed to correctly set up an updmap.cfg file that lists all the font maps available to the user or on the system).

There are many questions and answers on the site covering these specific points in detail.

- This is very enlightening, thanks. However, assuming that the error is simply due to our not having installed the font before, is there an algorithmic way to go from the error message and do so? I suppose looking up "pl" ==> "palatino" could be automated, but after that I don't know how I would find its package (I currently have to search for packages with promising descriptions - and that's clearly out of the question for a script.) – scallops Sep 18 '11 at 9:13
- There may not be a package for the font and, even when there is, there may be a need to install both the package and the font (for example, there may be a package on CTAN to allow use of a commercial font, so you could install the package but would also need to licence and install the font. Adobe Minion is such a commercial font with a CTAN package.) – mas Sep 18 '11 at 16:20
- Aye, I ended up realizing that the font situation was a lot messier than I'd thought. My script tries to be smart, but just ends up "brute forcing" if all else fails. – scallops Sep 18 '11 at 16:49

-

ppl is the abbreviation for Palatino. I suppose that you are missing the line

    \usepackage{tgpagella}

or alternatively, for LuaLaTeX (the better choice):

    \documentclass{article}
    \usepackage{fontspec}
    \setmainfont{TeX Gyre Pagella}
    \begin{document}
    Text in Palatino
    \end{document}

- While I was asking about a more general issue, thank you for the in-document instructions! – scallops Sep 18 '11 at 9:36
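Combining the two answers above, the algorithmic lookup the asker wants can be sketched roughly as follows — a minimal Python sketch, where the helper names are hypothetical and the parsing of tlmgr's output is approximate (it may need adjusting for your tlmgr version):

```python
import re
import subprocess

# Matches error lines like:
# ! Font \OT1/ppl/m/n/9=pplr7t at 9pt not loadable: metric data not found or bad.
FONT_ERROR = re.compile(r"^! Font \\\S+=(\S+) at [\d.]+pt not loadable")

def missing_tfm(log_line):
    """Extract the Berry name (e.g. 'pplr7t') from a TeX font error line,
    returning the .tfm filename to search for, or None if no match."""
    m = FONT_ERROR.match(log_line)
    return m.group(1) + ".tfm" if m else None

def find_package(tfm):
    """Ask tlmgr which package ships the given .tfm file (requires TeX Live).
    tlmgr prints candidate package names followed by indented file paths;
    this parsing is approximate."""
    out = subprocess.run(["tlmgr", "search", "--file", tfm],
                         capture_output=True, text=True).stdout
    return [line.rstrip(":") for line in out.splitlines()
            if line and not line[0].isspace()]
```

A script can then feed `missing_tfm(...)` results into `find_package(...)` and install whatever package comes back, falling back to a broader search when nothing matches.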
https://www.nature.com/articles/s41467-021-25861-4
## Introduction Recent years have seen tremendous advances in understanding nonlinear complexity through studies in optical systems that allow real-time observation of ultrafast instabilities. For example, studies in optical fibres have yielded new insights into the modulation instability of the nonlinear Schrödinger equation (NLSE)1,2,3,4,5, improving our understanding of noise-driven nonlinear dynamics, and stimulating new approaches to classify localised soliton structures using machine learning6. Other studies have focused on instabilities in dissipative soliton lasers, where the laser operation is governed by the balance between nonlinearity and dispersion, and energy input and loss7,8,9,10,11,12. Such lasers are usually configured to produce highly regular pulse trains of ultrafast solitons13,14, but they can also exhibit a range of more complex temporal and spectral characteristics. Specifically, the coupling between nonlinearity and dispersion in the cavity can result in instabilities arising from the intrinsic chaotic dynamics of NLSE and NLSE-like systems15,16. The particular dynamics that are observed depend on the dimensionality of the system under study17, and for certain laser designs, it has been possible to see clear signatures of low-dimensional nonlinear dynamics such as the development of complex temporal pattern formation18, and bifurcation routes to chaos19. More commonly, however, fibre lasers possess a very large number of degrees of freedom so that instabilities are high-dimensional such that the particular operating point of stable pulse train generation can be viewed as an attractor in a multi-dimensional parameter space. In some cases, variation of the cavity parameters about these stable points can induce transition into unstable regimes involving interactions between a small number of circulating pulses (typically 1–10) in the cavity. 
Studying these instabilities with real-time measurement techniques has led to improved insight into processes such as soliton molecule coupling20,21,22, complex temporal pattern formation in lasers23,24,25, soliton explosion and rogue wave emergence24,26, and complex intermittence27,28. In fact, from a more general perspective, these regimes are neither pure mode-locked pulses, nor continuous-wave generation. These highly complex lasing regimes are characterised by the co-existence of both localised nonlinear structures and linear dispersive waves. In addition to regimes involving only a small number of interacting pulses, a more complex multiscale laser instability has been shown to involve a much greater number (100–1000) of ultrafast pulses evolving randomly underneath a much broader envelope. This multiscale regime of noise-like pulse operation was first discovered by Silberberg et al. in 199729, and has since been seen in a wide range of different laser configurations, and with both normal and anomalous dispersion30. The majority of studies have typically focused on instabilities generating spectral bandwidths of 10s of nm31,32,33,34,35,36,37, but in a highly nonlinear regime with a significant spectral broadening in the cavity, bandwidths much greater than the gain bandwidth (100s of nm) have been observed38,39,40,41. In addition to their clear interest from a dynamical systems perspective, such lasers have found several important applications30,42. Significantly, while some applications such as tomography and metrology explicitly build on the low temporal coherence of such sources43, applications in material processing exploiting the burst-like nature of the pulsed output have also been demonstrated44. Somewhat paradoxically, the large number of experimental studies of noise-like pulse lasers under so many different conditions has made it difficult to clearly identify the underlying physics. 
Also, the multiscale nature of the laser operation has not always been appreciated in experiments measuring only the envelope or burst output characteristics. Nonetheless, the underlying role of NLSE-like instabilities has been suggested from numerical studies, using both cubic–quintic Ginzburg–Landau equation modelling45,46,47 and iterative cavity simulations30,31. Further modelling48,49 has revealed how these dynamics can lead to rogue wave statistics, confirming previous experiments in the narrowband noise-like pulse regime34,35,50. Significantly, the results of these previous studies have now extended traditional notions of laser operation beyond average dynamical models to include concepts such as strong dissipation, regenerative saturable absorption51, random lasing52, and intracavity turbulence53. The study of turbulent behaviour in lasers is a subject of particular interest, and can be physically interpreted as a consequence of the large number (~104–106) of interacting frequency modes underneath the evolving broadband field54,55,56. Moreover, linking irregular dynamics to turbulence is also interesting from the perspective of understanding intracavity extreme events, which have already been observed in single-pass experiments57,58. A particular challenge is to understand the dynamics of noise-like pulse lasers generating the broadest bandwidths, because the presence of highly nonlinear fibre in the cavities used suggests a major role played by supercontinuum broadening. In this case, any modelling in a laser context is computationally very expensive, and measuring these multiscale instabilities in the laboratory is also extremely challenging. These different factors clearly represent a serious limitation when a full understanding of such a rich dynamical laser system is of evident interest from both fundamental and applied perspectives. 
Here, we report a combined numerical and experimental study of an extreme dissipative soliton noise-like pulse laser generating an output supercontinuum spectrum of ~1000 nm bandwidth, and with intracavity spectral width varying two orders of magnitude over one roundtrip. Our stochastic numerical simulations allow us to identify the origin of the laser instability as due to the sensitivity to noise of nonlinear soliton dynamics, particularly multiple interactions and collisions between incoherently evolving sub-picosecond Raman solitons. Our experiments use time and frequency-domain techniques to characterise the multiscale dynamics, and our simulations reproduce quantitatively the supercontinuum broadening, and the probability distributions of temporal and spectral fluctuations, including rogue wave events. ## Results ### Modelling and dynamics We first use numerical modelling to illustrate the general features of the noise-like pulse instability regime. Figure 1 shows the dissipative laser system upon which our modelling is based. Typical of dissipative soliton systems, we consider a unidirectional ring cavity consisting of a number of discrete segments where the intracavity field experiences qualitatively different evolution9,13. The modelling uses an iterative map approach describing each cavity element by a suitable transfer function. For a given set of system parameters, the model seeks convergence to a particular operating state after injection of an initial seed (see ‘Methods’ section). Although scalar models have been shown to reproduce aspects of dissipative soliton laser dynamics qualitatively59, for quantitative comparison with experiments we implement a more complete approach based on coupled generalised vector nonlinear Schrödinger equations (NLSE). This was found essential to obtain quantitative agreement with experiments. 
Indeed, it is important to stress that although mean-field models such as those based on the Ginzburg–Landau equation are able to reproduce general features of dissipative soliton lasers46, the iterative cavity approach is necessary when describing a system with such dramatic variation in evolution between the different cavity segments. The simulations use a standard approach, with initiation from a low amplitude noise seed60 injected at point A before the EDF. The intracavity field then evolves over multiple roundtrips until convergence to a particular operating state. For a stable state, the spectral and temporal field characteristics at any point in the cavity reproduce themselves after one roundtrip, whilst in a noise-like pulse state, the temporal and spectral characteristics at any point fluctuate significantly with roundtrip but the energy nonetheless has a well-defined mean. Typical energy fluctuations after the build-up to the noise-like pulse regime are ~10%, and the physical origin of this behaviour is the chaotic nature of the NLSE and gain/loss dynamics when seeded by noise. In our experiments, segment AB consists of 11 m of Erbium-doped fibre (EDF), segment BC consists of 2.87 m of standard single-mode fibre (SMF), segment CD consists of 10.3 m of highly nonlinear fibre (HNLF), and segments DE and FA consist of 4.45 m and 7.80 m of SMF. A 28.1 cm bulk-optics free space segment EF includes a nonlinear-polarisation rotation-based saturable absorber (using waveplates and a polarisation beamsplitter)61, and a narrowband spectral filter to control the bandwidth of the pulses reinjected into the EDF13,62. The cavity uses non-polarisation-preserving fibre, and the repetition rate is 5.59 MHz (roundtrip time of ~179 ns.) The dispersion and nonlinearity parameters of the fibres used are given in the ‘Methods’ section. 
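As a rough consistency check on these figures, the quoted roundtrip time and repetition rate follow directly from the segment lengths above — a sketch assuming a fibre group index of n ≈ 1.468, a typical silica value not stated in this excerpt:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBRE = 1.468     # assumed group index of silica fibre (not given in the excerpt)

# Segment lengths from the text, in metres
fibre = 11.0 + 2.87 + 10.3 + 4.45 + 7.80   # EDF + SMF + HNLF + SMF + SMF
free_space = 0.281                          # bulk-optics segment EF

roundtrip = (N_FIBRE * fibre + free_space) / C   # seconds
rep_rate = 1.0 / roundtrip                       # Hz

print(f"roundtrip ~ {roundtrip*1e9:.1f} ns, repetition rate ~ {rep_rate/1e6:.2f} MHz")
```

This reproduces the stated ~179 ns roundtrip time and 5.59 MHz repetition rate to within the precision of the assumed group index.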
The loss due to splicing and output coupling was ~6 dB (with primary output coupling of 40% in segment DE), and the total energy loss at the spectral filtering stage is ~10 dB (associated with the bandwidth reduction from the supercontinuum spectrum at the HNLF output to the much narrower bandwidth at the EDF input). Our study of this particular design is motivated by the need to understand noise-like pulse dynamics in the broadband regime. Specifically, the majority of previous studies of noise-like pulse lasers have focussed on narrowband systems with 10s of nm bandwidth30, where modelling and simulations have shown how the dynamics arise from the interaction between self-phase modulation and group-velocity dispersion in the cavity13,30,45,46,47, with only minor contributions from higher-order effects. With the addition of anomalous dispersion HNLF, however, the dynamics change qualitatively and quantitatively with the processes of incoherent soliton fission and the Raman soliton self-frequency shift combining to dominate the intracavity spectral broadening63,64. Moreover, in contrast to other dissipative soliton laser designs that can exhibit both stable soliton and noise-like pulse operation65, the use of such a long length of HNLF results in this system operating only in the noise-like instability regime, irrespective of the laser gain or waveplate orientations. In the simulation results that follow, we therefore focus on the dynamics of this unstable broadband operating state with parameters corresponding to our experiments, although we refer when appropriate to Supplementary Information which shows simulations for other parameters in order to clarify certain features of the underlying physics. We begin by showing typical simulation results in the noise-like pulse regime for our experimental parameters as above. 
Figure 2a shows typical simulated spectral and temporal evolution over one cavity roundtrip, after build-up when the pulse has entered the regime of constant mean energy. These results were obtained after scanning the simulation parameters to obtain energy and noisy envelope durations comparable to the experiment (see ‘Methods’ section), and correspond to a mean intracavity energy (over 1000 roundtrips) at the EDF output of 10.5 nJ. We plot the total intensity of both polarisation components (see ‘Methods’ section). The different propagation steps A–F refer to the different points in the cavity shown in Fig. 1. In the frequency domain, Fig. 2a shows how the spectral characteristics vary significantly over one roundtrip. We see amplification in the EDF (segment AB), dramatic supercontinuum spectral broadening in the HNLF (segment CD), and the strong effect of spectral filtering in the bulk segment EF. Indeed the spectral extent varies by two orders of magnitude over one roundtrip, from ~10 nm FWHM immediately after the spectral filter, to a supercontinuum with spectral components spanning ~1000 nm at the HNLF output. In segment DE, the reduced nonlinearity of the SMF (coupled with reduced power due to output coupling) yields essentially linear propagation without additional spectral broadening. As we will see in Figs. 3 and 4, the shot-to-shot spectra exhibit significant fine structure, but this is not apparent in Fig. 2 because of the false-colour visualisation used. The associated time-domain evolution in Fig. 2a reveals significant variation in the sub-picosecond temporal structure over one roundtrip, while the slower ~300 ps envelope remains largely unchanged. The simulations also reveal the presence of clusters of solitons under the envelope (see for example timebase values in the range ~[−150, −50] ps), an effect previously suggested by low-resolution temporal measurements, and associated with an additional scale of time-domain structure66. 
Note that we also confirm the presence of these clusters in our experiments below. After the HNLF propagation phase, the linear evolution in the subsequent SMF segment DE is associated with dispersive broadening of the solitons formed in the HNLF. Note that the apparent temporal refraction of the pulse trajectories across the HNLF-SMF boundary (point D) arises from the differences in group-velocity dispersion in the two fibres. We also see how the spectral filtering in the bulk segment EF has a significant effect on the temporal evolution by removing all field components outside the filter bandwidth, particularly the long-wavelength shifted highest intensity Raman solitons. To more clearly show the temporal evolution after this filtering step, the intensity colourmap in segment FA has been scaled by a factor of 10×. Figure 2b provides an expanded view of the evolution in the HNLF segment CD over a 40 ps time window, together with the input and output intensity profiles. These results clearly reveal the dominant physics associated with the temporal compression of the TFWHM ~ 500 fs random pulses at the input to yield ejection of strongly localised sub-100 fs pulses that then undergo typical supercontinuum dynamics of soliton collisions and Raman shifting to longer wavelengths67. Indeed, using P0 ~ 70 W and T0 = TFWHM/1.76 ~ 300 fs as estimates of the mean peak power and duration of the random pulses at the input to the HNLF, the corresponding soliton number is $$N={(\gamma {P}_{0}{T}_{0}^{2}/| {\beta }_{2}| )}^{1/2}\approx 4.5$$, supporting this interpretation68. Associated with these multiple soliton dynamics in the HNLF is the generation of multiple dispersive waves at shorter wavelengths extending to below ~1100 nm, as seen clearly in the spectral evolution in Fig. 2a. We note here that the evolution of the Raman soliton trajectory in Fig. 
2a appears to show spectral broadening, but this is in fact an artefact associated with the fact that we are plotting spectral evolution against wavelength. When plotted against frequency, the soliton bandwidth remains constant once it has clearly separated from the central spectral region. On the other hand, soliton evolution in the presence of the Raman effect and higher-order dispersion can exhibit complex accelerating beam characteristics69, and this is especially apparent in the trajectories of the time-domain solitons seen in the false-colour plot in Fig. 2b. The results in Fig. 2b clearly show that the input field to the HNLF consists of a large number of irregular localised pulses. As a result, the field injected in the HNLF does not excite narrowband processes such as modulation instability11,70,71, but rather the dynamics are dominated by incoherent soliton fission. In this case, the decoherence that develops during propagation arises from the effect of noise on soliton interaction and collisions63,64 and these results highlight the importance of incoherent soliton fission in noise-like pulse lasers. Of course, under certain conditions, both modulation instability and incoherent soliton fission yield similar characteristics with very large numbers of interacting solitons, and in this case, a description in terms of soliton turbulence is more appropriate in both cases54,55,56,70,72,73. This is also the regime where extreme event rogue wave statistics can be observed57,58,74,75. Additional insight into these dynamics is shown in Fig. 3. Here we plot the computed time-frequency spectrogram at the HNLF output, together with the corresponding temporal and spectral intensity profiles (see ‘Methods’ section). Note that we also display the temporal intensity on a logarithmic axis to illustrate the strongly localised burst-like nature of the noise-like pulse emission. 
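The soliton number N ≈ 4.5 estimated above for the HNLF input follows from the standard formula N = (γP₀T₀²/|β₂|)^{1/2}; the γ and β₂ values below are illustrative assumptions chosen to be typical of HNLF (the actual fibre parameters are given in the ‘Methods’ section):

```python
import math

# Estimates from the text for the field entering the HNLF
P0 = 70.0       # mean peak power, W
T0 = 300e-15    # T0 = T_FWHM/1.76 ~ 300 fs, in seconds

# Illustrative HNLF parameters (assumed -- see the paper's Methods for real values)
gamma = 10.5e-3   # nonlinearity, 1/(W m)   (i.e. 10.5 /W/km)
beta2 = -3.3e-27  # group-velocity dispersion, s^2/m  (i.e. -3.3 ps^2/km)

N = math.sqrt(gamma * P0 * T0**2 / abs(beta2))
print(f"soliton number N ~ {N:.1f}")   # the paper quotes N ~ 4.5
```

N > 1 indicates that the injected random pulses undergo higher-order soliton compression and fission rather than propagating as fundamental solitons, consistent with the dynamics described above.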
The spectrogram reveals a time-frequency structure typical of Raman-dominated supercontinuum dynamics63, highlighting how each of the solitons generates its own dispersive wave component, and also highlighting the prominent dispersive wave tail associated with the longest wavelength soliton. An accompanying animation (Supplementary Movie 1) shows the evolution of the spectrogram over one cavity roundtrip, and is valuable in revealing the specific dynamics occurring in each segment. To confirm the soliton nature of the sub-picosecond peaks in the temporal intensity, the highest peak (A) is shown in expanded view on a logarithmic scale (black solid line), and compared with a hyperbolic secant soliton fit (red dashed line). In fact, since the spectrogram allows us to determine the wavelength associated with each temporal peak, we can readily compute the associated soliton number using wavelength-corrected HNLF nonlinearity and dispersion parameters (see ‘Methods’ section). This analysis yields N ~ 1 for all the strongly localised temporal peaks seen at the HNLF output, further confirming the importance of intracavity soliton dynamics. The Raman self-frequency shift of the ejected solitons yields the long-wavelength spectral extension. And indeed, reaching short wavelengths around 1100 nm would not be possible without the associated dispersive wave generation. However, the presence of Raman scattering is not in itself a necessary condition to observe noise-like pulse operation, and simulations with no HNLF in the cavity still show unstable pulse characteristics for certain parameter regimes. However, without the HNLF, the system operates only in the narrowband regime (see Supplementary Information and Supplementary Fig. 1), as seen in a number of previous studies29,31,32,33,34,35,36,37. 
The essential design parameter here is of course the HNLF length, but additional simulations show that broadband noise-like pulses can be generated with short lengths of HNLF down to ~1 m (Supplementary Fig. 2). Of course, the role of the Raman soliton dynamics and the overall bandwidth increase with longer lengths of HNLF, as would be expected on physical grounds (Supplementary Fig. 3). The key characteristic of the noise-like pulse regime is of course the dramatic shot-to-shot fluctuations, and to illustrate random field evolution over sequential roundtrips, Fig. 4a, b plot the temporal and spectral characteristics at the input and output of the HNLF respectively for 5 sequential roundtrips (after converging to constant mean energy.) The parameters here are the same as in Figs. 2 and 3 corresponding to the 10.3 m HNLF used in the experiment. These results clearly illustrate the dramatic shot-to-shot variation associated with this operating regime, and the temporal profiles, in particular, show the expected characteristics of the noise-like pulse regime. Specifically, the temporal profiles (shown over a 500 ps span within the 1.3 ns computation window—see ‘Methods’ section) clearly show the large number (~300) of chaotic soliton peaks whose intensities vary dramatically from shot-to-shot. The spectral characteristics also show significant fluctuating fine structure. Note that for the spectral results, we also plot the mean spectrum computed over a much larger number of roundtrips (1000) to highlight the fact that these fluctuations are not at all apparent using average spectral measurements. Comparing Fig. 4a, b further highlights the extent of the HNLF spectral broadening. ### Experiments and comparison with simulations Both time-averaged and statistical predictions of our numerical model have been confirmed using experiments. The overall design of our laser system has been described above (and in Fig. 1). 
The EDF was pumped at 976 nm, with a pump power threshold of 40 mW for pulsed operation, although we did not observe stable mode-locked operation at any pump power or waveplate orientation28. On the other hand, over a wide range of pump powers up to 500 mW, suitable adjustment of the waveplates yielded noise-like pulse operation with chaotic envelope emission at the cavity repetition rate, and broadband spectral output. The results reported below at a pump power of 225 mW are typical of this regime. For this value of pump, the average power at the primary output in branch DE was 13 mW (see ‘Methods’ section). We used a range of diagnostics to characterise the laser output. The real-time shot-to-shot characterisation was possible for the ~10 nm (3 dB) bandwidth pulses input to the HNLF (point C in Fig. 1) using several different techniques: direct measurement of overall envelope fluctuations via a fast photodiode, a time-lens system to measure sub-picosecond soliton structure on the circulating pulses, and a dispersive Fourier transform (DFT) setup for spectral measurements. At the HNLF output, the large spectral bandwidth of the supercontinuum pulses exceeded the measurement bandwidths of our real-time devices, but time-averaged measurements of the spectra over the full bandwidth of 1000–2100 nm were performed using an OSA and a near-infra-red spectrometer. In this context, we note that temporal measurements using intensity autocorrelation are of partial utility, as they indicate just the presence of instability through a coherence spike on a broad envelope, and give only an indirect measure of the average sub-structure pulse duration30. Figure 5 shows the instability properties of the laser measured using direct photodetection. Figure 5a plots data from a fast photodiode over a time span of 20 μs to illustrate the emission of highly unstable pulses at the 5.59 MHz repetition rate (period of ~179 ns). 
Although these measurements do not resolve structure faster than the system response time of ~25 ps, we can still accurately characterise the ~300 ps envelope, and, within the detection bandwidth, we can even observe variation in the envelope structure with roundtrip. Figure 5b unwraps the raw photodiode data to plot the roundtrip variation of the envelope, and reveals the presence of sub-cluster structure evolving over ~500–1000 roundtrips. Although these results clearly reveal shot-to-shot instability on the envelope, the temporal resolution is limited, as seen in the selection of intensity profiles plotted in Fig. 5c. However, these results are important, because the sub-clusters seen on the envelope confirm previous studies of similar envelope sub-structure66, and are a clear illustration of random pattern formation and multiscale dynamics in a dissipative soliton laser23,24,25. In particular, although they are unstable, the sub-clusters are relatively long-lived over multiple roundtrips (~100s of μs), and they consist of a large number of localised solitons of ~500 fs duration. Of course, the photodiode measurements are unable both to fully resolve these long-timescale characteristics and to directly measure the ultrafast soliton structure, and it is for this reason that we performed time-lens measurements for shot-to-shot characterisation on the sub-ps scale. The time-lens system used is described in the ‘Methods’ section and had a temporal resolution of ~340 fs and a physical (i.e. demagnified) measurement window of ~100 ps. Although the finite acquisition window precludes characterisation across the full ~300 ps pulse envelope, by operating the time-lens asynchronously10, we are nonetheless able to sample the random pulse structure across the envelope over multiple laser roundtrips.
Because of the time-lens measurement bandwidth of ~12 nm, real-time characterisation was possible only for the input pulses to the HNLF, but as we shall see, these measurements clearly reveal both the sub-ps pulse structure and the rogue wave statistics expected from the underlying nonlinear dynamics. Figure 6a shows 5 typical results from the time-lens measurements, plotting the temporal profiles (after demagnification) over a 50 ps timebase. For comparison, 5 typical results from numerical simulations are also shown in Fig. 6b, and we clearly see the strong visual similarity between the measured and simulated intensity peaks. To compare these results more quantitatively, Fig. 6c plots the computed probability density functions for the peak intensities computed from the experiment (red) and simulation (black). These density functions were computed from a time series of 21,000 temporal peaks measured experimentally, and 135,000 peaks analysed from simulation. The intensity normalisation was relative to the mean of each respective time series. The inset plots the density functions on a logarithmic scale. We first note excellent agreement between the simulated and experimental results, with simulations reproducing the experimentally measured intensity probability density function over 3 orders of magnitude. The inset also shows a very good agreement between the simulation statistics and a negative exponential fit (green line), where an exponential distribution is expected here on physical grounds given the randomness of the intensity fluctuations in the noise-like pulse regime. Indeed, such negative exponentially distributed statistics have been seen in previous experiments studying statistics of modulation instability and noise-like pulse lasers6,35. 
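The peak-intensity statistics described here can be sketched with synthetic data — exponentially distributed peaks stand in for the measured time series, and the rogue-wave threshold is taken as twice the significant intensity, assumed here to be the mean of the highest third of peaks (the usual hydrodynamic convention; the paper's exact definition is in the ‘Methods’ section). The resulting numbers differ from the measured ones because the real distribution is only approximately exponential:

```python
import random

random.seed(1)

# Synthetic stand-in for the measured peak-intensity series:
# negative-exponential statistics, normalised to unit mean.
peaks = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(peaks) / len(peaks)
peaks = [p / mean for p in peaks]

# Significant intensity: mean of the highest third of peaks (assumed convention).
top_third = sorted(peaks)[-len(peaks) // 3:]
i_significant = sum(top_third) / len(top_third)
i_rw = 2.0 * i_significant                  # rogue-wave intensity threshold

# Fraction of peaks formally classified as rogue waves.
fraction = sum(p > i_rw for p in peaks) / len(peaks)
print(f"I_RW ~ {i_rw:.2f}, rogue-wave fraction ~ {100*fraction:.2f}%")
```

For an ideal exponential distribution this gives a threshold near 4.2 and a rogue-wave fraction around 1.5%, somewhat above the paper's measured values — the deviation reflects the departure of the true peak statistics from a pure negative exponential.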
In addition to clearly revealing the statistical behaviour of the sub-ps soliton structure of noise-like pulses, these results also strongly confirm the fidelity of the model in reproducing the time-domain properties of the laser system. We also see (from both simulation and experiment) that a significant fraction of the measured intensity peaks exceeds twice the significant intensity threshold (see ‘Methods’ section), thus meeting the statistical criterion to be formally classified as intracavity rogue waves. Indeed, the rogue wave intensity thresholds computed from the experiment ($$I_{\mathrm{RW}}^{\exp}=3.64$$) and simulation ($$I_{\mathrm{RW}}^{\mathrm{sim}}=3.68$$) are in very good agreement, and the experimental threshold is shown in the figure as the dashed blue line. From these data, it is also straightforward to calculate the fraction of temporal peaks satisfying the rogue wave criterion, which yields 0.43% from simulation, and 0.10% from the experiment. When we are in the noise-like pulse regime generating broadband spectra, the temporal probability distribution shown in Fig. 6c is largely insensitive to small changes in the design parameters. Specifically, numerical simulations with small changes (within ±10%) in parameters such as fR, Esat, g0, and the length of the HNLF show a negligible effect on the temporal pulse characteristics, and thus a negligible effect on the associated probability density function. This is also seen in experiments when, for example, we make small changes in pump power. We attribute this insensitivity to the fact that our cavity configuration contains a long length of HNLF such that we observe noise-like pulse behaviour at all pump powers above the threshold. Complementing the time-domain measurements in Fig. 5, we were also able to record real-time spectral data. Specifically, the DFT method was used to record spectral fluctuations at the HNLF input, and Fig.
7a shows a false-colour representation of the shot-to-shot variation measured over 1000 roundtrips. The fidelity of the DFT method was confirmed by computing the average over this measurement ensemble and comparing with an averaging optical spectrum analyser. The agreement between these measurements is clear from Fig. 7b, and this figure also shows the average input spectrum obtained from simulations. There is a small difference between the experimental and simulation bandwidth, but this does not influence the interpretation of our results. As we saw in the simulation results in Fig. 4, the shot-to-shot spectra also show a strong sub-structure of spectral peaks. Although analysing spectral peak statistics does not have the same direct physical interpretation in terms of soliton dynamics as analysis of temporal peaks, several studies have characterised spectral peak fluctuations in the context of identifying frequency-domain rogue wave events34,50. From our real-time spectral measurements, we computed peak statistics in the same way as for the temporal data, and the corresponding probability density function is shown in Fig. 7c. Here we also compare with simulation, but note that the simulated spectra were convolved before peak analysis with a response function to model the spectral resolution of the experimental DFT data (see ‘Methods’ section). The figure plots results on both linear and logarithmic scales, and as with the temporal data, there is very good agreement. The rogue wave spectral peak intensity thresholds computed from the experiment ($$S_{\rm RW}^{\rm exp}=3.22$$) and simulation ($$S_{\rm RW}^{\rm sim}=3.20$$) are again in very good agreement; the experimental threshold is shown in the figure as the dashed blue line. From this data, it is also straightforward to calculate the fraction of spectral peaks satisfying the rogue wave criterion, which yields 0.04% from simulation and 0.07% from the experiment.
These density functions were computed from a time series of 57,500 spectral peaks measured experimentally, and 44,900 peaks analysed from simulation. The intensity normalisation was relative to the mean of each respective time series. Finally, we discuss Fig. 7d, which shows the measured average spectrum at the HNLF output (red curve). The inset uses a logarithmic scale, and plots over an extended wavelength range to show the dispersive wave components by combining measurements from the Anritsu MS9710B OSA and the NIRQuest512 spectrometer (see ‘Methods’ section). The spectrum is highly asymmetric, consisting of a primary spectral peak of ~40 nm bandwidth, an extended long-wavelength tail, as well as a short-wavelength dispersive wave structure. The overall span of the measured spectrum approaches ~1000 nm. These experimental results are reproduced in our modelling, where the solid black line shows the simulated average spectrum (computed over 1000 roundtrips) based on the full generalised propagation model. There is very good agreement between experiment and simulation and, given the complexity of the dissipative soliton system that we are modelling, we stress at this point the significance of these results. Specifically, although there have been a large number of distinct experimental and numerical studies of noise-like lasers, our simulations here quantitatively reproduce the observed energy characteristics, the time-averaged broadband spectrum over a span approaching 1000 nm, as well as the intensity statistics of random sub-picosecond soliton structures. Moreover, they clearly show the role of soliton dynamics, allowing us to physically associate the long-wavelength tail with the random evolution dynamics of Raman solitons, and the short-wavelength structure with the corresponding dispersive waves.
Note that the modulation in the dispersive wave spectral structure arises from cross-phase modulation from the generating soliton pulse76, sometimes described as analogous to event horizon dynamics77. The importance of the Raman dynamics is explicitly seen by comparing experiment to simulations performed in the absence of the Raman contribution (i.e. fR = 0). These are shown as the dashed black line, and we clearly see that the long-wavelength spectral extension, in this case, is absent. Note that the stronger dispersive wave component here arises from more efficient dispersive wave energy transfer when there is no Raman-induced wavelength shifting.

## Discussion

Despite the fact that the incoherent noise-like pulse regime of optical fibre lasers has been observed for nearly 25 years, its physics has been understood only in very general terms. Indeed, the appellation noise-like is itself a generic description, and provides no information, or even hints, about the pulse dynamics or a possible underlying instability mechanism. The experiments and modelling reported here have addressed this problem for the particular case of a broadband dissipative soliton laser operating in a highly nonlinear regime. Our experiments reveal multiple ultrafast localised structures with random characteristics typical of soliton turbulence, and our simulations reveal the associated intracavity dynamics of soliton fission, Raman evolution and supercontinuum generation. The simulations predict both time-averaged and statistical properties in quantitative agreement with experiments. From an experimental viewpoint, our results have provided a further example of the great utility of ultrafast real-time measurements in providing new insights into complex nonlinear dynamics in optical fibre systems.
Although we have applied real-time characterisation to the particular case of a broadband incoherent dissipative soliton laser, these methods are general, and we expect future experiments to study similar instability mechanisms in regimes of noise-like pulse laser operation with narrower bandwidths. In particular, developing a more complete understanding of the relative importance of modulation instability and incoherent soliton fission in driving irregular cavity dynamics is likely to be an important area of future study. Perhaps most significantly, our results suggest that for this case of broadband instability in a highly nonlinear regime, we can clarify the noise-like pulse regime as one where intracavity supercontinuum dynamics play a dominant role. This work extends our knowledge of dissipative soliton systems, and further highlights the rich dynamics of laser oscillators when operated far from a weakly-perturbative dynamical regime.

## Methods

### Numerical modelling

Numerical simulations of laser pulse evolution used an iterative map with appropriate transfer functions for each cavity element78. We write the pulse amplitude as $${\bf A}(z,T)=\hat{\bf x}\,u(z,T)+\hat{\bf y}\,v(z,T)$$, where u(z, T) and v(z, T) are the field components along the two principal polarisation axes.
The general propagation model for each fibre segment was based on the coupled generalised nonlinear Schrödinger equations (GNLSE) given by:

$$\begin{array}{ll}&(\partial_z-{\rm i}\,\Delta\beta_0/2+\Delta\beta_1/2\,\partial_T+{\rm i}\,\beta_2/2\,\partial_T^2-\beta_3/6\,\partial_T^3-\hat{g}/2)\,u(z,T)=\\ &{\rm i}\gamma(1+{\rm i}/\omega_0\,\partial_T)\left\{(1-f_{\rm R})\left[(|u|^2+2/3\,|v|^2)u+1/3\,v^2u^*\right]\right.\\ &\left.+\,f_{\rm R}\,u(z,T)\,h_{\rm R}(T)*(|u(z,T)|^2+|v(z,T)|^2)\right\}\end{array}$$ (1)

$$\begin{array}{ll}&(\partial_z+{\rm i}\,\Delta\beta_0/2-\Delta\beta_1/2\,\partial_T+{\rm i}\,\beta_2/2\,\partial_T^2-\beta_3/6\,\partial_T^3-\hat{g}/2)\,v(z,T)=\\ &{\rm i}\gamma(1+{\rm i}/\omega_0\,\partial_T)\left\{(1-f_{\rm R})\left[(|v|^2+2/3\,|u|^2)v+1/3\,u^2v^*\right]\right.\\ &\left.+\,f_{\rm R}\,v(z,T)\,h_{\rm R}(T)*(|u(z,T)|^2+|v(z,T)|^2)\right\}\end{array}$$ (2)

Here β2 and β3 are the second- and third-order dispersion coefficients (assumed identical for each axis), and the weak (bend-induced) birefringence in each segment is included via the parameter Δβ0 = 2π/LB, where LB is the beat length. A value of LB = 5 m was used for all segments, consistent with previous studies31,79,80. The group index term was calculated from the approximation Δβ1 ≈ Δβ0/ω0 (refs. 81,82). Jones calculus was used to model the effect of polarisation-selective elements such as waveplates and the polarising beamsplitter in the bulk cavity segment. The usual Kerr nonlinear coefficient for each segment is γ, and the higher-order nonlinear effects of self-steepening and Raman scattering were also included.
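For readers wanting to experiment, the split-step Fourier approach typically used to integrate such equations can be sketched in a few lines. This scalar sketch keeps only dispersion, gain and the Kerr term — the birefringence, self-steepening and Raman terms of Eqs. (1) and (2) are omitted — and the sign conventions are one common choice, not necessarily those of the authors' code:

```python
import numpy as np

# Minimal scalar split-step integrator: one polarisation, Kerr
# nonlinearity only (no birefringence, self-steepening or Raman).
def propagate(u, dt, L, beta2, beta3=0.0, gamma=0.0, g=0.0, n_steps=1000):
    omega = 2 * np.pi * np.fft.fftfreq(len(u), dt)
    dz = L / n_steps
    # Linear step (dispersion + gain), applied in the frequency domain
    lin = np.exp((1j * (beta2 / 2) * omega**2
                  + 1j * (beta3 / 6) * omega**3 + g / 2) * dz)
    for _ in range(n_steps):
        u = np.fft.ifft(lin * np.fft.fft(u))            # dispersive step
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)  # Kerr phase rotation
    return u

# Sanity check: a fundamental soliton with the SMF-28 segment parameters
# (beta2 = -21.7e-3 ps^2/m, gamma = 1.1e-3 /W/m) propagates unchanged.
beta2, gamma = -21.7e-27, 1.1e-3           # SI units
T0 = 1e-12                                  # 1 ps soliton duration
P0 = abs(beta2) / (gamma * T0**2)           # fundamental soliton peak power
T = (np.arange(1024) - 512) * 0.05e-12      # 51.2 ps window
u0 = np.sqrt(P0) / np.cosh(T / T0)
u1 = propagate(u0, 0.05e-12, T0**2 / abs(beta2), beta2, gamma=gamma)
```

Propagating over one dispersion length leaves the soliton peak power essentially unchanged, a standard validation of such solvers.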
The Raman contribution is included via the convolution (*) with the response function hR(t), which was based on a realistic model for silica67,68. Using a Raman fraction of fR = 0.18 yielded good agreement with experiments, and we neglected orthogonal Raman gain contributions83. We stress that the use of this Raman model was essential to quantitatively reproduce the long-wavelength spectral broadening seen experimentally. We also note that although some previous studies have used a linear Raman approximation84, this cannot accurately describe the dynamics of sub-picosecond pulses85. We also stress here that the Raman fraction fR is not a fitted parameter, but rather is determined from the peak of the Raman gain profile measured in experiment86. Although some small variation in the numerical value of the Raman fraction has been reported, the value of fR = 0.18 has been found to yield good agreement with experiments as suggested in ref. 68. Our modelling used fibre lengths and parameters based on the experimental cavity design. Segment AB consists of 11 m of OFS R37003 Erbium-doped fibre (EDF) with normal dispersion β2 = +40 × 10−3 ps2 m−1, and nonlinear parameter γ = 6.0 × 10−3 W−1 m−1. Third-order dispersion in the EDF was neglected. Standard silica fibre Segments BC, DE, and FA were of lengths 2.87 m, 4.45 m, and 7.8 m respectively, and used SMF-28 parameters β2 = −21.7 × 10−3 ps2 m−1, β3 = +86.0 × 10−6 ps3 m−1, and nonlinear parameter γ = 1.1 × 10−3 W−1 m−1. Although some cavity components (such as wavelength-selective couplers) used short lengths of other silica-based fibre, this was found to have a negligible effect on propagation and was not explicitly included in the modelling. The supercontinuum segment CD models propagation in 10.3 m of OFS highly nonlinear fibre with β2 = −5.23 × 10−3 ps2 m−1, β3 = +42.8 × 10−6 ps3 m−1 (zero-dispersion wavelength of 1408 nm), and nonlinear parameter γ = 18.4 × 10−3W−1 m−1. The net cavity dispersion is +0.06 ps2. 
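For illustration, a common single-damped-oscillator approximation to the silica Raman response (the Blow–Wood form) can be written down directly. Note this form is an assumption for the sketch: the realistic silica model cited in the paper (refs 67, 68) uses a more detailed fit.

```python
import numpy as np

# Single damped-oscillator (Blow-Wood) approximation to the silica Raman
# response -- an assumed stand-in for the more detailed model in the paper.
tau1, tau2 = 12.2e-15, 32.0e-15   # phonon period / damping time, seconds
f_R = 0.18                        # Raman fraction, as in the paper

def h_R(t):
    """Causal Raman response, analytically normalised to unit area."""
    h = (tau1**2 + tau2**2) / (tau1 * tau2**2) \
        * np.exp(-t / tau2) * np.sin(t / tau1)
    return np.where(t >= 0, h, 0.0)

# Numerical check of the normalisation over a 1 ps window
dt = 1e-17
t = np.arange(0.0, 1e-12, dt)
norm = h_R(t).sum() * dt          # ≈ 1
```

In a full solver, this response enters through the convolution term of Eqs. (1) and (2), usually evaluated with FFTs.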
Note that all dispersion and nonlinearity parameters above are specified at 1550 nm. The gain term $$\hat{g}(\omega)$$ is non-zero only in the EDF segment, and we model this with a Lorentzian:

$$\hat{g}(\omega)=\frac{1}{1+E/E_{\rm sat}}\times\frac{g_0}{1+\Omega^2/\Omega_{\rm g}^2},$$ (3)

with g0 the unsaturated small-signal gain, E = ∫(|u|2 + |v|2)dτ the intracavity pulse energy, and Esat a gain saturation energy parameter. Ω = ω − ω0 is the detuned angular frequency, ω0 is the central transition frequency (corresponding to a wavelength of 1550 nm), and Ωg is the gain (half) bandwidth (corresponding to 20 nm). Note that this approach is widely used in the modelling of EDF amplifiers13,87 and is justified physically because gain recovery timescales for an Erbium-doped amplifier are typically ~100s of μs88, orders of magnitude slower than any of the characteristic timescales of our laser dynamics: the roundtrip time (179 ns); the noise-like pulse envelope (~100s of ps); the ultrafast soliton sub-structure (~100s of fs). We also note that because of the spectral filtering in the cavity, the bandwidth of the injected signal into the amplifier is effectively reset to ~10 nm every roundtrip, which is significantly less than the 40 nm FWHM of the lineshape function. Moreover, there is no appreciable nonlinear spectral broadening in the EDF, such that the pulses remain at ~10 nm bandwidth during amplification. This means that gain bandwidth-limiting effects89 are also negligible. Indeed, the fact that we can neglect nonlinear effects in the amplifier is another factor that allows us to focus on the physics of the dramatic spectral broadening in the HNLF. We also note that our use of a constant distributed gain coefficient g0 is for consistency with previous studies of similar laser systems that have shown good agreement with experiments13.
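Equation (3) is straightforward to evaluate numerically. The sketch below uses the fitted values g0 = 0.73 m−1 and Esat = 3.5 nJ quoted later in this section; the conversion of the 20 nm half-bandwidth to angular frequency uses the standard Δω = 2πcΔλ/λ² relation, which is our own assumption:

```python
import numpy as np

# Saturated Lorentzian gain of Eq. (3), with the fitted values quoted in
# the Methods (g0 = 0.73 m^-1, Esat = 3.5 nJ, 1550 nm centre).
c = 299_792_458.0
lam0 = 1550e-9
omega0 = 2 * np.pi * c / lam0
Omega_g = 2 * np.pi * c * 20e-9 / lam0**2   # 20 nm half-bandwidth in rad/s
g0, Esat = 0.73, 3.5e-9                     # m^-1, J

def gain(omega, E):
    """Gain coefficient at angular frequency omega for pulse energy E (J)."""
    Omega = omega - omega0
    return g0 / (1 + E / Esat) / (1 + Omega**2 / Omega_g**2)

g_center = gain(omega0, Esat)         # saturation halves the gain: g0/2
g_edge = gain(omega0 + Omega_g, 0.0)  # half-bandwidth point: g0/2
```

Both checks follow directly from the Lorentzian form: the on-resonance gain halves at E = Esat, and the unsaturated gain halves one half-bandwidth away from resonance.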
In fact, we explicitly checked that for our parameter regime, the average and statistical results from simulations do not significantly depend on the longitudinal gain model used. Finally, we note that typical polarisation-dependent gain in our parameter regime is expected to be below ~0.3 dB90, which can be neglected compared to our small-signal and saturated gain of 35 and 21 dB, respectively, over the 11 m EDF length. Moreover, previous studies of polarisation-dependent gain saturation have also shown a negligible effect on nonlinear dynamics in dissipative soliton lasers91. A bulk-optics free-space segment EF (length 28.1 cm) includes a nonlinear-polarisation-based saturable absorber61,92, and a narrowband spectral filter to control the bandwidth of the pulses reinjected into the EDF62. The filter transfer function was modelled on a double supergaussian fit to the experimentally measured intensity transmission function and was given by:

$$T(\Omega)=c_1\exp\left(-(\Omega^2/\Omega_1^2)^{m_1}\right)+c_2\exp\left(-(\Omega^2/\Omega_2^2)^{m_2}\right)$$

with coefficients c1 = 0.7036, c2 = 0.2944, m1 = 1.4483, m2 = 1.0034, Ω1 = 4.2584 × 10¹² rad s⁻¹, and Ω2 = 6.8006 × 10¹² rad s⁻¹. Linear losses in the cavity originate mainly from splicing and coupling and are considered at points C (0.84 dB), D (3.02 dB) and F (3.0 dB) in the simulation. The loss at point F includes the linear loss of the filter (20%). Numerical simulations were performed using a 1.3 ns time window and 2¹⁸ = 262,144 points, such that the temporal resolution is ~5 fs. The frequency grid corresponds to a wavelength span of ~1019–3238 nm with a frequency resolution of 769 MHz (wavelength resolution ~6 × 10⁻³ nm around 1550 nm). Although computationally very demanding, this level of time and frequency resolution is necessary to span the full noise-like pulse envelope structure, as well as to capture fine structure in the temporal and spectral domains.
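The fitted filter transmission can be evaluated directly from the quoted coefficients, for example to check that the fitted peak is close to unity (the filter's 20% linear loss is accounted for separately in the point-F loss):

```python
import numpy as np

# Double-supergaussian filter transmission with the fitted coefficients
# quoted in the text (Omega is the angular-frequency detuning in rad/s).
c1, c2 = 0.7036, 0.2944
m1, m2 = 1.4483, 1.0034
O1, O2 = 4.2584e12, 6.8006e12

def T_filter(Omega):
    return (c1 * np.exp(-(Omega**2 / O1**2) ** m1)
            + c2 * np.exp(-(Omega**2 / O2**2) ** m2))

T_peak = T_filter(0.0)   # c1 + c2 = 0.9980, i.e. near-unity fitted peak
```

Far from the peak (e.g. 2 × 10¹³ rad/s detuning) the transmission drops by more than three orders of magnitude, consistent with the filter's role of resetting the pulse bandwidth each roundtrip.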
The numerical techniques used for the solution of GNLSE-like differential equations are well-known, and examples of numerical code for this purpose are widely available63,68. A particular simulation is initiated at the input to the EDF (point A in Fig. 1a) using a Gaussian noise background in the time domain distributed across the full 1.3 ns time window12,93. The seed energy (distributed between the two polarisation components) was ~3 pJ. Physically, laser operation would be initiated by random amplified spontaneous emission noise from the EDF, but the averaged simulation results and the computed intensity peak statistics were found to be independent of the noise source used. Typically ~10² roundtrips were required for the simulations to converge to the regime with well-defined mean energy, and it was only after entering this regime that statistical analysis was performed. To compare simulations and experiments, we iteratively scanned the simulation parameter space to yield average spectral characteristics that agreed with the experiment (Fig. 7). This procedure yielded a small-signal gain of g0 = 0.73 m−1 and saturation energy of Esat = 3.5 nJ. For the 225 mW pump power used in our experiments, these parameters were comparable to previous similar modelling of dissipative soliton lasers93, and supported by a rate equation analysis of the EDF single-pass gain characteristics88. The simulations yield a mean intracavity energy (averaged over 1000 roundtrips) of 10.5 nJ at the EDF output, compared with the experimental value of 13.6 nJ. Although exact agreement would not be expected because of the approximate model of the gain lineshape function used, the overall agreement is remarkable between the mean energy, the average broadband spectrum with components spanning ~1000 nm, and the computed temporal and spectral statistics. For completeness, we give the waveplate orientations for our simulations as (QWP1, HWP1, HWP2, QWP2) = (5.4°, 16.2°, 64.8°, 27°), although a precise comparison with the experiment is not possible here because of the unknown birefringence orientation of each particular fibre segment in the cavity.

### Computation of the spectrogram

The simulations yield access to the amplitude and phase of the intracavity field, allowing us to calculate a frequency (or wavelength)–time spectrogram which clearly shows the intensity and phase content of the pulse in the time and frequency domains. We compute in particular the total field spectrogram S(ω, τ) = Su(ω, τ) + Sv(ω, τ), where the separate component spectrograms are defined by:

$$S_k(\omega,\tau)=\left|\int_{-\infty}^{\infty}g(T-\tau)\,f(T)\exp(-{\rm i}\omega T)\,{\rm d}T\right|^2$$ (4)

with k = u, v and f(T) representing either u(z, T) or v(z, T) respectively, the field components along the two principal polarisation axes. The function g(T − τ) is a variable-delay gate function with delay value τ. The spectrogram trace then plots the spectra of a series of time-gated portions of the field and, especially when plotted against the associated temporal and spectral intensities, it provides a highly intuitive way to interpret the time–frequency structure of a complex field. In our calculation of the spectrogram, we used a gate function of 300 fs duration (full width at half maximum) and a Gaussian profile. Note that the use of the total spectrogram over both polarisations has the clear physical interpretation that it projects naturally onto the total temporal intensity profile and total spectrum. Based on the spectrogram, it is possible to identify the wavelength of each of the Raman-shifted localised temporal peaks in the random HNLF output field, allowing calculation of the associated soliton number using $$N_{\rm p}=\left[\gamma(\omega_{\rm p})P_{\rm p}T_{\rm p}^2/|\beta_2(\omega_{\rm p})|\right]^{1/2}$$.
Here γ(ωp) and β2(ωp) are respectively the nonlinearity and dispersion parameters at the Raman-shifted peak frequency ωp, and Pp and Tp are respectively the pulse peak power and duration. The corrected dispersion parameter is then β2(ωp) = β2(ω0) + (ωp − ω0) β3(ω0), where ω0 corresponds to a wavelength of 1550 nm. The corrected nonlinearity parameter is then γ(ωp) = (ωp/ω0)γ(ω0)94.

### Experimental setup

The laser system in Fig. 1 used a unidirectional cavity configuration with fibre lengths as described above. The overall laser repetition rate is 5.59 MHz (roundtrip time of ~179 ns). The primary laser output after the HNLF (point D) used a 40% coupler, and we used a 1% coupler at point C for diagnostics and a 5% coupler in segment DE for additional spectral measurement. The spectral filter used (Andover 155FSX-1025) had 10 nm bandwidth (FWHM) and 80% peak transmission. The EDF was co-directionally pumped at 976 nm, and noise-like pulsed behaviour was observed at all values of pump power above the 40 mW pump threshold where pulsed laser operation was first observed. In contrast to similar cavity configurations without the HNLF28, we did not observe any stable mode-locked regime for any combination of cavity parameters. With suitable adjustment of the waveplates, it was possible to observe noise-like pulse operation with broadband spectral output over a wide range of pump powers up to 500 mW. The observed spectra exhibited qualitatively similar characteristics over the full range of pump powers, although the broad temporal envelope duration (as measured with a photodiode) increased to ~450 ps at the highest pump powers. The results reported here at a pump power of 225 mW corresponded to 13 mW average power at the primary output after the HNLF (point D). This corresponds to an intracavity energy of 7.3 nJ at the HNLF output and an energy at the EDF output of 13.6 nJ when accounting for all coupling and splicing losses between the EDF and HNLF.
Direct pulse envelope measurements used a 35 GHz photodiode (New Focus 1474 A) connected to a 20 GHz channel of a real-time oscilloscope (LeCroy 845 Zi-A, 40 GS s−1). The DFT setup used 5.13 km of dispersion compensating fibre (DCF) with group-velocity dispersion coefficient of D = −83.6 ps nm−1 km−1 (β2 = +107 × 10−3 ps2 m−1) at 1550 nm. The signal under test was attenuated before injection into the DCF to ensure linear propagation. The fidelity of the DFT measurements was confirmed by comparing the DFT spectrum with that measured using an averaging OSA (Anritsu MS9710B) for a separate stable mode-locked laser operating around 1550 nm. This comparison was also performed on the average spectra measured at the HNLF input (see results in Fig. 7b). The real-time DFT signal was measured by a 12.5 GHz photodiode (Miteq DR-125G-A) connected to another 20 GHz channel of the real-time oscilloscope, resulting in a spectral resolution of 0.19 nm. For the results in the inset of Fig. 7d, the broadband spectrum from the HNLF is plotted over an expanded wavelength range by combining measurements from the Anritsu MS9710B OSA below 1550 nm and the Ocean Optics NIRQuest512 spectrometer (resolution of 6.3 nm) above 1550 nm. The measured spectra were matched in the region of 1550 nm. The time-lens setup was based on a commercial system (Picoluz UTM-1500) described in ref. 95, supplemented by an additional module of dispersion compensating fibre for additional magnification. A time-dependent quadratic phase was applied on the signal after propagation step D1 by four-wave mixing in a silicon waveguide between the signal and linearly-chirped pump pulses from an integrated fibre laser module (a 100 MHz Menlo C-Fiber Sync, a P100-EDFA and a pre-chirping fibre Dp). 
The overall system magnification of ×190 was determined experimentally by using Fourier-domain pulse shaping (Finisar Waveshaper 4000 series) to create a picosecond pulse doublet with exactly 10 ps separation, and by measuring the increased temporal separation after passage through the time-lens. The magnification M = D2/D1 is associated with total dispersion for the input and output propagation steps of D1 = 4.16 ps nm−1 and D2 = 790.4 ps nm−1. The time-lens output was then recorded by a 12.5 GHz photodiode (Miteq DR-125G-A) connected to the 20 GHz channel of the real-time oscilloscope at a sampling rate of 40 GS s−1. The calculated overall time-lens resolution96 was 340 fs, and the demagnified time window was ~100 ps (determined by the duration of the chirped pump pulses into the silicon waveguide). Since the noise-like pulse envelope duration of ~300 ps is larger than the time-lens window, we operated in asynchronous mode with free-running acquisition triggered by the arrival of the time-lens signal. To avoid a low signal-to-noise ratio at the edges of the measurement window, all statistical analysis of the time-lens temporal peaks was performed only on the central (70 ps) region of the window.

### Rogue wave criteria

Based on the statistical distribution of intensity peaks in both the time and frequency domains, a significant intensity is defined as the mean intensity of the upper third of intensity peaks. The temporal and spectral rogue wave thresholds are defined as 2.2 times this significant intensity97.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
http://mathhelpforum.com/number-theory/185606-property-pythagorean-triples.html
# Math Help - A property of Pythagorean triples

1. ## A property of Pythagorean triples

Let $a,b \in\mathbb{N}^+$ such that $\sqrt{a^2+b^2} \in \mathbb{N}$. Show that $\forall n\in\mathbb{N}^+\; (a+ib)^n \notin \mathbb{R}$, where $i \in \mathbb{C}$ is the imaginary unit, and $\mathbb{N}^+,\mathbb{R, C}$ are the sets of all positive integers, reals and complex numbers respectively.

2. ## Re: A property of Pythagorean triples

Originally Posted by elim:
Let $a,b \in\mathbb{N}^+$ such that $\sqrt{a^2+b^2} \in \mathbb{N}$. Show that $\forall n\in\mathbb{N}^+\; (a+ib)^n \notin \mathbb{R}$.

Hint: Use mathematical induction.

3. ## Re: A property of Pythagorean triples

Originally Posted by elim:
Let $a,b \in\mathbb{N}^+$ such that $\sqrt{a^2+b^2} \in \mathbb{N}$. Show that $\forall n\in\mathbb{N}^+\; (a+ib)^n \notin \mathbb{R}$.

Let $c = \sqrt{a^2+b^2}$ and let $\theta = \arccos(a/c)$. If $(a+ib)^n \in \mathbb{R}$ then $\theta$ will be a rational multiple of $\pi$ with a rational cosine. That can only happen if $\cos\theta\in\{0,\pm\tfrac12,\pm1\}$ (you can find a neat proof of that here). The result then follows quite easily.

4. ## Re: A property of Pythagorean triples

Thanks a lot Opalg!
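Complementing the proofs sketched in the thread, the claim is easy to sanity-check computationally with exact Gaussian-integer arithmetic (a finite check for small triples, not a proof):

```python
# Exact integer check that (a + ib)^n stays non-real for Pythagorean
# pairs (a, b) up to n_max -- no floating point, so no ambiguity.
def powers_stay_nonreal(a, b, n_max=200):
    re, im = a, b                  # (a + ib)^1
    for _ in range(n_max):
        if im == 0:
            return False           # some power landed on the real axis
        re, im = re * a - im * b, re * b + im * a   # multiply by (a + ib)
    return True

triples = [(3, 4), (5, 12), (8, 15), (20, 21)]
all_ok = all(powers_stay_nonreal(a, b) for a, b in triples)
# Contrast: (1, 1) is not a Pythagorean pair, and (1 + i)^4 = -4 is real.
```

The contrast case shows the hypothesis is needed: without $\sqrt{a^2+b^2}\in\mathbb{N}$, powers can become real.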
https://proofwiki.org/wiki/Definition:P-adic_Valuation
## Definition

Let $p \in \N$ be a prime number.

### Integers

The $p$-adic valuation (on $\Z$) is the mapping $\nu_p^\Z: \Z \to \N \cup \left\{{+\infty}\right\}$ defined by:

$\nu_p^\Z \left({n}\right) := \begin{cases} +\infty & : n = 0 \\ \sup \left\{{v \in \N: p^v \mathbin \backslash n}\right\} & : n \ne 0 \end{cases}$

where:
$\sup$ denotes supremum
$p^v \mathbin \backslash n$ expresses that $p^v$ divides $n$.

### Rational Numbers

Let the $p$-adic valuation on the integers $\nu_p^\Z$ be extended to $\nu_p^\Q: \Q \to \Z \cup \left\{{+\infty}\right\}$ by:

$\nu_p^\Q \left({\dfrac a b}\right) := \nu_p^\Z \left({a}\right) - \nu_p^\Z \left({b}\right)$

This mapping $\nu_p^\Q$ is called the $p$-adic valuation (on $\Q$) and is usually denoted $\nu_p: \Q \to \Z \cup \left\{{+\infty}\right\}$.
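The definition above transcribes directly into code; this sketch extends the integer valuation to $\Q$ via numerator and denominator exactly as in the definition, using `float('inf')` for $\nu_p(0)$:

```python
from fractions import Fraction

def nu_p(p, q):
    """p-adic valuation nu_p on Q, with nu_p(0) = +infinity."""
    q = Fraction(q)
    if q == 0:
        return float('inf')
    def nu_int(n):
        # Largest v with p^v dividing n (n nonzero)
        n, v = abs(n), 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    # nu_p(a/b) = nu_p(a) - nu_p(b)
    return nu_int(q.numerator) - nu_int(q.denominator)
```

For example, `nu_p(2, 12)` gives 2 (since $12 = 2^2 \cdot 3$), and `nu_p(5, Fraction(1, 25))` gives −2.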
https://zbmath.org/?q=an:1124.35037
## On Lars Hörmander’s remark on the characteristic Cauchy problem. (English) Zbl 1124.35037

The author re-considers a result of L. Hörmander [J. Funct. Anal. 93, No. 2, 270–277 (1990; Zbl 0724.35060)], concerning a characteristic Cauchy problem for a class of wave equations on spatially compact space-times, with initial data on hypersurfaces that were weakly spacelike. The metric on the space-time and the first-order perturbation were assumed to be smooth. Here the author extends the result to a Lipschitz metric $$g^{jk}$$, namely for the equation $$\partial^2_t u- g^{jk}\partial_j\partial_k u= 0$$, with Einstein notation.

### MSC:

35L15 Initial value problems for second-order hyperbolic equations
35A05 General existence and uniqueness theorems (PDE) (MSC2000)
35L05 Wave equation

### Keywords:

very weak regularity; Zbl 0724.35060

### References:

[1] Baez, J. C.; Segal, I. E.; Zhou, Z. F., The global Goursat problem and scattering for nonlinear wave equations, J. Funct. Anal., 93, 239-269, (1990) · Zbl 0724.35105
[2] Christodoulou, D.; Klainerman, S., The global nonlinear stability of the Minkowski space, Princeton Mathematical, 41, x+514 pp., (1993) · Zbl 0827.53055
[3] Chrusciel, P.; Delay, E., Existence of non trivial, asymptotically vacuum, asymptotically simple space-times, Class. Quantum Grav., 19, L71-L79, (2002) · Zbl 1005.83009
[4] Chrusciel, P.; Delay, E., On mapping properties of the general relativistic constraints operator in weighted function spaces, with applications, 94, (2003), Mémoires de la S.M.F. · Zbl 1058.83007
[5] Corvino, J., Scalar curvature deformation and a gluing construction for the Einstein constraint equations, Comm. Math. Phys., 214, 137-189, (2000) · Zbl 1031.53064
[6] Corvino, J.; Schoen, R. M., On the asymptotics for the vacuum Einstein constraint equations · Zbl 1122.58016
[7] Friedlander, F. G., Radiation fields and hyperbolic scattering theory, Math. Proc. Camb. Phil.
Soc., 88, 483-515, (1980) · Zbl 0465.35068 [8] Friedlander, F. G., Notes on the wave equation on asymptotically Euclidean manifolds, J. Functional Anal., 184, 1-18, (2001) · Zbl 0997.58013 [9] Hörmander, L., A remark on the characteristic Cauchy problem, J. Funct. Anal., 93, 270-277, (1990) · Zbl 0724.35060 [10] Klainerman, S.; Nicolò, F., Peeling properties of asymptotically flat solutions to the Einstein vacuum equations, Class. Quantum Grav., 20, 14, 3215-3257, (2003) · Zbl 1045.83016 [11] Mason, L. J.; Nicolas, J.-P., Conformal scattering and the Goursat problem, J. Hyperbolic. Diff. Eq., 1, 2, 197-233, (2004) · Zbl 1074.83019 [12] Penrose, R., Null hypersurface initial data for classical fields of arbitrary spin and for general relativity, in Aerospace Research Laboratories report 63-56 (P.G. Bergmann), Vol. 12, (1963) · Zbl 0452.53014 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://www.physicsforums.com/threads/remedial-kinetic-energy-equivalency-question.300303/
# Remedial kinetic energy equivalency question 1. Mar 16, 2009 ### denver75 I'm working on a demonstration where I want to show the equivalent result of impacts of two separate masses. I'd like to make sure I am understanding these concepts correctly (it's been more than a few years since my college-level physics class). An object weighing 100 kg moving at a velocity of 20 m/s has a KE of 20,000 J. To get the equivalent KE from an object that weighs 10 kg, I've calculated a velocity of 63.25 m/s. Now, here's my uncertainty: does this mean that an impact of the 100 kg object moving at 20 m/s would create the same amount of damage as the 10 kg object moving at 63.25 m/s? Assuming that the materials are the same, so the impact distance and rebound would be equivalent. Or are there other factors involved? 2. Mar 17, 2009 ### cragar Cross-sectional area might factor in; they have the same kinetic energy, but the smaller cross-sectional area would focus that energy onto a smaller point. 3. Mar 17, 2009 ### Bob S First, assume that the coefficient of restitution is 1, so that there is no "damage" (energy loss). In this case, both momentum and energy are conserved (the first mass always recoils unless M1 = M2). You have two equations in two unknowns. Second, solve the same problem where the coefficient of restitution is zero. Third, solve the same problem with an arbitrary coefficient of restitution. Beware of using rolling billiard balls, because 2/7 of the total kinetic energy is rotational energy (I = (2/5) m R^2) and is not easily transferred during a collision. 4.
Mar 17, 2009 ### timmay Compare these two drop-weight impacts: Impact 1 Mass = 1 kg Drop height = 2 m Speed on impact = $$\sqrt {2gh} = 6.26 ms^{-1}$$ Kinetic energy at impact = $$\frac {1}{2} m v^{2} = 19.62 J$$ Impact 2 Mass = 2 kg Drop height = 1 m Speed on impact = $$\sqrt {2gh} = 4.43 ms^{-1}$$ Kinetic energy at impact = $$\frac {1}{2} m v^{2} = 19.62 J$$ Let's assume that all the kinetic energy of the impacting object (KE) is perfectly converted to elastic strain energy (W) by purely compressing a sample of the same material and dimensions: $$W=\int_{\epsilon=0}^{\epsilon=\epsilon_{1}} \sigma \, d \epsilon = KE$$ If your sample has a compressive stress-strain profile that does not vary with strain rate, the strain $$\epsilon_{1}$$ at which this is achieved (and the stress at which it is achieved) will be the same for both impacts. However, if your sample shows strong rate dependency (for instance, most polymers), then the fact that one impact occurs at a greater initial velocity means that the material response will generally be stronger and stiffer. That's something to bear in mind when you talk about two equal-energy impacts with the same contact area - rate sensitivity in materials means you will probably see a difference in 'impact severity'. Also, when you begin to change the contact area between collisions, you will begin to see greater differences in 'impact damage'. The large stresses created by the relatively sharp point of a bullet will create more damage in a structure than a relatively bluff ball bearing of the same mass and impact speed.
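The arithmetic in the opening post is easy to check with a short script (the function names below are mine; the masses, energies, and the 63.25 m/s figure are taken from the thread):

```python
import math

def kinetic_energy(mass_kg, velocity_ms):
    """KE = (1/2) m v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

def equivalent_velocity(ke_joules, mass_kg):
    """Speed a given mass needs to carry the given KE: v = sqrt(2 KE / m)."""
    return math.sqrt(2.0 * ke_joules / mass_kg)

# Figures from the thread: a 100 kg object at 20 m/s ...
ke = kinetic_energy(100, 20)            # 20,000 J
# ... matched in energy by a 10 kg object at about 63.25 m/s
v_small = equivalent_velocity(ke, 10)
print(ke, round(v_small, 2))            # 20000.0 63.25
```

As timmay's drop-weight comparison shows, equal kinetic energy says nothing by itself about equal damage; this only verifies the energy bookkeeping.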
https://indico.cern.ch/event/1086716/contributions/5053069/
# Higgs 2022 Nov 7 – 11, 2022 Pisa Europe/Rome timezone There is a live webcast for this event. ## Search for rare decays of the Standard Model Higgs boson with the ATLAS detector Nov 10, 2022, 3:55 PM 15m Sala Azzurra (Palazzo della Carovana) ### Sala Azzurra #### Palazzo della Carovana Beyond the Standard Model ### Speaker Aaron White (Harvard University (US)) ### Description The Standard Model predicts several rare Higgs boson decay channels, among which are decays to a Z boson and a photon (H to Zgamma), to a low-mass lepton pair and a photon (H to llgamma), and to a pair of muons. The observation of Zgamma decays could open the possibility of studying the CP and coupling properties of the Higgs boson in a way complementary to other analyses. In addition, the search for Higgs decays into a vector quarkonium state and a photon provides access to charm- and bottom-quark couplings alternative to the direct H->bb/cc search. Several results for decays based on pp collision data collected at 13 TeV will be presented. Type of talk Experimental measurements
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=pub&publisherID=822&journalID=482&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
Publisher: Springer-Verlag (Total: 2349 journals). Algebra Universalis. Journal Prestige (SJR): 0.583; Citation Impact (citeScore): 1; Number of Followers: 2. Hybrid journal (it can contain Open Access articles). ISSN (Print) 1420-8911 - ISSN (Online) 0002-5240. Published by Springer-Verlag [2349 journals] • A Gelfand duality for compact pospaces • Authors: Laurent De Rudder; Georges Hansoul Abstract: It is well known that the category of compact Hausdorff spaces is dually equivalent to the category of commutative $$C^\star$$ -algebras. More generally, this duality can be seen as a part of a square of dualities and equivalences between compact Hausdorff spaces, $$C^\star$$ -algebras, compact regular frames and de Vries algebras. Three of these equivalences have been extended to equivalences between compact pospaces, stably compact frames and proximity frames, the fourth part of what will be a second square being lacking. We propose the category of bounded Archimedean $$\ell$$ -semi-algebras to complete the second square of equivalences and to extend the category of $$C^\star$$ -algebras. PubDate: 2018-05-17 DOI: 10.1007/s00012-018-0519-7 Issue No: Vol. 79, No. 2 (2018) • A refinement of the equaclosure operator • Authors: J. B. Nation; Joy Nishida Abstract: A stronger version of a known property is shown to hold for the natural equaclosure operator on subquasivariety lattices. PubDate: 2018-05-14 DOI: 10.1007/s00012-018-0518-8 Issue No: Vol. 79, No. 2 (2018) • Congruence meet-semidistributive locally finite varieties and a finite basis theorem • Authors: George F. McNulty; Ross Willard Abstract: We provide several conditions that, among locally finite varieties, characterize congruence meet-semidistributivity and we use these conditions to give a new proof of a finite basis theorem published by Baker, McNulty, and Wang in 2004. This finite basis theorem extends Willard’s Finite Basis Theorem.
PubDate: 2018-05-14 DOI: 10.1007/s00012-018-0524-x Issue No: Vol. 79, No. 2 (2018) • Polymorphism clones of homogeneous structures: generating sets, Sierpiński rank, cofinality and the Bergman property • Authors: Christian Pech; Maja Pech Abstract: In this paper, motivated by classical results by Sierpiński, Arnold and Kolmogorov, we derive sufficient conditions for polymorphism clones of homogeneous structures to have a generating set of bounded arity. We use our findings in order to describe a class of homogeneous structures whose polymorphism clones have a finite Sierpiński rank, uncountable cofinality, and the Bergman property. PubDate: 2018-05-14 DOI: 10.1007/s00012-018-0527-7 Issue No: Vol. 79, No. 2 (2018) • A conference report: The 5th Novi Sad Algebraic Conference (in conjunction with AAA94) Abstract: Headlining the Topical Collection dedicated to The 5th Novi Sad Algebraic Conference (NSAC 2017), we provide a brief report of the conference along with some of its history and background. PubDate: 2018-05-09 DOI: 10.1007/s00012-018-0528-6 Issue No: Vol. 79, No. 2 (2018) • Infinitely many reducts of homogeneous structures • Authors: Bertalan Bodor; Peter J. Cameron; Csaba Szabó Abstract: It is shown that the countably infinite dimensional pointed vector space (the vector space equipped with a constant) over a finite field has infinitely many first order definable reducts. This implies that the countable homogeneous Boolean-algebra has infinitely many reducts. PubDate: 2018-05-09 DOI: 10.1007/s00012-018-0526-8 Issue No: Vol. 79, No. 2 (2018) • On classes of structures axiomatizable by universal d-Horn sentences and universal positive disjunctions • Authors: Guillermo Badia; João Marcos Abstract: We provide universal algebraic characterizations (in the sense of not involving any “logical notion”) of some elementary classes of structures whose definitions involve universal d-Horn sentences and universally closed disjunctions of atomic formulas. 
These include, in particular, the classes of fields, of non-trivial rings, and of directed graphs without loops where every two elements are adjacent. The classical example of this kind of characterization result is the HSP theorem, but there are myriad other examples (e.g., the characterization of elementary classes using isomorphic images, ultraproducts and ultrapowers due to Keisler and Shelah). PubDate: 2018-05-07 DOI: 10.1007/s00012-018-0522-z Issue No: Vol. 79, No. 2 (2018) • Congruence structure of planar semimodular lattices: the General Swing Lemma • Authors: Gábor Czédli; George Grätzer; Harry Lakser Abstract: The Swing Lemma, proved by G. Grätzer in 2015, describes how a congruence spreads from a prime interval to another in a slim (having no $$\mathsf {M}_{3}$$ sublattice), planar, semimodular lattice. We generalize the Swing Lemma to planar semimodular lattices. PubDate: 2018-04-30 DOI: 10.1007/s00012-018-0483-2 Issue No: Vol. 79, No. 2 (2018) • Infinite idempotent quasi-affine algebras need not be hereditarily absorption free • Authors: Joe Cyr Abstract: It is known that for finite algebras, solvable implies hereditarily absorption free. We present an example which shows that this implication does not hold for infinite algebras. This example is also quasi-affine, contradicting an earlier statement that quasi-affine algebras are hereditarily absorption free. PubDate: 2018-04-24 DOI: 10.1007/s00012-018-0521-0 Issue No: Vol. 79, No. 2 (2018) • Maximal essential extensions in the context of frames • Authors: Richard N. Ball; Aleš Pultr Abstract: We show that every frame can be essentially embedded in a Boolean frame, and that this embedding is the maximal essential extension of the frame in the sense that it factors uniquely through any other essential extension. 
This extension can be realized as the embedding $$L \rightarrow \mathcal {N}(L) \rightarrow \mathcal {B}\mathcal {N}(L)$$ , where $$L \rightarrow \mathcal {N}(L)$$ is the familiar embedding of L into its congruence frame $$\mathcal {N}(L)$$ , and $$\mathcal {N}(L) \rightarrow \mathcal {B}\mathcal {N}(L)$$ is the Booleanization of $$\mathcal {N}(L)$$ . Finally, we show that for subfit frames the extension can also be realized as the embedding $$L \rightarrow {{\mathrm{S}}}_\mathfrak {c}(L)$$ of L into its complete Boolean algebra $${{\mathrm{S}}}_\mathfrak {c}(L)$$ of sublocales which are joins of closed sublocales. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0508-x Issue No: Vol. 79, No. 2 (2018) • Special elements of the lattice of monoid varieties • Authors: Sergey V. Gusev Abstract: We completely classify all neutral and costandard elements in the lattice $$\mathbb {MON}$$ of all monoid varieties. Further, we prove that an arbitrary upper-modular element of $$\mathbb {MON}$$ except the variety of all monoids is either a completely regular or a commutative variety. Finally, we verify that all commutative varieties of monoids are codistributive elements of $$\mathbb {MON}$$ . Thus, the problems of describing codistributive or upper-modular elements of $$\mathbb {MON}$$ are completely reduced to the completely regular case. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0513-0 Issue No: Vol. 79, No. 2 (2018) • An error in a proof in Boolean Algebras with Operators, Part I • Authors: Richard L. Kramer; Roger D. Maddux Abstract: An error in a proof of a correct theorem in the classic paper, Boolean Algebras with Operators, Part I, by Jónsson and Tarski is discussed. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0520-1 Issue No: Vol. 79, No. 
2 (2018) • A note on archimedean ordered semigroups • Authors: Niovi Kehayopulu Abstract: We characterize the ordered semigroups that are archimedean and contain an intra-regular element, showing that they are exactly nil extensions of simple ordered semigroups, also the ordered semigroups that are both archimedean and $$\pi$$ -semisimple or both archimedean and $$\pi$$ -quasi-semisimple. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0502-3 Issue No: Vol. 79, No. 2 (2018) • Axiomatizations of universal classes through infinitary logic • Authors: Michał M. Stronkowski Abstract: We present a scheme for providing axiomatizations of universal classes. We use infinitary sentences there. New proofs of Birkhoff’s $$\mathsf {HSP}$$ -theorem and Mal’cev’s $$\mathsf {SPP_U}$$ -theorem are derived. In total, we present 75 facts of this sort. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0507-y Issue No: Vol. 79, No. 2 (2018) • Quasiorder lattices of varieties • Authors: Gergő Gyenizse; Miklós Maróti Abstract: The set $${{\mathrm{Quo}}}(\mathbf {A})$$ of compatible quasiorders (reflexive and transitive relations) of an algebra $$\mathbf {A}$$ forms a lattice under inclusion, and the lattice $${{\mathrm{Con}}}(\mathbf {A})$$ of congruences of $$\mathbf {A}$$ is a sublattice of $${{\mathrm{Quo}}}(\mathbf {A})$$ . We study how the shape of congruence lattices of algebras in a variety determine the shape of quasiorder lattices in the variety. In particular, we prove that a locally finite variety is congruence distributive [modular] if and only if it is quasiorder distributive [modular]. We show that the same property does not hold for meet semi-distributivity. From tame congruence theory we know that locally finite congruence meet semi-distributive varieties are characterized by having no sublattice of congruence lattices isomorphic to the lattice $$\mathbf {M}_3$$ . 
We prove that the same holds for quasiorder lattices of finite algebras in arbitrary congruence meet semi-distributive varieties, but does not hold for quasiorder lattices of infinite algebras even in the variety of semilattices. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0512-1 Issue No: Vol. 79, No. 2 (2018) • Homomorphisms from $$C(X,\mathbb {Z})$$ C ( X , Z ) into a ring of continuous functions • Authors: Ali Reza Olfati Abstract: Let X be a zero-dimensional space and Y be a Tychonoff space. We show that every non-zero ring homomorphism $$\Phi :C(X,\mathbb {Z})\rightarrow C(Y)$$ can be induced by a continuous function $$\pi :Y\rightarrow \upsilon _0X.$$ Using this, it turns out that the kernel of such homomorphisms is equal to the intersection of some family of minimal prime ideals in $${{\mathrm{MinMax}}}\left( C(X,\mathbb {Z})\right) .$$ As a consequence, we are able to obtain the fact that the factor ring $$\frac{C(X,\mathbb {Z})}{C_F(X,\mathbb {Z})}$$ is a subring of some ring of continuous functions if and only if each infinite subset of isolated points of X has a limit point in $$\upsilon _0X.$$ This implies that for an arbitrary infinite set X,  the factor ring $$\frac{\prod _{_{x\in X}}\mathbb {Z}_{_{x}}}{\oplus _{_{x\in X}}\mathbb {Z}_{_{x}}}$$ is not embedded in any ring of continuous functions. The classical ring of quotients of the factor ring $$\frac{C(X,\mathbb {Z})}{C_F(X,\mathbb {Z})}$$ is fully characterized. Finally, it is shown that the factor ring $$\frac{C(X,\mathbb {Z})}{C_F(X,\mathbb {Z})}$$ is an I-ring if and only if each infinite subset of isolated points on X has a limit point in $$\upsilon _0X$$ and $$\upsilon _0X{\setminus }\mathbb {I}(X)$$ is an extremally disconnected $$C_{\mathbb {Z}}$$ -subspace of $$\upsilon _0X,$$ where $$\mathbb {I}(X)$$ is the set of all isolated points of X. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0509-9 Issue No: Vol. 79, No. 
2 (2018) • The convolution algebra • Authors: John Harding; Carol Walker; Elbert Walker Abstract: For L a complete lattice L and $$\mathfrak {X}=(X,(R_i)_I)$$ a relational structure, we introduce the convolution algebra $$L^{\mathfrak {X}}$$ . This algebra consists of the lattice $$L^X$$ equipped with an additional $$n_i$$ -ary operation $$f_i$$ for each $$n_i+1$$ -ary relation $$R_i$$ of $$\mathfrak {X}$$ . For $$\alpha _1,\ldots ,\alpha _{n_i}\in L^X$$ and $$x\in X$$ we set $$f_i(\alpha _1,\ldots ,\alpha _{n_i})(x)=\bigvee \{\alpha _1(x_1)\wedge \cdots \wedge \alpha _{n_i}(x_{n_i}):(x_1,\ldots ,x_{n_i},x)\in R_i\}$$ . For the 2-element lattice 2, $$2^\mathfrak {X}$$ is the reduct of the familiar complex algebra $$\mathfrak {X}^+$$ obtained by removing Boolean complementation from the signature. It is shown that this construction is bifunctorial and behaves well with respect to one-one and onto maps and with respect to products. When L is the reduct of a complete Heyting algebra, the operations of $$L^\mathfrak {X}$$ are completely additive in each coordinate and $$L^\mathfrak {X}$$ is in the variety generated by $$2^\mathfrak {X}$$ . Extensions to the construction are made to allow for completely multiplicative operations defined through meets instead of joins, as well as modifications to allow for convolutions of relational structures with partial orderings. Several examples are given. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0510-3 Issue No: Vol. 79, No. 2 (2018) • Axiomatisability and hardness for universal Horn classes of hypergraphs • Authors: Lucy Ham; Marcel Jackson Abstract: We consider hypergraphs as symmetric relational structures. In this setting, we characterise finite axiomatisability for finitely generated universal Horn classes of loop-free hypergraphs. An Ehrenfeucht–Fraïssé game argument is employed to show that the results continue to hold when restricted to first order definability amongst finite structures. 
We are also able to show that every interval in the homomorphism order on hypergraphs contains a continuum of universal Horn classes and conclude the article by characterising the intractability of deciding membership in universal Horn classes generated by finite loop-free hypergraphs. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0515-y Issue No: Vol. 79, No. 2 (2018) • Polymorphism clones of homogeneous structures: gate coverings and automatic homeomorphicity • Authors: Christian Pech; Maja Pech Abstract: Every clone of functions comes naturally equipped with a topology, the topology of pointwise convergence. A clone $$\mathfrak {C}$$ is said to have automatic homeomorphicity with respect to a class $$\mathcal {K}$$ of clones, if every clone isomorphism of $$\mathfrak {C}$$ to a member of $$\mathcal {K}$$ is already a homeomorphism (with respect to the topology of pointwise convergence). In this paper we study automatic homeomorphicity properties for polymorphism clones of countable homogeneous relational structures. Besides two generic criteria for the automatic homeomorphicity of the polymorphism clones of homogeneous structures we show that the polymorphism clone of the generic poset with strict ordering has automatic homeomorphicity with respect to the class of polymorphism clones of countable $$\omega$$ -categorical structures. Our results extend and generalize previous results by Bodirsky, Pinsker, and Pongrácz. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0504-1 Issue No: Vol. 79, No. 2 (2018) • Unique inclusions of maximal C-clones in maximal clones • Authors: Mike Behrisch; Edith Vargas-García Abstract: C-clones are polymorphism sets of so-called clausal relations, a special type of relations on a finite domain, which first appeared in connection with constraint satisfaction problems in work by Creignou et al. from 2008. We completely describe the relationship regarding set inclusion between maximal C-clones and maximal clones. 
As a main result we obtain that for every maximal C-clone there exists exactly one maximal clone in which it is contained. A precise description of this unique maximal clone, as well as a corresponding completeness criterion for C-clones is given. PubDate: 2018-04-20 DOI: 10.1007/s00012-018-0497-9 Issue No: Vol. 79, No. 2 (2018)
https://www.ideals.illinois.edu/handle/2142/20988
Title: Theory and applications of scattering and inverse scattering problems Author(s): Wang, Yi-Ming Doctoral Committee Chair(s): Chew, Weng Cho Department / Program: Electrical and Computer Engineering Discipline: Electrical Engineering Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Engineering, Electronics and Electrical Abstract: Two algorithms based on the recursive operator algorithm are proposed to solve for the scattered field from an arbitrarily shaped, inhomogeneous scatterer. By discretizing the object into N subobjects, the scattering solution of an arbitrarily shaped inhomogeneous scatterer can be formulated as a scattering solution of an N-scatterer problem, each of whose scattered fields is approximated by M harmonics. Using the translation formulas, a recursive approach is developed which enables us to derive an n + 1-scatterer solution from an n-scatterer solution. Therefore, knowing the isolated transition matrices for all subscatterers, the total transition matrices for an N-scatterer problem can be obtained recursively. The computation time of such an algorithm is proportional to $N^2 M^2 P$, where P is the number of harmonics used in the translation formulas. Furthermore, by introducing an aggregate transition matrix to the recursive scheme, a fast algorithm, whose computational complexity is linear in N, is developed. The algorithm has been used to solve for the scattering solution of a 10$\lambda$ diameter, two-dimensional dielectric scatterer with about 12,000 unknowns, taking 32 sec on a CRAY-2 supercomputer. In order to solve the electromagnetic inverse scattering problem beyond the Born approximation, two iterative algorithms are developed. They are the Born iterative method and the distorted Born iterative method.
Numerical simulations are performed in several cases in which the conditions for the Born approximation are not satisfied. The results show that in both low and high frequency cases, good reconstructions of the permittivity distribution are obtained. Meanwhile, the simulations reveal that each method has its advantages. The distorted Born iterative method shows a faster convergence rate compared to that for the Born iterative method, while the Born iterative method is more robust to noise contamination compared to that for the distorted Born iterative method. A boosting procedure which helps to retrieve the maximum amount of information content is proposed to solve the limited angle inverse scattering problem. Using the boosting procedure in the limited angle inverse scattering problem, good reconstructions are achieved for both well-to-well tomography and subsurface detection. By applying the fast recursive algorithm to the solution of the direct scattering part of the iterative schemes and the conjugate gradient method to the solution of the inversion part of the iterative schemes, the computational complexity of the Born iterative method and the distorted Born iterative method is further reduced from $N^3$ to $N^2$. Issue Date: 1991 Type: Text Language: English URI: http://hdl.handle.net/2142/20988 Rights Information: Copyright 1991 Wang, Yi-Ming Date Available in IDEALS: 2011-05-07 Identifier in Online Catalog: AAI9124502 OCLC Identifier: (UMI)AAI9124502
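The Born iterative method named in the abstract repeatedly re-solves a problem linearized around the current estimate. As a loose, hypothetical illustration (not from the dissertation, which treats the full electromagnetic problem), the same successive-linearization idea can be shown on a scalar toy model d = x + 0.1·x², where stopping after the first step corresponds to the plain Born approximation x ≈ d:

```python
def born_iterative_toy(d, n_iter=25):
    """Fixed-point iteration x_{k+1} = d - 0.1 * x_k**2 for the toy
    'scattering' model d = x + 0.1 * x**2.  The first iterate (x = d)
    plays the role of the Born approximation; later iterates correct
    for the neglected quadratic ('multiple scattering') term."""
    x = d  # Born approximation: ignore the nonlinear term entirely
    for _ in range(n_iter):
        x = d - 0.1 * x ** 2
    return x

d = 1.0 + 0.1 * 1.0 ** 2        # synthetic data from the true value x = 1.0
x_rec = born_iterative_toy(d)
print(round(x_rec, 6))           # converges back to the true value 1.0
```

The iteration converges here because the nonlinear term is a small perturbation; the dissertation's point is that the real electromagnetic analogue of this loop (with a full forward solver inside it) works well beyond the regime where the one-step Born approximation is valid.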
http://math.stackexchange.com/questions/259382/need-help-lim-n-to-infty-i3n
# need help - $\lim_{n \to \infty} i^{3n}$ It seemed easy to me at first, but now I am banging my head against the wall, unable to solve the problem. I need to check the convergence of the sequence below. I don't know how to start, although it seems to be a very easy one: $\lim_{n \to \infty} i^{3n} = ?$ I need help here. Do I have to work with $\exp$ here? I seem to have enough material in my brain and cannot use it in time. Tragedy! - Write $$a_n=i^{3n}=(i^{3})^{n}=(-i)^{n}=(-1)^ni^n$$ The last limit doesn't exist! Take $k_n=4n$ and $m_n=4n+2$. Then $(a_{k_n})$ and $(a_{m_n})$ are both subsequences of $(a_n)$ but $$a_{k_n}=(-1)^{4n}i^{4n}=1\cdot 1=1\to 1$$ while $$a_{m_n}=(-1)^{4n+2}i^{4n+2}=1\cdot (-1)=-1\to -1$$ as $n\to +\infty$ - thanks Nameless, but I don't get it yet; $k_{n}$ is an example of what? A subsequence of $(-1)^ni^n$? –  doniyor Dec 15 '12 at 17:08 @doniyor Yes. Let me add some more detail. –  Nameless Dec 15 '12 at 17:09 thanks a lot! nice –  doniyor Dec 15 '12 at 17:31 If $i=\sqrt{-1}$ then the sequence $(i^{3n})_1^{\infty}$ is divergent. - how can I show that divergence? –  doniyor Dec 15 '12 at 16:58 @doniyor: Another answer makes mine complete. Thanks Nameless. –  Babak S. Dec 15 '12 at 17:00 Note that $i^3 = -i$. Hence, $i^{3n} = (-i)^n$. $$i^{3n} = (-i)^n = \begin{cases} 1 & n \equiv 0 \pmod{4}\\ -i & n \equiv 1 \pmod{4}\\ -1 & n \equiv 2 \pmod{4}\\ i & n \equiv 3 \pmod{4} \end{cases}$$ -
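The mod-4 case analysis in the last answer can also be checked numerically; multiplying by $-i$ with unit values is exact in floating point, so the four-cycle comes out cleanly (the variable names below are mine):

```python
# i^(3n) = (-i)^n, which cycles with period 4, so lim as n -> infinity does not exist.
terms = []
z = 1
for n in range(12):
    terms.append(z)
    z *= -1j        # next term: multiply by i^3 = -i

# The residue classes mod 4 give the four distinct values 1, -i, -1, i.
assert terms[0::4] == [1, 1, 1]      # n = 0 (mod 4): subsequence constant at 1
assert terms[2::4] == [-1, -1, -1]   # n = 2 (mod 4): subsequence constant at -1
print("two subsequences with different limits, so the sequence diverges")
```

These are exactly Nameless's subsequences $k_n = 4n$ and $m_n = 4n + 2$: two subsequences converging to different values, hence no limit.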
http://www.kea-monad.blogspot.com/2010/04/m-theory-lesson-310.html
occasional meanderings in physics' brave new world. Marni D. Sheppeard, New Zealand ## Saturday, April 03, 2010 ### M Theory Lesson 310 Permutations may be represented by symmetric braids. A category theorist would usually draw a diagram indicating that the braid crossing goes neither over nor under, and that one can slide strands across each other. It would be nicer if we could draw a braid which obviously untangles to the identity braid. However, the two crossings here are different, like in the law $(\sigma) \circ (- \sigma) = 1$, which is to say that $\sigma$ is a bit like the complex $i$, satisfying $i + i^{-1} = 0$. How about elements of $S_3$? The permutation $(231)$ cubes to the identity, as in the diagram where the crossings have been chosen so that no strand actually links with any other. Note that each $(231)$ section is a Bilson-Thompson braid. What fun! By assigning $n$-th roots of unity to each strand one obtains a fun operator, for $S_d$. The choice $n = d+1$ (see last time) gives a representation of $S_n$. Kea said... Recall that the 3d MUB Pauli operators involve cubed roots. We could think of such root labels on three strands as possible half twists in each ribbon strand, and then the Bilson-T charge pieces would be represented by these cubed roots. A typical MUB operator would have a charge set +-0. April 03, 2010 1:19 PM CarlBrannen said... I'm starting to think I understand how to get GR from Pauli MUBs. Basically, if you consider one basis you get zitterbewegung and therefore the Dirac equation. This is a 1-dimensional theory; a particle having positive velocity spends more time going right (+c) than left (-c). The time spent going R and L sums up to 1: R+L = 1, while v = R-L. When you go to three MUBs the situation complicates. Now, instead of balancing between +x and -x, you balance between +x,+y,+z,-x,-y,-z. Thus it is possible for any single dimension to be offset. 
This fits in perfectly with the Gullstrand-Painleve coordinates for the black hole. April 03, 2010 2:02 PM Kea said... Oops, I should have mentioned those 6x6 matrices made from putting 2d Pauli ops into a 3x3 circulant ... which is your way of getting the particles ... Hmm, so you're heading towards a 'Three Time' point of view now? This sounds hopeful ... April 03, 2010 2:24 PM
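Returning to the body of the post: the claim that the permutation $(231)$ cubes to the identity can be verified directly with a two-line composition check (a minimal sketch, independent of the braid and ribbon picture):

```python
# 0-indexed version of the cycle (231): output position i receives input sigma[i]
sigma = (1, 2, 0)

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

sigma2 = compose(sigma, sigma)
sigma3 = compose(sigma, sigma2)
assert sigma3 == (0, 1, 2)   # sigma cubed is the identity permutation
```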
http://genomicsclass.github.io/book/pages/matrix_algebra_examples.html
## Examples

Now we are ready to see how matrix algebra can be useful when analyzing data. We start with some simple examples and eventually arrive at the main one: how to write linear models with matrix algebra notation and solve the least squares problem.

#### The average

To compute the sample average and variance of our data, we use these formulas $\bar{Y}=\frac{1}{N} \sum_{i=1}^N Y_i$ and $\mbox{var}(Y)=\frac{1}{N} \sum_{i=1}^N (Y_i - \bar{Y})^2$. We can represent these with matrix multiplication. First, define $A$ as the $N \times 1$ matrix made just of 1s. This implies that:

$$\bar{Y} = \frac{1}{N} A^\top Y$$

Note that we are multiplying by the scalar $1/N$. In R, we multiply matrices using %*%:

```r
library(UsingR)
y <- father.son$sheight
print(mean(y))
## [1] 68.68407
N <- length(y)
Y <- matrix(y,N,1)
A <- matrix(1,N,1)
barY <- t(A)%*%Y / N
print(barY)
##          [,1]
## [1,] 68.68407
```

#### The variance

As we will see later, multiplying the transpose of a matrix with another is very common in statistics. In fact, it is so common that there is a function in R:

```r
barY <- crossprod(A,Y) / N
print(barY)
##          [,1]
## [1,] 68.68407
```

For the variance, we note that if $r$ is the vector of deviations with entries $r_i = Y_i - \bar{Y}$, then $\mbox{var}(Y) = \frac{1}{N} r^\top r$. In R, if you only send one matrix into crossprod, it computes $r^\top r$, so we can simply type:

```r
r <- y - barY
crossprod(r)/N
##          [,1]
## [1,] 7.915196
```

Which is almost equivalent to:

```r
library(rafalib)
popvar(y)
## [1] 7.915196
```

#### Linear models

Now we are ready to put all this to use. Let's start with Galton's example. If we define the matrices $\mathbf{Y}$ (the sons' heights), $\mathbf{X}$ (a column of 1s next to the fathers' heights), $\boldsymbol{\beta} = (\beta_0, \beta_1)^\top$ and $\boldsymbol{\varepsilon}$ (the error terms), then we can write the model

$$Y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \quad i=1,\dots,N$$

as:

$$\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$$

which is a much simpler way to write it. The least squares equation becomes simpler as well since it is the following cross-product:

$$(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})^\top (\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})$$

So now we are ready to determine which values of $\beta$ minimize the above, which we can do using calculus to find the minimum.

#### Advanced: Finding the minimum using calculus

There are a series of rules that permit us to compute partial derivative equations in matrix notation. 
The only rule we need here tells us that the derivative of the above equation is:

$$2 \mathbf{X}^\top (\mathbf{Y} - \mathbf{X} \boldsymbol{\beta})$$

By equating the derivative to 0 and solving for $\boldsymbol{\beta}$, we have our solution:

$$\mathbf{X}^\top \mathbf{X} \hat{\boldsymbol{\beta}} = \mathbf{X}^\top \mathbf{Y} \implies \hat{\boldsymbol{\beta}} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{Y}$$

We usually put a hat on the $\beta$ that solves this, $\hat{\beta}$, as it is an estimate of the "real" $\beta$ that generated the data. Remember that the least squares are like a square (multiply something by itself) and that this formula is similar to the derivative of $f(x)^2$ being $2f(x)f'(x)$.

#### Finding LSE in R

Let's see how it works in R:

```r
library(UsingR)
x <- father.son$fheight
y <- father.son$sheight
X <- cbind(1,x)
betahat <- solve( t(X) %*% X ) %*% t(X) %*% y
### or
betahat <- solve( crossprod(X) ) %*% crossprod( X, y )
```

Now we can see the results of this by computing the estimated $\hat{\beta}_0+\hat{\beta}_1 x$ for any value of $x$:

```r
newx <- seq(min(x),max(x),len=100)
X <- cbind(1,newx)
fitted <- X%*%betahat
plot(x,y,xlab="Father's height",ylab="Son's height")
lines(newx,fitted,col=2)
```

This $\hat{\boldsymbol{\beta}}=(\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{Y}$ is one of the most widely used results in data analysis. One of the advantages of this approach is that we can use it in many different situations. For example, in our falling object problem:

```r
set.seed(1)
g <- 9.8 # gravitational acceleration in meters per second squared
n <- 25
tt <- seq(0,3.4,len=n) # time in secs, t is a base function
d <- 56.67 - 0.5*g*tt^2 + rnorm(n,sd=1)
```

Notice that we are using almost the same exact code:

```r
X <- cbind(1,tt,tt^2)
y <- d
betahat <- solve(crossprod(X))%*%crossprod(X,y)
newtt <- seq(min(tt),max(tt),len=100)
X <- cbind(1,newtt,newtt^2)
fitted <- X%*%betahat
plot(tt,y,xlab="Time",ylab="Height")
lines(newtt,fitted,col=2)
```

And the resulting estimates are what we expect:

```r
betahat
##           [,1]
##     56.5317368
## tt   0.5013565
##     -5.0386455
```

The Tower of Pisa is about 56 meters high. 
Since we are just dropping the object there is no initial velocity, and half the constant of gravity is 9.8/2=4.9 meters per second squared.

#### The lm Function

```r
X <- cbind(tt,tt^2)
fit=lm(y~X)
summary(fit)
## 
## Call:
## lm(formula = y ~ X)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.5295 -0.4882  0.2537  0.6560  1.5455 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  56.5317     0.5451 103.701   <2e-16 ***
## Xtt           0.5014     0.7426   0.675    0.507    
## X            -5.0386     0.2110 -23.884   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9822 on 22 degrees of freedom
## Multiple R-squared:  0.9973, Adjusted R-squared:  0.997
## F-statistic:  4025 on 2 and 22 DF,  p-value: < 2.2e-16
```

Note that we obtain the same values as above.

#### Summary

We have shown how to write linear models using linear algebra. We are going to do this for several examples, many of which are related to designed experiments. We also demonstrated how to obtain least squares estimates. Nevertheless, it is important to remember that because $y$ is a random variable, these estimates are random as well. In a later section, we will learn how to compute standard error for these estimates and use this to perform inference.
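For readers outside R, the same normal-equations computation can be sketched in Python with NumPy. This is a hedged illustration: the father.son dataset is not available here, so synthetic data with an invented slope stands in for the heights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(68.0, 3.0, size=n)                   # stand-in for fathers' heights
y = 34.0 + 0.5 * x + rng.normal(0.0, 2.4, size=n)   # stand-in for sons' heights

X = np.column_stack([np.ones(n), x])
# normal equations: solve (X^T X) betahat = X^T y
betahat = np.linalg.solve(X.T @ X, X.T @ y)

# cross-check against the library's least-squares routine
betahat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(betahat, betahat_lstsq)
```

Solving the normal equations directly mirrors the R code above; in practice `lstsq` (based on a QR or SVD factorization) is preferred numerically over forming $X^\top X$ explicitly.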
http://www.math.gatech.edu/node/15122
Euler equation with fixed or free boundaries - from a Lagrangian point of view Series: School of Mathematics Colloquium Thursday, November 6, 2008 - 11:00 1 hour (actually 50 minutes) Location: Skiles 269 , School of Mathematics, Georgia Tech In this talk, we discuss 1.) the nonlinear instability and unstable manifolds of steady solutions of the Euler equation with fixed domains and 2.) the evolution of free (inviscid) fluid surfaces, which may involve vorticity, gravity, surface tension, or magnetic fields. These problems can be formulated in a Lagrangian formulation on infinite dimensional manifolds of volume preserving diffeomorphisms with an invariant Lie group action. In this setting, the physical pressure turns out to come from the combination of the gravity, surface tension, and the Lagrangian multiplier. The vorticity is naturally related to an invariant group action. In the absence of surface tension, the well-known Rayleigh-Taylor and Kelvin-Helmholtz instabilities appear naturally related to the signs of the curvatures of those infinite dimensional manifolds. Based on these considerations, we obtain 1.) the existence of unstable manifolds and L^2 nonlinear instability in the cases of the fixed domains and 2.) in the free boundary cases, the local well-posedness with surface tension in a rather uniform energy method. In particular, for the cases without surface tension which do not involve hydrodynamical instabilities, we obtain the local existence of solutions by taking the vanishing surface tension limit.
https://artofproblemsolving.com/wiki/index.php?title=Collatz_Problem&diff=prev&oldid=73438
# Difference between revisions of "Collatz Problem" Define the following function on the positive integers: $f(n) = n/2$ if $n$ is even, and $f(n) = 3n+1$ if $n$ is odd. The Collatz conjecture says that, for any positive integer $n$, the sequence $n, f(n), f(f(n)), \ldots$ contains 1. This conjecture is still open. Some people have described it as the easiest unsolved problem in mathematics.
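The iteration is simple to state in code; here is a small sketch (not part of the wiki page) that computes the sequence for a given starting value:

```python
def collatz_seq(n):
    """Return the Collatz sequence starting at n, stopping at the first 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

# starting from 6, the sequence reaches 1 after eight steps
assert collatz_seq(6) == [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

The conjecture is precisely the claim that this loop terminates for every positive integer input.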
https://physics.stackexchange.com/questions/93772/how-to-measure-missing-transverse-energy
# How to measure (missing) transverse energy There is traditionally a bit of confusion between missing transverse energy and missing transverse momentum. I've seen both used interchangeably, and sometimes even things like "$\not E_T = -|\sum \vec p_T|$". Just to clarify before my question: if both quantities are used, then missing transverse momentum usually refers to the sum of all tracks' or reconstructed objects' $p_T$: $$\not \vec p_T = - \!\!\!\sum_{i \;\in\; \mathrm{tracks}}\!\!\! \vec p_T(i)$$ and missing transverse energy refers to something calculated using mainly the calorimeter, right? So, how do you calculate (missing) transverse energy from calorimeter readings? Especially, how do you do a vectorial sum, even though calorimeter energies are scalar values? You don't have particle momentum vectors or similar, but only the readings of calorimeter cells. (Reconstructed objects come into play later, for corrections, but are not involved at first order.) Naively, I'd say you construct a vector from the position of each calorimeter cell, and give it a length proportional to its energy: $$\qquad\quad \vec E_\mathrm{cell} = \frac{\vec x_\mathrm{cell}}{|\vec x_\mathrm{cell}|} E_\mathrm{cell} \qquad (?)$$ and then just sum those vectors up. But you want to have transverse energy. How do you do that? Just by using 2-d vectors (only $x$ and $y$ coordinates)? And finally, is the geometry of the cells a factor in the calculation? (I'm looking for a general answer, but where it is experiment-specific, I'm interested in ATLAS and the way it was done at DZero. I can imagine the answer would be very different for CMS with their special calorimeter. And sorry, I tried to read the design documents, but I couldn't really understand how it is calculated. I'd be happy if someone would point me to some clear documentation though. There are a couple of similar questions, but they don't hit the point I'm wondering about. 
This question basically asks "why use transverse quantities", and the answer conflates transverse energy and momentum (I'm interested in the one determined by the calorimeter where you don't have $\vec p_T$ vectors!). And this question comes from the other side and asks how to calculate $\not E_T$ from four-vectors, not from experimental readings.) • This would need a long answer. Here is a CMS paper with all the definitions in use: arxiv.org/pdf/1106.5048v1.pdf . The direction of the energy deposits is used by the way. – anna v Jan 15 '14 at 14:30 • "How do you do that? Just by using 2-d vectors (only x and y coordinates)?" Yes. – Matt Reece Jan 15 '14 at 16:49 • And the reason you use calorimeter deposits instead of just tracks is that you don't want to omit things like photons and $\pi^0$'s. – Matt Reece Jan 15 '14 at 16:49 In high energy physics momentum and energy are sloppily conflated because they approximate each other extremely well in the relativistic limit (E = $\sqrt{m^2c^4+p^2c^2}$ $\approx$ pc, or E = p in natural units ). If you had top quarks decaying in the calorimeter then the approximation would be invalid, but you don't: massive particles decay quickly into relativistic decay products (whose momenta are roughly equal to their energy) before they reach the calorimeter. The transverse energy is calculated from the calorimeter cells by exploiting the fact that the calorimeter cells are by design organized into towers that point roughly towards the collision point. Thus a transverse component can be defined. It would be difficult to argue that it is a "precision measurement." More generally the conversion of calorimeter energy deposits into a measured energy and direction associated with an originating parton is a messy business, involving clustering algorithms (which energy deposit is associated with which jet?) and the jet energy scale (how does the jet's true energy and direction relate to the measured signals in the calorimeter?).
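The naive cell-based recipe the question sketches can be written out as a toy calculation. This is only an illustration of the 2-d (x, y) projection, not any experiment's actual algorithm: the cell values are invented, and $E_T = E/\cosh\eta$ is used to convert a cell energy at pseudorapidity $\eta$ into its transverse component (valid for massless deposits, since $\sin\theta = 1/\cosh\eta$).

```python
import math

# invented calorimeter cells: (energy in GeV, pseudorapidity eta, azimuth phi)
cells = [(50.0, 0.3, 0.1), (45.0, -0.2, 3.0), (20.0, 1.1, -2.0)]

# project each cell's transverse energy onto x and y, then negate the sum;
# only the 2-d (x, y) components enter, which is what makes it "transverse"
mex = -sum(E / math.cosh(eta) * math.cos(phi) for E, eta, phi in cells)
mey = -sum(E / math.cosh(eta) * math.sin(phi) for E, eta, phi in cells)
met = math.hypot(mex, mey)
```

In a real experiment the cell directions come from the detector geometry (tower pointing), and corrections from reconstructed objects are applied on top, as the answer describes.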
https://math.stackexchange.com/questions/1725821/structure-of-gal-mathbbq-zeta-15-mathbbq/1725844
# Structure of $Gal(\mathbb{Q}(\zeta_{15})/\mathbb{Q})$? $\zeta_{15}$ is a primitive $15$th root of unity. Question: Find the structure of the group $Gal(\mathbb{Q}(\zeta_{15})/\mathbb{Q})$. I know that if $p$ is prime then $G=Gal(\mathbb{Q}(\zeta_{p})/\mathbb{Q})\cong\mathbb{Z_{p}}^*$, but I am not sure how to solve this when the index is not prime. Does my required group have any relation to some cyclic group $\mathbb{Z_n}$, i.e. is it abelian? Would really appreciate your guidance as I have not got my head around these concepts of cyclotomic fields and Galois structure. Thanks • It's the same thing as for primes. The Galois group is $\Bbb{Z}_{15}^*$ –  Crostul Apr 3 '16 at 11:11 • Why is this the case? I would like to be able to construct some kind of proof –  thinker Apr 3 '16 at 11:11 • en.wikipedia.org/wiki/Cyclotomic_field –  Crostul Apr 3 '16 at 11:12 • I have seen it, and it does not give an explanation in sufficient detail –  thinker Apr 3 '16 at 11:13 • Google "cyclotomic fields". You will find a lot of proofs for this fact, and other things –  Crostul Apr 3 '16 at 11:15 Let $G$ be the Galois group of $\Bbb Q(\zeta_n)$ over $\Bbb Q$. An element $f \in G$ is entirely determined by its image on $\zeta_n$. Since $f(\zeta_n)^k=f(\zeta_n^k)=1 \iff \zeta_n^k=1$ (recall that $f$ is a field automorphism), you know that $f(\zeta_n)$ is a primitive $n$-th root of unity: $$f(\zeta_n)=\zeta_n^{k_f}$$ for some integer $1≤k_f≤n$ coprime with $n$. Therefore, you have a well-defined map $$\alpha : G \to (\Bbb Z/n\Bbb Z)^* \qquad \alpha(f)=[k_f]_n$$ You can check that this is a group isomorphism. It is clearly injective. Since $|G|=[\Bbb Q(\zeta_n):\Bbb Q]=\text{deg}(\Phi_n)=\phi(n) = |(\Bbb Z/n\Bbb Z)^*|$, $\alpha$ is bijective. Finally, $(f \circ g)(\zeta_n)=f(g(\zeta_n)) = f(\zeta_n^{k_g})=\zeta_n^{k_g \, k_f}$ shows that $$\alpha(f \circ g) = [k_{f \circ g}]_n = [k_f]_n[k_g]_n=\alpha(f)\alpha(g).$$ More generally, let $K$ be a field of characteristic coprime with $n$ (e.g. 
if $\mathrm{char}(K)=0$), and suppose that $R_n = \{x \in \overline K \mid x^n=1\}$ denotes the set of $n$-th roots of unity in an algebraic closure $\overline K$ of $K$ (notice that $R_n$ is always a cyclic group for the multiplication, since it is a finite subgroup of $(K^*,\cdot)$). Then the extension $K(R_n)$ over $K$ is Galois (it is separable because $n$ is coprime with the characteristic of $K$) and you can show similarly that the Galois group of $K(R_n)$ over $K$ embeds in $(\Bbb Z/n\Bbb Z)^*$. • so in this case $\zeta_{n}$ acts as a cyclic generator? –  thinker Apr 3 '16 at 14:10 • Yes, absolutely: the $n$-th roots of $1$ (i.e. $x$ such that $x^n=1$) are of the form $x=\zeta_n^j$, i.e. the set $R_n$ of the $n$-th roots of $1$ is actually the subgroup generated by $\zeta_n$. Indeed, $R_n$ is a finite subgroup (for the multiplication) of $\overline{ \Bbb Q}^*$ (where $\overline{\Bbb Q}$ is an algebraic closure of $\Bbb Q$. You may know that every finite subgroup of $(K^*,\cdot)$ (where $K$ is any field) is actually cyclic. –  Watson Apr 3 '16 at 14:13 • Then, $\Bbb Q(\zeta_n)$ over $\Bbb Q$ is Galois because it is separable (any extension of a field of characteristic $0$, as $\Bbb Q$, is separable) and normal (it is the splitting field of $X^n-1$ : any root of $X^n-1$ belongs to $R_n = \langle \zeta_n \rangle$, and therefore to $\Bbb Q(\zeta_n)$). –  Watson Apr 3 '16 at 14:16
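As a concrete check of the isomorphism $G \cong (\Bbb Z/15\Bbb Z)^*$, one can enumerate the units mod 15 and their multiplicative orders in a few lines (a sketch, not from the thread). The order profile $[1,2,2,2,4,4,4,4]$ is that of $\Bbb Z/2 \times \Bbb Z/4$, so the group is abelian but not cyclic:

```python
from math import gcd

# the units mod 15: residues coprime with 15
units = [k for k in range(1, 15) if gcd(k, 15) == 1]

def mult_order(k, n=15):
    # smallest o >= 1 with k^o = 1 (mod n)
    o, x = 1, k % n
    while x != 1:
        x = x * k % n
        o += 1
    return o

orders = sorted(mult_order(k) for k in units)
assert len(units) == 8                      # phi(15) = phi(3) * phi(5) = 8
assert orders == [1, 2, 2, 2, 4, 4, 4, 4]   # element orders of Z/2 x Z/4
```

No element has order 8, so $(\Bbb Z/15\Bbb Z)^*$ is not cyclic, matching the decomposition $(\Bbb Z/15)^* \cong (\Bbb Z/3)^* \times (\Bbb Z/5)^* \cong \Bbb Z/2 \times \Bbb Z/4$ from the Chinese remainder theorem.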
https://www.ias.ac.in/listing/bibliography/pram/VLADIMIR_A_SUKRATOV
Articles written in Pramana – Journal of Physics • Amorphisation of boron carbide under gamma irradiation Boron carbide (B$_4$C) has been widely used in nuclear reactors and nuclear applications. In this work, the high-purity (99.9%) B$_4$C samples were irradiated using a gamma source ($^{60}$Co) with a dose rate ($D$) of 0.27 Gy/s at different gamma irradiation doses at room temperature. Phase and microstructural characterisation of B$_4$C samples were carried out using X-ray diffraction (XRD) and scanning electron microscopy (SEM). XRD results displayed some degradation of the diffraction peaks. The calculations reveal that 62% of B$_4$C has changed into the amorphous phase when the irradiation dose is 194.4 kGy. Fourier transform infrared spectroscopy (FTIR) was used to explain chemical bonds and functional groups of B$_4$C samples before and after gamma irradiation. The results showed that C–C chemical bonds are weaker than B–C chemical bonds and tend to break under gamma irradiation. Element mapping analysis for each gamma irradiation dose of B$_4$C samples was performed using SEM patterns. The dynamics of the elements on the surface and chemical formula of all B$_4$C samples were also determined after gamma irradiation. • Pramana – Journal of Physics Volume 94, 2020
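As a back-of-the-envelope check on the stated numbers (the abstract itself does not quote an exposure time, so this is a derived figure, not a reported one), the irradiation time implied by a 0.27 Gy/s dose rate and a 194.4 kGy total dose follows directly:

```python
dose_rate = 0.27        # Gy/s, as stated in the abstract
total_dose = 194.4e3    # Gy (the 194.4 kGy dose)

t_seconds = total_dose / dose_rate
t_hours = t_seconds / 3600.0
# 720000 s, i.e. about 200 hours (roughly 8.3 days) of exposure
```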
https://economics.stackexchange.com/questions/29222/mixted-strategy-that-assigns-positive-weight-to-a-pure-startegy-that-is-dominate/29226
Mixed strategy that assigns positive weight to a pure strategy that is dominated This problem is from Fernando Vega-Redondo (Economics and the Theory of Games), Exercise 2.1. Let G be a game in strategic form. Prove that, for every player $$i\in N$$, every mixed strategy $$\sigma_{i}\in \Sigma_{i}$$ that assigns positive weight to a pure strategy $$s_{i}\in S_{i}$$ that is dominated can itself always be improved upon by another strategy $$\sigma_{i}'$$. That is, if $$s_{i}\in S_{i}$$ is strongly dominated, then for every $$\sigma_{i}\in \Sigma_{i}$$ with $$\sigma_{i}(s_{i})>0$$ there exists $$\sigma_{i}'\in \Sigma_{i}$$ such that $$\forall s_{-i}\in S_{-i}: \pi_{i}(\sigma_{i}',s_{-i})>\pi_{i}(\sigma_{i},s_{-i})$$. Q1: Why do they say this claim is obvious? I would like to know if anyone could tell me how to build the mixed strategy that improves upon the one assigning positive probability to the dominated strategy, since it is not obvious to me. Thanks! The short answer is that a mixed strategy $$\sigma_i$$ that uses with positive probability a dominated pure strategy $$s_i$$ can always be improved by excluding $$s_i$$ from its mixing support and redistributing its "probability weight" to one of its dominating strategies. More formally, suppose that strategy $$s_i'$$ dominates $$s_i$$ (you can think of $$s_i'$$ as a pure or mixed strategy itself, this does not make any difference for the argument; for simplicity I suppose $$s_i'$$ is a pure strategy), that is $$\forall s_{-i}\in S_{-i}: \pi(s_i',s_{-i})>\pi(s_i,s_{-i})$$. Also, suppose that strategy $$\sigma_i$$ plays strategy $$s_i$$ with positive probability $$p_{s_i}\in(0,1)$$. Build strategy $$\sigma_i'$$ as follows: identical to $$\sigma_i$$ but replace $$s_i$$ with $$s_i'$$ , that is play $$s_i$$ with probability 0 and play $$s_i'$$ with (additional) probability $$p_{s_i}$$. 
By linearity of the payoff function you have, $$\forall s_{-i}\in S_{-i}$$: $$\pi(\sigma_i',s_{-i})=p_{s_i}\pi(s_i',s_{-i}) + (1-p_{s_i})\pi(\Lambda_i,s_{-i}) > p_{s_i}\pi(s_i,s_{-i}) + (1-p_{s_i})\pi(\Lambda_i,s_{-i}) = \pi(\sigma_i,s_{-i})$$ where $$\Lambda_i$$, loosely speaking, denotes the residual combination of strategies played in $$\sigma_i$$, apart from $$s_i$$, inherited by $$\sigma_i'$$ too.
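The redistribution argument can be illustrated on a toy game (all payoffs invented for illustration): row strategy M is strictly dominated by T, and shifting M's probability mass onto T strictly improves the mixed strategy's expected payoff against every opposing pure strategy, exactly as the linearity computation shows.

```python
# row player's payoffs in a toy 3x2 game; T strictly dominates M
payoff = {('T', 'L'): 3, ('T', 'R'): 2,
          ('M', 'L'): 1, ('M', 'R'): 0,
          ('B', 'L'): 0, ('B', 'R'): 3}

sigma = {'T': 0.2, 'M': 0.5, 'B': 0.3}        # puts weight 0.5 on dominated M
sigma_prime = {'T': 0.7, 'M': 0.0, 'B': 0.3}  # M's weight shifted onto T

def expected_payoff(mix, s_opp):
    return sum(p * payoff[(s, s_opp)] for s, p in mix.items())

# the improvement holds against every pure strategy of the opponent
for s_opp in ('L', 'R'):
    assert expected_payoff(sigma_prime, s_opp) > expected_payoff(sigma, s_opp)
```

The residual weight on B (the $\Lambda_i$ part of the proof) is untouched, which is why the comparison reduces to the dominated-versus-dominating terms alone.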
https://cna.ca/issues-policy/environment/emissions-research/
# Emissions research When the greenhouse-gas emissions resulting from the whole life-cycle of various power-generation methods are taken into account, nuclear power compares much more closely to renewable sources than to fossil fuels. Several international studies delve further into the topic: • The Intergovernmental Panel on Climate Change, which is the United Nations' multinational expert panel, produced its 2011 Special Report on Renewable Energy Sources and Climate Change Mitigation; the Summary for Policymakers provides a general overview of global knowledge on the subject. • The Nuclear Energy Institute, which is the Canadian Nuclear Association's equivalent in the United States, provides a general discussion on its Life-Cycle Emissions Analyses page. • The University of Wisconsin has published a variety of life-cycle studies that focus on the "net energy payback" from various nuclear technologies in comparison with wind and coal. While reviewing these and other studies, it is important to bear in mind that their results can vary: they employ different methodologies, and the time periods and geographical contexts where the power generation occurs can present an enormous number of variables. So too can the energy technologies being studied (the type of nuclear reactor, conventional natural gas versus shale gas, or whether decommissioned windmill towers are recycled, incinerated or landfilled).
https://www.talkstats.com/threads/generalised-least-squares-from-regression-coefficients-to-correlation-coefficients.44591/
# Generalised least squares: from regression coefficients to correlation coefficients?

#### sqrtsqrt

##### New Member

Hi All, I asked this on Stack Exchange, but it seems no-one there knows the answer. I wonder if anyone on talkstats can shed some light on it. For least squares with one predictor: $$y = \beta x + \epsilon$$ If $$x$$ and $$y$$ are standardised prior to fitting (i.e. $$\sim N(0,1)$$), then: - $$\beta$$ is the same as the Pearson correlation coefficient, $$r$$. - $$\beta$$ is the same in the reflected regression: $$x = \beta y + \epsilon$$ For generalised least squares (GLS), does the same apply? I.e. if I standardise my data, can I obtain correlation coefficients directly from the regression coefficients? From experimenting with data, the reflected GLS leads to different $$\beta$$ coefficients, and I'm also not sure I believe that the regression coefficients fit with my expected values for correlation. I know people quote GLS correlation coefficients, so I am wondering how they arrive at them and hence what they really mean. Thanks for considering this
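For ordinary least squares the two claims in the question are easy to verify numerically. Here is a quick sketch (OLS only, not GLS: with a non-identity weight matrix the reflected regression generally gives a different coefficient, as you observed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)   # correlated toy data

# Standardise both variables (z-scores, population std)
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# OLS slope through the origin on standardised data
beta_xy = zx @ zy / (zx @ zx)   # regress y on x
beta_yx = zy @ zx / (zy @ zy)   # reflected regression: x on y
r = np.corrcoef(x, y)[0, 1]     # Pearson correlation

# beta_xy == beta_yx == r (up to floating-point error)
```

Both slopes equal the Pearson r because, after standardisation, the denominator sums Σz² are both n, leaving the same symmetric numerator Σz_x z_y.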
# Operations with Square Roots You can perform a number of different operations with square roots. Some of these operations involve a single radical sign, while others can involve many radical signs. The rules governing these operations should be carefully reviewed. #### Under a single radical sign You may perform operations under a single radical sign. ##### Example 1 Perform the operation indicated. #### When radical values are alike You can add or subtract square roots themselves only if the values under the radical sign are equal. Then simply add or subtract the coefficients (numbers in front of the radical sign) and keep the original number in the radical sign. ##### Example 2 Perform the operation indicated. Note that the coefficient 1 is understood in . #### When radical values are different You may not add or subtract different square roots. #### Addition and subtraction of square roots after simplifying Sometimes, after simplifying the square root(s), addition or subtraction becomes possible. Always simplify if possible. ##### Example 4 1. These cannot be added until is simplified. Now, because both are alike under the radical sign, 2. Try to simplify each one. Now, because both are alike under the radical sign, #### Products of nonnegative roots Remember that in multiplication of roots, the multiplication sign may be omitted. Always simplify the answer when possible. ##### Example 5 Multiply. 1. If each variable is nonnegative, 2. If each variable is nonnegative, 3. If each variable is nonnegative, #### Quotients of nonnegative roots For all positive numbers, In the following examples, all variables are assumed to be positive. ##### Example 6 Divide. Leave all fractions with rational denominators. Note that the denominator of this fraction in part (d) is irrational. In order to rationalize the denominator of this fraction, multiply it by 1 in the form of ##### Example 7 Divide. Leave all fractions with rational denominators. 1. 
First simplify : or

Note: In order to leave a rational term in the denominator, it is necessary to multiply both the numerator and denominator by the conjugate of the denominator. The conjugate of a binomial contains the same terms but the opposite sign. Thus, (x + y) and (x − y) are conjugates.

##### Example 8

Divide. Leave the fraction with a rational denominator.
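The rules above can be checked with a computer algebra system. A small sketch using SymPy (just one choice of tool; any CAS behaves similarly):

```python
import sympy as sp

# Like radicals add by their coefficients: 2*sqrt(3) + 5*sqrt(3) = 7*sqrt(3)
like_sum = 2*sp.sqrt(3) + 5*sp.sqrt(3)

# Simplifying first can make addition possible:
# sqrt(50) = 5*sqrt(2), so sqrt(50) + sqrt(2) = 6*sqrt(2)
simplified_sum = sp.sqrt(50) + sp.sqrt(2)

# Rationalising a denominator with the conjugate:
# 1/(1 + sqrt(2)) -> (sqrt(2) - 1)/((sqrt(2) + 1)(sqrt(2) - 1)) = sqrt(2) - 1
rationalised = sp.radsimp(1 / (1 + sp.sqrt(2)))
```

SymPy performs the `sqrt(50) = 5*sqrt(2)` reduction automatically, which mirrors the "always simplify first" advice in the lesson.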
# phosphoric acid titration calculation by tq1088 Tags: phosphoric acid

P: 4
1. The problem statement, all variables and given/known data

The problem is that I am trying to find out the concentration of phosphoric acid in cola. I have completed the titration of 3 different types of cola. I have obtained the pH and volume on the titration graph. Note: I am trying to figure out the overall concentration of H3PO4 first, not the H+ ions. I will do that later, but first I need to find the concentration of the H3PO4. I have results like:

1st equivalence point: pH is 4.87 and volume is 2.607 mL. When 2.607 mL of sodium hydroxide was added, the pH at the first equivalence point is 4.87. The same follows for the second equivalence point.
2nd equivalence point: pH is 9.27 and volume is 6.983 mL.

By the way, the concentration of the sodium hydroxide is 0.1 M and the amount of cola used is 50 mL.

3. The attempt at a solution

Note: The below example is with different values from those above. I have already done some calculating where I used a 3:1 ratio to find the concentration from the 1st and 2nd equivalence points, but I think these are incorrect. For example I did this:

NaOH --> volume is 1.964 mL (0.001964 L) and concentration is 0.1 M.
H3PO4 --> volume = 50 mL (0.05 L) and concentration is unknown.
Ratio is 3:1, so 0.001964 × (1/3) = 6.54 × 10^-4.
So H3PO4 concentration is 6.54 × 10^-4 / 0.05 = 0.013 M.

I did the same for the 2nd equivalence point but I used 3.286 mL (0.003286 L) instead and I got 0.022 M.

I don't think this is correct because I need to find the overall concentration of the phosphoric acid. This is just one example of mine. I have many more results. If someone could help me calculate the concentration of the acid in the cola, I would really appreciate it. Thanks

2. Relevant equations

This is the overall equation: 3NaOH + H3PO4 --> Na3PO4 + 3H2O

Admin P: 22,712
What has been neutralized at the first equivalence point? Write the reaction equation - is it 3:1? Do the same for the second equivalence point.
--
P: 4
1st equiv point: H3PO4 (aq) --> H+ (aq) + H2PO4−
2nd equiv point: H2PO4− (aq) --> H+ (aq) + HPO42−
3rd equiv point: HPO42− (aq) --> H+ (aq) + PO43−
But the 3rd doesn't really matter because it didn't show up on the titration graph.
The overall equation is H3PO4 (aq) + 3NaOH (aq) --> Na3PO4 (aq) + 3H2O (l).
3:1 because the reaction occurs between 3 of NaOH and 1 of H3PO4, and I'm trying to find the overall concentration of the H3PO4, and I got 2 equiv points on the titration graph.

P: 4
## phosphoric acid titration calculation
Like the amount of NaOH needed to reach the 1st equiv point is 1.964 mL. The pH was read at 5.14. Then to reach the 2nd equiv point, 3.286 mL of NaOH is needed. The pH was read at 9.17.
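Following the Admin's hint: at the first equivalence point only one proton has been neutralized (H3PO4 + NaOH --> NaH2PO4, a 1:1 ratio), and by the second equivalence point two protons have been (a 1:2 ratio of acid to base). A sketch of the arithmetic using the volumes from the first post (the two estimates won't agree exactly, which may reflect other acidic species in the drink):

```python
c_naoh = 0.1          # NaOH concentration, mol/L
v_sample = 0.050      # volume of cola titrated, L

# First equivalence point: H3PO4 + NaOH -> NaH2PO4, 1:1 ratio
v1 = 2.607e-3         # L of NaOH added
c_acid_1 = c_naoh * v1 / v_sample          # ~0.0052 mol/L

# Second equivalence point: two protons neutralised in total, 1:2 ratio
v2 = 6.983e-3         # L of NaOH added (cumulative)
c_acid_2 = c_naoh * v2 / 2 / v_sample      # ~0.0070 mol/L
```

Note that for pure phosphoric acid v2 would be exactly 2 × v1; the fact that it isn't here is itself informative about the cola matrix.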
# 'Proper' symbols for R, Z, C etc

• September 6th 2011, 05:30 AM Bernhard
'Proper' symbols for R, Z, C etc
I notice some posts in Linear and Abstract Algebra have the 'proper' symbols for R, Z, C etc - how is this done? Peter

• September 6th 2011, 05:40 AM alexmahone
Re: 'Proper' symbols for R, Z, C etc
Quote: Originally Posted by Bernhard I notice some posts in Linear and Abstract Algebra have the 'proper' symbols for R, Z, C etc - how is this done? Peter
\mathbb{R} gives $\mathbb{R}$ etc.

• September 6th 2011, 05:56 AM CaptainBlack
Re: 'Proper' symbols for R, Z, C etc
Quote: Originally Posted by Bernhard I notice some posts in Linear and Abstract Algebra have the 'proper' symbols for R, Z, C etc - how is this done? Peter
They are not "proper" symbols for these things; the proper symbols (if that has any meaning) are just bold R, Z, C ... What you are referring to are the "blackboard bold" symbols, a typeface which mimics how the standard bold characters are written on a blackboard. But then these seem to be more or less standard, and someone else has already told you how to generate them. CB

• September 6th 2011, 06:27 AM Prove It
Re: 'Proper' symbols for R, Z, C etc
\mathbf{R} gives $\displaystyle \mathbf{R}$. The "handwritten" version that looks like $\displaystyle \mathbb{R}$ was just invented by those who were writing it to denote superbold font for the printers, so the superbold form is the most correct.

• October 16th 2011, 07:10 AM Deveno
Re: 'Proper' symbols for R, Z, C etc
An amusing tale of a font modelled after a blackboard approximation of yet another font, that is slowly replacing the font it was once just a "stand-in" for. Oh, the back-biting backwaters of math symbolism!
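For anyone trying this in a standalone LaTeX document rather than in forum math tags: `\mathbb` is not in base LaTeX; it requires the amssymb package (amsfonts also provides it). A minimal example:

```latex
\documentclass{article}
\usepackage{amssymb}   % provides \mathbb (blackboard bold)
\begin{document}
Blackboard bold: $\mathbb{R}$, $\mathbb{Z}$, $\mathbb{C}$, $\mathbb{N}$, $\mathbb{Q}$.
Plain bold:      $\mathbf{R}$, $\mathbf{Z}$, $\mathbf{C}$.
\end{document}
```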
# Given an integer $n >0$, how many ways can we express $n$ as the sum of three natural numbers $n_1,n_2,n_3$ ?

Given an integer $n >0$, how many ways can we express $n$ as the sum of three natural numbers $n_1,n_2,n_3$ ?

- That definition doesn't define a function. –  Thomas Andrews Feb 12 '12 at 17:54
I do not really understand what you mean by "cardinality of $f(n)$" as this is just one element of $\mathbb{N}^3$. If you want to phrase your question with the help of a function then you are more likely asking for the cardinalities of the fibers of the map $g: \mathbb{N}^3 \rightarrow \mathbb{N}, (n_1,n_2,n_3) \mapsto n_1+n_2+n_3$. –  Matthias Klupsch Feb 12 '12 at 17:56
The function made things confusing, I have edited the question –  Freeman Feb 12 '12 at 17:57

Judging by the comments and the other answer, I think I might be missing out on something. Yet this might still be right, so I post this answer; please let me know if I'm horribly wrong :-) I think this question can be reformulated in the following manner: Suppose we have $n+2$ balls, $n$ of which are white, and the other two are black. Now, each different way in which you order the $n+2$ balls gives you a different partition of $n$ into $3$ natural numbers ($0$ included) - just count how many white balls are between any two black ones. Moreover, any partition of $n$ into naturals $n_1+n_2+n_3=n$ can be visualized as an ordering of the above $n+2$ balls: just put the first $n_1$ white balls in a row, followed by a black one, then the next $n_2$ white balls, followed by a black one, and then the last $n_3$ white ones. So the question reduces to how many ways you can arrange $n$ white balls and two black ones in a row - this is easily seen to be $\binom{n+2}{2}$. (Once the positions for the two black ones have been set, the partition is determined.) - Clever! This is definitely correct. 
In my simplex answer, I mistakenly assumed that he wanted $n_1,n_2,n_3 > 0$, so it would give a different answer, but still, the answer would probably reduce to something quite simple, and would match a similar combinatorial formula. –  Lieven Feb 12 '12 at 18:16 This is very enlightening, thanks for making the effort to make this understandable. –  Freeman Feb 12 '12 at 19:08 @LHS: sure thing :) –  kneidell Feb 12 '12 at 23:09 @LHS: This answer assumes that you find (1,2,3) different from (3,2,1)-that is, you are using ordered partitions. If you want ordered partitions and require all the numbers be >0, you just have $\binom {n-1}{2}$. The same argument holds, but you can't put black balls at the ends or next to each other. –  Ross Millikan Feb 13 '12 at 1:35 I mistakenly assumed that you wanted $n_1,n_2,n_3 > 0$, so the following answer is not correct. I won't remove it, since I think it might still be interesting. I hope I understood you correctly, here is my try $$f(n) = \begin{cases} (0,0,0), & n=0 \\ (0,0,1), & n=1 \\ (0,0,2), & n=2 \\ \Big(\frac{n-a}{3},\frac{n-a}{3},\frac{n-a}{3}+a\Big), & n>2 \end{cases}$$ Here $n\equiv a\mod 3$. Also, there are $\tbinom{3 + n - 1}{n}$ ways, by http://en.wikipedia.org/wiki/Stars_and_bars_%28probability%29
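The stars-and-bars count, and Ross's variant requiring every part to be positive, can both be confirmed by brute-force enumeration for a small $n$:

```python
from itertools import product
from math import comb

n = 7

# Ordered triples of naturals (0 allowed) summing to n: C(n+2, 2)
brute = sum(1 for t in product(range(n + 1), repeat=3) if sum(t) == n)

# Same count, but requiring every part > 0 (Ross's variant): C(n-1, 2)
brute_pos = sum(1 for t in product(range(1, n + 1), repeat=3) if sum(t) == n)

# For n = 7: brute == C(9, 2) == 36 and brute_pos == C(6, 2) == 15
```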
# NIPS Proceedings

## Bayesian Belief Polarization

### Abstract

Situations in which people with opposing prior beliefs observe the same evidence and then strengthen those existing beliefs are frequently offered as evidence of human irrationality. This phenomenon, termed belief polarization, is typically assumed to be non-normative. We demonstrate, however, that a variety of cases of belief polarization are consistent with a Bayesian approach to belief revision. Simulation results indicate that belief polarization is not only possible but relatively common within the class of Bayesian models that we consider.
# 1. Calculating Normal Probabilities

Overview

An important skill in hypothesis testing is being able to identify probabilities from a distribution. In this lesson, we'll learn how to calculate probabilities using a normal distribution, which is used in many hypothesis tests. To explore more Kubicle data literacy subjects, please refer to our full library.

Summary

1. Lesson Goal (00:19)
The goal of this lesson is to calculate probabilities using the normal distribution.

2. Overview of the Problem (00:26)
The problem in this lesson relates to scores on an exam taken by many students, where the population mean is 240 and the standard deviation is 50. Our aim is to calculate the probability of various possible values from this distribution.

3. Calculating a Probability from the Distribution (00:39)
The first problem requires us to calculate the probability of observing a value above 300. This is equivalent to the area to the right of 300 under the normal distribution. The area to the left of 300 represents the probability of observing a value below 300. To find this area, we first need to calculate the Z-score for this observation, to transform it to a point on the standard normal distribution. Next, we use a Z-table to identify the area below this Z-score. The table we use for this purpose can be found here. We use the rows and columns to identify the Z-score of interest, then find the corresponding probability in the body of the table. The table gives us the probability of observing a value below 300, and we can subtract this probability from one to find the probability of a score above 300. Note that we can also use statistical software to find these probabilities instead of a Z-table, but we use the table in this lesson so that you fully understand how the process works.

4. Calculating the Probability Within a Range (02:39)
We can find the probability between two values by finding the probabilities to the left of the individual values, then finding the difference between these probabilities. For example, to find the probability of observing a value between 200 and 280, we find the probability of observing a value less than 200, and the probability of observing a value less than 280, using the same technique as before. We then find the difference between these probabilities, which represents the probability of observing a value between 200 and 280.

5. Finding the Value for a Probability (06:05)
Using the normal distribution, we can also find the value from the distribution which has a certain probability above and below it. For example, we aim to find the test score with a probability of 0.9 of being below it. To do this, we find the appropriate Z-score by locating the value of 0.9 in the Z-table and reading the corresponding Z-score. We then use the Z-score formula to identify the value from the normal distribution that corresponds to this Z-score.

Transcript

In this course, we'll learn about the statistical technique of hypothesis testing. We'll learn about the basic principles of hypothesis testing, and we'll see how to conduct a wide variety of different hypothesis tests. Our goal in this lesson is to calculate probabilities using the normal distribution. We'll consider the example of a school test taken by a very large number of students each year. Because such a large number of students take the test, we believe the scores follow a normal distribution. The maximum score on the test is 400 points. We've analyzed data for all the students that took the test and found that the average score is 240 and the standard deviation is 50. We want to use this information to solve various probability problems. We'll first consider the score of a randomly selected student, which we'll represent as the random variable X.
This student hopes to score 300 points or more. We can use the normal distribution to find the probability of this score. This will be equivalent to the area under the normal distribution to the right of 300. To find this area, we'll find the Z-score for an observation of 300 points. We'll then use a standard normal table, to find the probability of observing a value less than the Z-score, and subtract this value from one, to find the probability of observing a higher value than the Z-score. Let's start by calculating the Z-score for a test score of 300. We'll subtract the mean of 240 from 300 and divide by the standard deviation of 50 to get a Z-score of 1.2. If we look at a standard normal distribution, our new objective is to find the area to the left of 1.2 on this distribution. This will be the same as the area to the left of 300 on the previous normal distribution. To find this area, we have two options. We can use statistical software like Excel, or we can look up a standard normal table. Using software is generally easier, but in this lesson, we'll demonstrate how to use a table. It's worth noting there are two types of standard normal table. The first type gives the area to the left of any value of interest, while the second gives the area between zero and the value of interest. In this lesson, we'll consider the first type of table as it's generally easier to use. Let's now look up the standard normal table. This table can easily be found online, and you'll find a link to the table we're using in the summary below this lesson. We can see there are two tables, one for negative Z-scores, and one for positive Z-scores. Each of these tables shows the area to the left of the relevant Z-score in a standard normal distribution. For example, we want to find the area for a Z-score of 1.2 or 1.20. We'll go to the positive Z-score table, find the row for 1.2 and the column for .00. At this intersection, we find a value of 0.8849. 
This tells us that the area to the left of a Z-score of 1.2 in a standard normal distribution is 0.8849. If we subtract this from one, we find the area above this point is 0.1151. If we return to our original normal distribution, we can also say that the area to the left of 300 is 0.8849, and the area to the right is 0.1151. This tells us that the probability of a randomly selected student scoring below 300 is 0.8849, and the probability of this student scoring above 300 is 0.1151. Let's look at a second problem. In this case, we want to identify the probability that a randomly selected student scores between 200 and 280 points. To do this, we'll find the area to the left of 280 points and then subtract the area to the left of 200 points. Let's find the Z-scores for test scores of 200 and 280. Using the same formula as before, we find the Z-score for 200 is negative 0.8, and the Z-score for 280 is positive 0.8. Now let's return to our standard normal table and find the areas for these two Z-scores. For a Z-score of negative 0.8, we go to the negative Z-score table, find the row for negative 0.8 and the column for .00, which tells us the area is 0.2119. Next we'll go to the positive Z-score table, find the row for 0.8 and the column for .00, which tells us the area is 0.7881. We can now see that the probability of a random student scoring less than 280 is 0.7881, and the probability they will score less than 200 is 0.2119. If we subtract 0.2119 from 0.7881, we find that the probability of scoring between 200 and 280 is 0.5762. Finally, let's look at a different type of problem. Let's imagine a particular college wants to accept only students that score in the top 10%, and want to know what score they should set as their entry requirement. In effect, we want to find the test score where the probability of being above it is 0.1, and the probability of being below it is 0.9. 
Instead of looking in the row and column headings, we'll search in the body of the table for a probability of 0.9. The closest value we can find is 0.8997. This is in the 1.2 row and the .08 column, meaning it corresponds to a Z-score of 1.28. This tells us that the area to the left of 1.28 on a standard normal distribution is approximately 0.9. Next, we need to find what value this equates to on a distribution of test scores. We can do this using the Z-score formula we've used before. As you can see, here we know the value of Z, and we want to find the value of X. If we rearrange the formula and complete the math, we find the value for X is 304. This tells us that the probability of a random student scoring less than 304 is 0.9, and the probability of a student scoring more than 304 is 0.1. This completes our look at calculating probabilities from the normal distribution. As we've seen, we can generate a lot of insights from a normal distribution simply by knowing the mean and standard deviation and having access to a standard normal table. In the rest of this course, we'll apply some of the skills we've acquired in this lesson to a variety of different hypothesis tests. In the next lesson, we'll start by learning how to set up a hypothesis test.
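The three answers worked through in the transcript can be reproduced without a Z-table. A sketch using Python's standard library (`statistics.NormalDist`), which also avoids the table's rounding:

```python
from statistics import NormalDist

# Test scores: mean 240, standard deviation 50
scores = NormalDist(mu=240, sigma=50)

# P(X > 300): area to the right of 300
p_above = 1 - scores.cdf(300)                  # ~0.1151

# P(200 < X < 280): difference of the two left-tail areas
p_between = scores.cdf(280) - scores.cdf(200)  # ~0.5763

# Score with 90% of students below it (top-10% cutoff)
cutoff = scores.inv_cdf(0.9)                   # ~304.1
```

The small differences from the lesson's figures (e.g. 0.5763 vs 0.5762) come from rounding Z-scores to two decimal places when using the table.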
# System of homogeneous equations

Tags:

1. Mar 17, 2015

I've got three equations:
l - cm - bn = 0
-cl + m - an = 0
-bl - am + n = 0
In my textbook, it's written "eliminating l, m, n we get:"
$$\begin{vmatrix} 1& -c& -b\\ -c& 1& -a\\ -b& -a& 1\\ \end{vmatrix}=0$$
but if I take l, m, n as variables, then since $l=\frac{\Delta_1}{\Delta}$ (Cramer's rule) and $\Delta_1=0$, if $\Delta=0$ you get an indeterminate form. Is the expression given in my textbook correct?

2. Mar 17, 2015

### SteamKing Staff Emeritus

It's not clear what is going on in your textbook. To me, it looks like l, m, and n are the unknown variables for this system. If elimination were correctly carried out on the matrix of coefficients, then you would be left with only 1's on the main diagonal and only zeros to the lower left of the main diagonal. In any event, the solution of a system of homogeneous linear equations requires special consideration. If the determinant of the matrix of coefficients is not equal to zero, then l = m = n = 0 is the only solution to the system. If the determinant of the matrix of coefficients equals zero, there is an infinite number of solutions. http://en.wikipedia.org/wiki/System_of_linear_equations

3. Mar 17, 2015

"If the planes x=cy+bz, y=az+cx and z=bx+ay pass through a line, then what is the value of $a^2+b^2+c^2+2abc$?"
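Expanding the determinant also answers the follow-up question: the three planes share a line exactly when the homogeneous system has a nontrivial solution, i.e. when the determinant vanishes. A quick symbolic check with SymPy:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[ 1, -c, -b],
               [-c,  1, -a],
               [-b, -a,  1]])

det = sp.expand(M.det())
# det == 1 - a**2 - b**2 - c**2 - 2*a*b*c,
# so setting det = 0 gives a**2 + b**2 + c**2 + 2*a*b*c = 1.
```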
# Experimental determination of the energy difference between competing isomers of deposited, size-selected gold nanoclusters

## Abstract

The equilibrium structures and dynamics of a nanoscale system are regulated by a complex potential energy surface (PES). This is a key target of theoretical calculations but experimentally elusive. We report the measurement of a key PES parameter for a model nanosystem: size-selected Au nanoclusters, soft-landed on amorphous silicon nitride supports. We obtain the energy difference between the most abundant structural isomers of magic number Au561 clusters, the decahedron and face-centred-cubic (fcc) structures, from the equilibrium proportions of the isomers. These are measured by atomic-resolution scanning transmission electron microscopy, with an ultra-stable heating stage, as a function of temperature (125–500 °C). At lower temperatures (20–125 °C) the behaviour is kinetic, exhibiting down conversion of metastable decahedra into fcc structures; the higher state is repopulated at higher temperatures in equilibrium. We find the decahedron is 0.040 ± 0.020 eV higher in energy than the fcc isomer, providing a benchmark for the theoretical treatment of nanoparticles.

## Introduction

The structure and dynamics of a nanosystem are controlled by the multi-dimensional potential energy surface (PES), which describes its free energy as a function of configuration. There have been considerable theoretical efforts to determine the ground-state structures and energy differences between competing isomers of nanosystems in general1,2,3 and of nanoclusters in particular4,5,6,7,8. Gold clusters have received much theoretical attention due to the role of structure in the catalytic performance9. What is needed now is an experimental handle on key parameters of the PES.
Understanding the energy difference between structural isomers is important not only for the design of well-defined materials but also for understanding how these materials will work in situ. For example, if a particular structural isomer is unstable, exposure to high temperatures is likely to drive it towards the ground state (i.e. annealing), altering (for better or worse) the characteristics of the system. Such behaviour is likely to be relevant to the applications of nanoparticles, which include catalysis10,11, drug delivery12,13 and chemical sensing14. Experimentally the atomic structure of nanoclusters can be determined, to various degrees, by a number of techniques including trapped ion electron diffraction15, X-ray scattering16, transmission electron microscopy (TEM) tilt series17 and high-angle annular dark-field (HAADF) aberration-corrected scanning transmission electron microscopy (ac-STEM)18. However, the cluster formation conditions can easily lead to the trapping of higher lying isomers, and the populations of cluster isomers observed do not represent thermal equilibrium19. Previous STEM studies have provided some qualitative insight into the PES of clusters through e-beam transformation experiments. By continual imaging during intense irradiation, Au561 (ref. 20) and Au923 (ref. 19) clusters (on carbon) have been shown to transform one-way to lower energy structures19, while smaller clusters fluctuate continually21,22,23. Such experiments enable candidate low energy structures to be identified. However, these experiments do not provide the quantitative energy difference between isomers. Ex situ annealing experiments by Koga et al.24 found that annealing of small and medium sized (<14 nm) Au clusters below the melting point (<1273 K) resulted in structural transformations, but no quantitative measure of energy differences or barrier heights could be made.
Here we employ a precision heating stage in ac-STEM to determine in situ the proportion of structural isomers for size-selected Au561 clusters, deposited on amorphous silicon nitride, over a range of temperatures. This enables the energy difference between competing fcc and Dh isomers in the equilibrium region to be extracted for Au561 on the surface. We identify two regimes: a low-temperature regime in which metastable (kinetically trapped) Dh clusters transform to fcc, and a high-temperature regime in which the Dh isomer is repopulated (Boltzmann statistics); here the system is in thermal equilibrium. From the equilibrium, high-temperature region data, we find that the Dh and fcc isomers are very close in energy, with the Dh lying only 0.040 ± 0.020 eV higher than the fcc.

## Results

### Electron microscope images

Figure 1 shows examples of HAADF STEM images of Au561 clusters on amorphous silicon nitride and corresponding multi-slice simulations from a simulation atlas19. Figure 1a and b were recorded at 20 °C. Figure 1a shows an fcc cluster and Fig. 1b shows a decahedral cluster. Figure 1c and d were recorded at 500 °C. Figure 1c shows an fcc cluster and Fig. 1d an on-axis decahedron. Both decahedra in Fig. 1b and d show some Marks reentrant features. In comparison of experimental and simulated images, we concentrate on the core atomic structure because this is where the signal-to-noise levels are the highest, so that we can compare them with simulations of perfect cuboctahedra and Ino-decahedra. HAADF STEM images matched to the cuboctahedron simulations are denoted face-centred-cubic (fcc), which allows for variation in the exact surface truncation; similarly images matched to the Ino-decahedron are denoted decahedra (Dh). Clusters that display ‘ring-dot’ features in the images, a characteristic of an icosahedron, are denoted simply as icosahedra (Ih).
### Proportions of different isomers Figure 2a is a plot of the proportions of structural isomers, extracted from the fits to the experimental data, for Au561 clusters on amorphous silicon nitride at temperatures ranging from 20 °C to 500 °C. The same sample was used for all measurements so that formation conditions would not affect the results25. Cluster structures are identified as either fcc, Dh, Ih or unidentified/amorphous (UI/A). The error bars on the proportions of structural isomers are statistical counting errors and the error on the temperature is 5%, due to the heating chip calibration. At all temperatures investigated the most abundant isomer is fcc, followed by Dh, while Ih has a very low abundance (0–3%). We find that the clusters still provide a good match with the simulated structures at high temperature and there is no evidence of melting in the temperature range explored here, as can be seen from Fig. 1. The percentage of unidentified or unknown (UI/A) structures—clusters that are amorphous or cannot be identified using simulation atlases for the Ino-decahedron, cuboctahedron or icosahedron—is fairly constant across the temperature range. One explanation for such images is that only single-shot data was taken (to minimise the electron dose), and clusters often rotate during scanning. Figure 2b shows a plot of the ratio of the two most abundant ordered isomers, Dh and fcc, versus temperature. Two distinct temperature regimes are clearly visible. Between 20 °C and 125 °C the Dh:fcc ratio decreases from 0.81 to 0.24, whereas between 125 °C and 500 °C the Dh:fcc ratio increases from 0.24 to 0.45. The underlying and associated errors are derived from Fig. 2a. Between 20 °C and 150 °C the increase in temperature results in an increase in the abundance of the fcc isomer, but at temperatures ≥150 °C the proportion of fcc gradually decreases again. 
Complementary to this, between 20 °C and 125 °C, the proportion of Dh decreases, whereas at temperatures ≥125 °C there is a slight increase in Dh as temperature rises. The increase in the proportion of fcc clusters from 20 °C to 125 °C, and the corresponding decrease in the proportion of Dh, can be explained in terms of the release of trapped metastable Dh structures to a lower free energy fcc structure. We previously reported that Au561 clusters undergo a one-way transition from Dh to fcc when continuously exposed to the STEM electron beam at very high magnification20, which corresponds to moderate heating of the sample. However, the behaviour we observe takes on a new character above 125 °C, with the ratio of Dh to fcc increasing again. This repopulation behaviour can be understood if the fcc structure is a lower free-energy structure than the Dh. Then, beyond the release of kinetically trapped Dh clusters by annealing at temperatures from 20 °C to 125 °C, we may expect that an equilibrium distribution of isomers will be established at higher temperatures. A proportion of the clusters (based on Boltzmann statistics) will be excited from the fcc to the higher energy Dh structure26. In fact, if we assume equilibrium between isomers of energy $E_{\mathrm{Dh}}$ and $E_{\mathrm{fcc}}$, we obtain the ratio between the probabilities $p_{\mathrm{Dh}}$ and $p_{\mathrm{fcc}}$ of the corresponding structures (see Supplementary Note 1 for a derivation of this formula):

$$\ln \left( p_{\mathrm{Dh}} / p_{\mathrm{fcc}} \right) = \beta \left( E_{\mathrm{fcc}} - E_{\mathrm{Dh}} \right) + c \qquad (1)$$

where $\beta = (k_{\mathrm{B}} T)^{-1}$. In this system the Ih must have much higher energy, as we do not see repopulation of this isomer even at 500 °C; this is in agreement with experimental observations of Ih Au923 clusters under the electron beam, which transformed to Dh or fcc structures after very short exposure times19.
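Equation (1) can be put to work directly. Below is a minimal sketch (not the authors' analysis code) that extracts the energy difference from the slope of ln(pDh/pfcc) versus 1/T, using only the two endpoint Dh:fcc ratios quoted in the text (0.24 at 125 °C, 0.45 at 500 °C) rather than the full weighted fit over all points:

```python
import math

KB = 8.617333e-5  # Boltzmann constant in eV/K

# Endpoint Dh:fcc ratios quoted in the text for the equilibrium regime;
# the published value comes from a weighted fit over all points in
# 398-773 K, so this two-point estimate is necessarily cruder.
T1, r1 = 398.0, 0.24   # 125 C
T2, r2 = 773.0, 0.45   # 500 C

# ln(p_Dh/p_fcc) = beta*(E_fcc - E_Dh) + c, with beta = 1/(kB*T),
# so the slope of ln(ratio) against 1/T is -(E_Dh - E_fcc)/kB.
slope = (math.log(r1) - math.log(r2)) / (1.0 / T1 - 1.0 / T2)
delta_E = -slope * KB  # E_Dh - E_fcc, in eV

print(f"slope = {slope:.0f} K, dE = {delta_E:.3f} eV")
```

This two-point estimate (slope ≈ −516 K, ΔE ≈ 0.044 eV) is consistent with the weighted fit reported in the text (−510 ± 240 K, 0.040 ± 0.020 eV).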
If the increase in the proportion of Dh clusters in the high-temperature region is a result of thermal repopulation of this excited state, the energy difference between the Dh and fcc structural isomers can be derived, as we show below. A second hypothetical explanation for the change in ratio is that, as the temperature of the clusters increases, atoms are lost through sublimation, resulting in a smaller cluster size at higher temperatures, where the decahedron might in principle be more stable. However, based on analysis of the diameters of the clusters at 500 °C (Fig. 1), we are confident that no major loss of atoms has occurred. Figure 3 shows a plot of the natural log of the ratio of the Dh and fcc abundances as a function of the reciprocal of the absolute temperature. From Eq. (1), the slope of the line in the higher temperature equilibrium regime gives the energy difference between the local minima of the two competing isomers, whereas the intercept gives the entropy difference (see Supplementary Note 1 for a detailed explanation). This does not apply to the low-temperature, kinetic regime. The dashed line shows a weighted linear least squares fit to the high-temperature region (398–773 K) of the plot. The gradient of this line is −510 ± 240 K, which corresponds to a value of 0.040 ± 0.020 eV ($E = k_{\mathrm{B}}T$) for the energy difference between Dh and the lower lying fcc isomers ($\Delta E_{\mathrm{Dh}-\mathrm{fcc}}$). The intercept c = −0.2 ± 0.4 is the entropy difference in units of $k_{\mathrm{B}}$ (Supplementary Note 1), which indicates a negligible entropy difference between these structures.

## Discussion

There are two key assumptions that underpin these new results. First, the partition function for each isomer is given by the harmonic superposition approximation, in which the vibrational frequencies are assumed to be harmonic and independent of temperature. In many cases this approximation has been shown to be valid for temperatures below the melting point8.
If the vibrational frequencies are anharmonic, there would be a temperature dependence8, possibly resulting in non-linearity in the plot of ln(Dh/fcc) versus 1/T. Second, we have assumed that for each basin (Dh-basin, fcc-basin) in the PES of the cluster, there is only a contribution from one structural isomer. In the experiment we have a small range of cluster sizes (determined by the mass filter resolution), and within the classification of Dh or fcc there may be different truncations and arrangements of atoms on the surface that are not easily distinguished by our simulation atlas method. If whole ‘families’ of Dh and fcc isomers are being observed experimentally, the energy difference determined may represent a sort of weighted average of the energy differences between the Dh and fcc clusters in the families. However, in this case there would not be any compelling reason for obtaining the linear increase shown in Fig. 3 for high temperatures. The derived energy difference of 0.04 eV between the Dh and fcc isomers is very small (corresponding to only ≈510 K), and means that at the cluster size investigated, 561 ± 14, these isomers compete very closely. Of course it is the closeness in energy that makes the energy difference nicely measurable in our experiment, which probes a temperature range of 375 °C (125–500 °C). Regarding the derived energy difference, it would be appealing at this point to produce a theoretical calculation that predicts an energy offset comparable with the experimental value. But the truth is that no calculations currently offer accuracy at the level of tens of meV for hundreds of atoms! However, the result is in broad qualitative agreement with several27,28,29 theoretical calculations that predict Dh and fcc isomers competing in energy at this size range.
The original molecular dynamics simulations for Au clusters by Baletto et al.27 used the second-moment tight-binding potential for a detailed study and EAM potentials to determine general trends in the energetics of icosahedra, decahedra and truncated octahedral clusters. They found a crossover size from Dh to fcc at 500 atoms, above which the Dh and fcc isomers remain close in energy, whilst the Ih is not favoured above 100 atoms. More recently, Wang et al.28 also found that fcc is the lowest energy structure for clusters with more than 500 atoms; between 500 and 2000 atoms the truncated octahedron was their lowest energy structure followed by octahedron, truncated decahedron and Ih. In this case calculations were performed using Ino’s theory with parameters from the Sutton-Chen potential. DFT calculations performed by Li et al.29 showed that for Au561, the order of stability was fcc, Dh and Ih. In contrast to these results, Barnard et al.30 reported that, based on a thermodynamic model, the Ih was the most stable structure at room temperature, while at temperatures comparable to our equilibrium region the Dh was the most stable structure. In both cases fcc was the lowest free energy structure only for cluster sizes >15 nm. In a global optimisation study (using the RGL potential) by Goedecker et al.31, it was found that a truncated octahedral Au cluster with 201 atoms was only 0.007 eV higher in energy than a 192 atom Marks decahedral cluster. These very small differences in energy between Dh and fcc isomers are broadly consistent with our experimental observations. Given the small energy difference obtained between the two principal isomers, 40 meV, the influence of the substrate needs to be considered. As described in Note 2 of the Supplementary Information, we have conducted an experimental investigation of the same isomers of Au561 but this time on an amorphous carbon support.
The behaviour observed is similar: an annealing regime followed by an equilibrium regime; the fcc structure has the lowest energy; the Dh is, in this case, found to lie 20 meV higher in energy. We conclude that the method reported has general applicability to different systems and that the change of support does not markedly alter the relative energies. Another question is: does the surface switch the relative stability of the two isomers compared with the free clusters? Without any experimental data from the gas phase one cannot be sure, but we have conducted a theoretical treatment of the substrate effect (Supplementary Note 3). This shows that a model of the carbon surface has a tendency to stabilise the fcc isomer more than it does the Dh isomer, which relates to the facet sizes in contact with the support. Thus it is possible that the favoured configuration of a free cluster could switch on the surface from Dh to fcc. In summary, we have demonstrated a method to obtain experimentally a critical parameter in the PES of a model nanosystem. Specifically, we have reported the proportions of competing structural isomers as a function of temperature in a population of model size-selected Au561 clusters, soft-landed on amorphous silicon nitride. The approach employs atomic-resolution imaging with an ultrastable heating stage in the aberration-corrected STEM. Two distinct kinds of behaviour have been identified. In the low-temperature region, from 20 to 125 °C, there is a decrease in the Dh:fcc ratio, attributed to the transformation of kinetically trapped metastable Dh into lower energy fcc structures. In the higher temperature region, from 125 to 500 °C, the Dh:fcc ratio increases; the Dh isomer is repopulated because the system is in equilibrium. The measured equilibrium populations enable us to determine the energy difference between the two isomers. We find that the Dh isomer is 0.040 ± 0.020 eV higher in energy than the fcc for Au561±14. 
Ultimately, such quantitative parameters of the PES allow for a direct comparison with, and benchmark of, theoretical treatments and thus a new insight into the equilibrium structures and dynamics of nano-systems.

## Methods

### Cluster deposition

Au clusters consisting of 561 ± 14 atoms were produced with a magnetron sputtering gas aggregation cluster beam source32, incorporating a lateral time-of-flight mass filter (M/ΔM = 20)33. The clusters were deposited onto the amorphous silicon nitride films of the heating chips in the soft-landing regime (<2 eV/atom)34 to preserve their original atomic structure.

### Electron microscopy

A JEOL 2100F STEM with spherical aberration probe corrector (CEOS) was employed for atomic-resolution imaging of the nanoclusters. The convergence angle was 19 mrad, and the inner and outer HAADF detector collection angles were 62 mrad and 164 mrad respectively. In situ heating was performed using a heating holder with MEMS-based heating chips (DENS Solutions). The chips consist of a metal heater coil embedded in silicon nitride, surrounded by imaging windows of amorphous silicon nitride. A current is applied to the metal coil to heat the chips; the temperature is determined from the resistance measured in situ using the four-point probe method, with chip calibration performed by the supplier. The error on the temperature measurement is 5% (a potential systematic error) and the temperature stability <1 °C. Experiments were conducted by setting the temperature to a chosen value and taking single-shot HAADF STEM atomic-resolution images of a population of clusters. The temperature was increased incrementally (from 20 °C to 500 °C) and at each temperature ≥100 clusters were imaged.
The atomic structures of the individual clusters were then identified by comparison with multi-slice electron scattering simulations of the (unsupported) cuboctahedron, Ino-decahedron and icosahedron isomers at different orientations (polar and azimuthal) using the QSTEM package and the simulation atlas method19.

### Data availability

All data is available from the authors upon reasonable request.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Topsakal, M., Aktürk, E. & Ciraci, S. First principles study of two- and one-dimensional honeycomb structures of boron nitride. Phys. Rev. B 79, 115442 (2009).
2. Szwacki, N. G., Sadrzadeh, A. & Yakobson, B. I. B80 fullerene: an ab initio prediction of geometry, stability and electronic structure. Phys. Rev. Lett. 98, 166804 (2007).
3. Li, C., Guo, W., Kong, Y. & Gao, H. First-principles study of the dependence of ground-state structural properties on the dimensionality and size of ZnO nanostructures. Phys. Rev. B 76, 035322 (2007).
4. Wu, X., Wei, Z., Sun, Y., Feng, Y. L. & Liu, Q. M. Influence of the potential model parameters on the structures and potential energy surface of cobalt clusters. Chem. Phys. Lett. 660, 11–17 (2016).
5. Piotrowski, M. J., Piquini, P. & Da Silva, J. L. F. Density functional theory investigation of 3d, 4d, and 5d 13-atom metal clusters. Phys. Rev. B 81, 155446 (2010).
6. Gruner, M. E., Rollmann, G., Entel, P. & Farle, M. Multiply twinned morphologies of FePt and CoPt nanoparticles. Phys. Rev. Lett. 100, 087203 (2008).
7. Kumar, V. & Kawazoe, Y. Evolution of atomic and electronic structure of Pt clusters: planar, layered, pyramidal, cage, cubic, and octahedral growth. Phys. Rev. B 77, 205418 (2008).
8. Baletto, F. & Ferrando, R. Structural properties of nanoclusters: energetic, thermodynamic, and kinetic effects. Rev. Mod. Phys. 77, 371 (2005).
9. Campbell, C. T.
The active site in nanoparticle gold catalysis. Science 306, 234–235 (2004).
10. Hernandez-Fernandez, P. et al. Mass-selected nanoparticles of PtxY as model catalysts for oxygen electroreduction. Nat. Chem. 6, 732–738 (2014).
11. Ellis, P. R. et al. The cluster beam route to model catalysts and beyond. Faraday Discuss. 188, 39–56 (2016).
12. Cully, M. Drug delivery: nanoparticles improve profile of molecularly targeted cancer drug. Nat. Rev. Drug Discov. 15, 231 (2016).
13. Agasti, S. S. et al. Photoregulated release of caged anticancer drugs from gold nanoparticles. J. Am. Chem. Soc. 131, 5728–5729 (2009).
14. Saha, K., Agasti, S., Kim, C., Li, X. & Rotello, V. M. Gold nanoparticles in chemical and biological sensing. Chem. Rev. 112, 2739–2779 (2012).
15. Wiesel, A. et al. Structures of medium sized tin cluster anions. Phys. Chem. Chem. Phys. 14, 234–245 (2012).
16. Barke, I. et al. The 3D-architecture of individual free silver nanoparticles captured by X-ray scattering. Nat. Commun. 6, 6187 (2015).
17. Koga, K. & Sugawara, K. Population statistics of gold nanoparticle morphologies: direct determination by HREM observations. Surf. Sci. 529, 23–35 (2003).
18. Li, Z. Y. et al. Three-dimensional atomic-scale structure of size-selected gold nanoclusters. Nature 451, 46–48 (2008).
19. Wang, Z. W. & Palmer, R. E. Determination of the ground-state atomic structures of size-selected Au nanoclusters by electron-beam-induced transformation. Phys. Rev. Lett. 108, 245502 (2012).
20. Wells, D., Rossi, G., Ferrando, R. & Palmer, R. E. Metastability of the atomic structures of size-selected gold nanoparticles. Nanoscale 7, 6498–6503 (2015).
21. Wang, Z. W. & Palmer, R. E. Experimental evidence for fluctuating, chiral-type Au55 clusters by direct atomic imaging. Nano Lett. 12, 5510–5514 (2012).
22. Wang, Z. W. & Palmer, R. E. Direct atomic imaging and dynamical fluctuations of the tetrahedral Au20 cluster.
Nanoscale 4, 4947–4949 (2012).
23. Ajayan, P. M. & Marks, L. D. Experimental evidence for quasimelting in small particles. Phys. Rev. Lett. 63, 279–282 (1989).
24. Koga, K., Ikeshoji, T. & Sugawara, K. Size- and temperature-dependent structural transitions in gold nanoparticles. Phys. Rev. Lett. 92, 115507 (2004).
25. Plant, S. R., Cao, L. & Palmer, R. E. Atomic structure control of size-selected gold nanoclusters during formation. J. Am. Chem. Soc. 136, 7559–7562 (2014).
26. Weis, P., Bierweiler, T., Vollmer, E. & Kappes, M. M. Au9+: rapid isomerization reactions at 140 K. J. Chem. Phys. 117, 9293–9297 (2002).
27. Baletto, F., Ferrando, R., Fortunelli, A., Montalenti, F. & Mottet, C. Crossover among structural motifs in transition and noble-metal clusters. J. Chem. Phys. 116, 3856–3863 (2002).
28. Wang, B., Liu, M., Wang, Y. & Chen, X. Structures and energetics of silver and gold nanoparticles. J. Phys. Chem. C 115, 11374–11381 (2011).
29. Li, H. et al. Magic number gold nanoclusters with diameters from 1 to 3.5 nm: relative stability and catalytic activity for CO oxidation. Nano Lett. 15, 682–688 (2015).
30. Barnard, A. S., Young, N. P., Kirkland, A. I., van Huis, M. A. & Xu, H. Nanogold: a quantitative phase map. ACS Nano 3, 1431–1436 (2009).
31. Bao, K., Goedecker, S., Koga, K., Lancon, F. & Neelov, A. Structure of large gold clusters obtained by global optimization using the minima hopping method. Phys. Rev. B 79, 041405(R) (2009).
32. Pratontep, S., Carroll, S. J., Xirouchaki, C., Streun, M. & Palmer, R. E. Size-selected cluster beam source based on radio frequency magnetron plasma sputtering and gas condensation. Rev. Sci. Instrum. 76, 045103 (2005).
33. Von Issendorff, B. & Palmer, R. E. A new high transmission infinite range mass selector for cluster and nanoparticle beams. Rev. Sci. Instrum. 70, 4497 (1999).
34. Di Vece, M., Palomba, S. & Palmer, R. E.
Pinning of size-selected gold and nickel nanoclusters on graphite. Phys. Rev. B 72, 073407 (2005).

## Acknowledgements

We thank EPSRC for financial support of the experiments. D.M.F. is grateful for financial support from the EU project NanoMILE.

## Author information

### Affiliations

1. Nanoscale Physics Research Laboratory, School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK: D. M. Foster
2. Chemistry and Industrial Chemistry Department, University of Genoa, Via Dodecaneso 31, 16146, Genoa, Italy: R. Ferrando
3. College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, SA1 8EN, UK: R. E. Palmer

### Contributions

D.M.F. conducted the experiments and analysed the data with R.F.; R.E.P. initiated the experiments and supervised the work. The authors wrote the paper together.

### Competing interests

The authors declare no competing interests.

### Corresponding author

Correspondence to R. E. Palmer.
https://workforce.libretexts.org/Bookshelves/Electronics_Technology/Circuitry/Book%3A_I_Direct_Current_(DC)/4%3A_Scientific_Notation_And_Metric_Prefixes/4.2%3A_Arithmetic_with_Scientific_Notation
# 4.2: Arithmetic with Scientific Notation

The benefits of scientific notation do not end with ease of writing and expression of accuracy. Such notation also lends itself well to mathematical problems of multiplication and division. Let’s say we wanted to know how many electrons would flow past a point in a circuit carrying 1 amp of electric current in 25 seconds. If we know the number of electrons per second in the circuit (which we do), then all we need to do is multiply that quantity by the number of seconds (25) to arrive at an answer of total electrons:

(6,250,000,000,000,000,000 electrons per second) x (25 seconds) = 156,250,000,000,000,000,000 electrons passing by in 25 seconds

Using scientific notation, we can write the problem like this:

(6.25 x 10^18 electrons per second) x (25 seconds)

If we take the “6.25” and multiply it by 25, we get 156.25. So, the answer could be written as:

156.25 x 10^18 electrons

However, if we want to hold to standard convention for scientific notation, we must represent the significant digits as a number between 1 and 10. In this case, we’d say “1.5625” multiplied by some power-of-ten. To obtain 1.5625 from 156.25, we have to shift the decimal point two places to the left. To compensate for this without changing the value of the number, we have to raise our power by two notches (10 to the 20th power instead of 10 to the 18th):

1.5625 x 10^20 electrons

What if we wanted to see how many electrons would pass by in 3,600 seconds (1 hour)? To make our job easier, we could put the time in scientific notation as well:

(6.25 x 10^18 electrons per second) x (3.6 x 10^3 seconds)

To multiply, we must take the two significant sets of digits (6.25 and 3.6) and multiply them together; and we need to take the two powers-of-ten and multiply them together. Taking 6.25 times 3.6, we get 22.5. Taking 10^18 times 10^3, we get 10^21 (exponents with common base numbers add). So, the answer is:

22.5 x 10^21 electrons

. . . or more properly . . .
2.25 x 10^22 electrons

To illustrate how division works with scientific notation, we could figure that last problem “backwards” to find out how long it would take for that many electrons to pass by at a current of 1 amp:

(2.25 x 10^22 electrons) / (6.25 x 10^18 electrons per second)

Just as in multiplication, we can handle the significant digits and powers-of-ten in separate steps (remember that you subtract the exponents of divided powers-of-ten):

(2.25 / 6.25) x (10^22 / 10^18)

And the answer is: 0.36 x 10^4, or 3.6 x 10^3, seconds. You can see that we arrived at the same quantity of time (3600 seconds). Now, you may be wondering what the point of all this is when we have electronic calculators that can handle the math automatically. Well, back in the days of scientists and engineers using “slide rule” analog computers, these techniques were indispensable. The “hard” arithmetic (dealing with the significant digit figures) would be performed with the slide rule while the powers-of-ten could be figured without any help at all, being nothing more than simple addition and subtraction.

REVIEW:

• Significant digits are representative of the real-world accuracy of a number.
• Scientific notation is a “shorthand” method to represent very large and very small numbers in easily-handled form.
• When multiplying two numbers in scientific notation, you can multiply the two significant digit figures and arrive at a power-of-ten by adding exponents.
• When dividing two numbers in scientific notation, you can divide the two significant digit figures and arrive at a power-of-ten by subtracting exponents.
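The same multiplications and division map directly onto floating-point (“e”) notation, which is how most programming languages write scientific notation; a quick Python check of the three calculations above:

```python
RATE = 6.25e18        # electrons per second at 1 amp

total_25s = RATE * 25          # 25 seconds' worth: 1.5625e+20 electrons
total_1h = RATE * 3.6e3        # one hour (3.6 x 10^3 s): 2.25e+22 electrons
seconds = total_1h / RATE      # dividing recovers the time: 3.6e+03 s

print(f"{total_25s:.4e} electrons in 25 s")
print(f"{total_1h:.2e} electrons in 1 h")
print(f"{seconds:.1e} s")
```

Note that the language normalises the result for you: `156.25e18` and `1.5625e20` are the same number, but the printed form always puts one digit before the decimal point, just like the standard convention described above.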
https://brilliant.org/problems/not-so-standard/
# Not so standard

Calculus Level 5

Evaluate: $\int \limits_0^\infty ( 2 + f(x) )(1- f(x) ) \frac{ \text{d}x}{x^2}$ where $$\displaystyle f(x) = \frac{ \sin x }{x}$$.

The value of the integral can be expressed as $$\displaystyle \frac{a \pi ^b }{c}$$. Given $$a$$ and $$c$$ are coprime, submit the value of $$a+ b+ c$$.

Details and Assumptions:

• $$\displaystyle \int \limits_0^\infty \big( f(x) \big)^2 \text{d}x = \frac{ \pi }{2}$$
• $$\displaystyle \int \limits_0^\infty \big( f(x) \big)^3 \text{d}x = \frac{3 \pi }{8}$$
• $$\displaystyle \int \limits_0^\infty \big( f(x) \big)^4 \text{d}x = \frac{ \pi }{3}$$
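Before submitting, a candidate closed form can be sanity-checked numerically. A stdlib-only sketch (my own, not part of the problem): composite Simpson's rule over successive periods of the integrand, an analytic ≈2/X tail (since the integrand behaves like 2/x² for large x), and a Taylor expansion near x = 0 to avoid catastrophic cancellation:

```python
import math

def integrand(x):
    # (2 + f)(1 - f)/x^2 = (2 - f - f^2)/x^2 with f = sin(x)/x.
    # Near 0 the direct form cancels catastrophically, so use the
    # Taylor expansion (2 - f - f^2)/x^2 = 1/2 - 19*x^2/360 + O(x^4).
    if x < 1e-3:
        return 0.5 - 19.0 * x * x / 360.0
    f = math.sin(x) / x
    return (2.0 + f) * (1.0 - f) / (x * x)

def simpson(g, a, b, n=200):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

# Integrate period by period out to X = 2000*pi; beyond X the integrand
# is ~ 2/x^2, so the tail contributes ~ 2/X (the f, f^2 pieces decay faster).
periods = 2000
X = periods * math.pi
total = sum(simpson(integrand, k * math.pi, (k + 1) * math.pi) for k in range(periods))
total += 2.0 / X

print(f"numeric value of the integral: {total:.6f}")
```

Comparing the printed value against candidates of the form aπᵇ/c then confirms (or rules out) a proposed answer without revealing the derivation.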
https://vcg.iwr.uni-heidelberg.de/publications/pubdetails/Ueffinger2012FTLEbeyond/
M. Üffinger, F. Sadlo, M. Kirby, C. Hansen, T. Ertl:

## FTLE Computation Beyond First-Order Approximation

In Short Paper Proceedings of Eurographics 2012, pp. 61–64, 2012.

### Abstract

We present a framework for different approaches to finite-time Lyapunov exponent (FTLE) computation for 2D vector fields, based on the advection of seeding circles. On the one hand it unifies the popular flow map approach with techniques based on the evaluation of distinguished trajectories, such as renormalization. On the other hand it allows for the exploration of their order of approximation (first-order approximation representing the flow map gradient). Using this framework, we derive a measure for nonlinearity of the flow map, which brings us to the definition of a new FTLE approach. We also show how the nonlinearity measure can be used as a criterion for flow map refinement for more accurate FTLE computation, and we demonstrate that ridge extraction in supersampled FTLE leads to superior ridge quality.
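For context, the "first-order" baseline the paper generalises is the standard flow-map-gradient FTLE. The sketch below is an illustrative reconstruction (not the authors' implementation): advect four auxiliary seeds, finite-difference the flow map to get its gradient, and take the largest Cauchy-Green eigenvalue. The linear saddle field v(x, y) = (x, −y) is chosen because its largest FTLE is exactly 1 for any T > 0, which makes the code checkable:

```python
import math

def rk4_advect(p, T, steps=200):
    """Advect point p through v(x, y) = (x, -y) with classic RK4."""
    def v(q):
        return (q[0], -q[1])
    h = T / steps
    x = list(p)
    for _ in range(steps):
        k1 = v(x)
        k2 = v((x[0] + 0.5 * h * k1[0], x[1] + 0.5 * h * k1[1]))
        k3 = v((x[0] + 0.5 * h * k2[0], x[1] + 0.5 * h * k2[1]))
        k4 = v((x[0] + h * k3[0], x[1] + h * k3[1]))
        x = [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in (0, 1)]
    return x

def ftle(p, T, eps=1e-4):
    # Central differences of the flow map: four auxiliary seeds around p.
    px1 = rk4_advect((p[0] + eps, p[1]), T)
    px0 = rk4_advect((p[0] - eps, p[1]), T)
    py1 = rk4_advect((p[0], p[1] + eps), T)
    py0 = rk4_advect((p[0], p[1] - eps), T)
    J = [[(px1[0] - px0[0]) / (2 * eps), (py1[0] - py0[0]) / (2 * eps)],
         [(px1[1] - px0[1]) / (2 * eps), (py1[1] - py0[1]) / (2 * eps)]]
    # Cauchy-Green tensor C = J^T J; largest eigenvalue via 2x2 closed form.
    a = J[0][0] ** 2 + J[1][0] ** 2
    b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    d = J[0][1] ** 2 + J[1][1] ** 2
    lam_max = 0.5 * (a + d + math.sqrt((a - d) ** 2 + 4 * b * b))
    return math.log(math.sqrt(lam_max)) / abs(T)

val = ftle((0.3, 0.2), T=2.0)
print(f"FTLE = {val:.6f}")
```

Because the saddle field is linear, the finite differences are exact regardless of eps; for nonlinear fields the truncation error of this first-order approximation is exactly what the paper's nonlinearity measure is designed to expose.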
https://socratic.org/questions/how-is-the-concentration-of-a-solution-measured
# How is the concentration of a solution measured?

Answer (Apr 19, 2015):

$C = \frac{n}{V}$

where $C$ is the concentration, $n$ is the number of moles of solute, and $V$ is the volume of solution.

First you need to know what is in the solution; I'll take HCl as an example. Say you have 5 g of HCl in 500 mL of solution. First find the number of moles: the mass in grams divided by the molar mass of HCl, which is 36.45 g/mol (1.00 g/mol for hydrogen plus 35.45 g/mol for chlorine). Then divide by the volume in litres, which is 0.5 L. The answer's unit will be mol/L, which is sometimes also written as just M (molar). You should get an answer of about 0.27 mol/L for this example.
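The arithmetic above is easy to script. A minimal sketch (the function name and the element masses written out here are for illustration):

```python
# Molarity: C = n / V, where n = mass / molar mass.
def molarity(mass_g, molar_mass_g_per_mol, volume_L):
    n = mass_g / molar_mass_g_per_mol   # moles of solute
    return n / volume_L                 # concentration in mol/L (a.k.a. M)

# 5 g of HCl (H: 1.00 g/mol + Cl: 35.45 g/mol) in 500 mL of solution
c = molarity(5.0, 36.45, 0.5)
print(round(c, 2))   # prints 0.27
```

The same function works for any solute once you know its molar mass.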
https://www.arxiv-vanity.com/papers/1501.03138/
# Subleading harmonic flows in hydrodynamic simulations of heavy ion collisions

Aleksas Mazeliauskas and Derek Teaney
Department of Physics and Astronomy, Stony Brook University, New York 11794, USA

###### Abstract

We perform a principal component analysis (PCA) of the triangular-flow correlation matrix $V_{3\Delta}(p_{T1},p_{T2})$ in event-by-event hydrodynamic simulations of Pb+Pb collisions at the Large Hadron Collider (LHC). The PCA procedure identifies two dominant contributions to the two-particle correlation function, which together capture 99.9% of the squared variance. We find that the subleading flow (which is the largest source of flow factorization breaking in hydrodynamics) is predominantly a response to the radial excitations of a third-order eccentricity. We present a systematic study of the hydrodynamic response to these radial excitations in 2+1D viscous hydrodynamics. Finally, we construct a good geometrical predictor for the orientation angle and magnitude of the leading and subleading flows using two Fourier modes of the initial geometry.

## I Introduction

Two-particle correlation measurements in ultrarelativistic heavy ion collisions provide an extraordinarily detailed test of the hydrodynamic description of heavy ion events. Indeed, the measured two-particle correlations exhibit elliptic, triangular, and higher harmonic flows, which can be used to constrain the transport properties of the quark gluon plasma (QGP) produced in heavy ion collisions Heinz and Snellings (2013); Luzum and Petersen (2014). In hydrodynamic simulations of heavy-ion events, fluctuations in the initial state are propagated by the expansion dynamics of the QGP, and this expansion ultimately induces fluctuations in the momentum spectra of the produced particles. Thus, measurements of the momentum space fluctuations (or correlations) constrain the properties of the QGP expansion and the initial state.
The purpose of the current paper is to classify and quantify the dominant momentum space fluctuations in (boost-invariant) event-by-event hydrodynamics, and then to optimally correlate these fluctuations in momentum space with specific fluctuations in the initial state geometry. The current paper is focused on triangular flow, since it is a strong signal and driven entirely by fluctuations Alver and Roland (2010). The corresponding studies of the other harmonics are postponed for future work.

Due to flow fluctuations the correlation matrix of event-by-event triangular flows, $V_{3\Delta}(p_{T1},p_{T2})$, in hydrodynamics does not factorize Gardim et al. (2013). Factorization breaking is quantified by the parameter $r(p_{T1},p_{T2})$,

$$r(p_{T1},p_{T2})\equiv\frac{V_{3\Delta}(p_{T1},p_{T2})}{\sqrt{V_{3\Delta}(p_{T1},p_{T1})\,V_{3\Delta}(p_{T2},p_{T2})}}, \qquad (1)$$

which must be less than unity when there are several statistically independent sources of triangular flow in the event sample Gardim et al. (2013). Factorization breaking has been studied in event-by-event hydrodynamics Gardim et al. (2013); Heinz et al. (2013); Kozlov et al. (2014) and compares reasonably to the measured data for appropriate parameters Kozlov et al. (2014). It is generally understood from these analyses that factorization breaking is caused by the hydrodynamic response to geometrical properties of the initial state that are poorly characterized by the coarse geometrical measure $\varepsilon_3$. For instance, in Ref. Kozlov et al. (2014) the $r(p_{T1},p_{T2})$ matrix was found to be sensitive to a parameter controlling the roughness of the initial state. In Ref. Heinz et al. (2013) it was suggested that a careful study of the $r(p_{T1},p_{T2})$ matrix and other observables could be used to test hydrodynamic predictions for the $p_T$ dependence of the event plane angle, which arises when multiple triangular flows are present in a single event.

The current paper clarifies the origin of factorization breaking by associating the largest nonfactorizable contribution to the triangular flow with the hydrodynamic response to the first radial excitation in the triangular geometry. First, in Sec.
II we use principal component analysis (PCA) of the harmonic spectrum to analyze the transverse momentum dependence of the third harmonic in boost invariant event-by-event hydrodynamics. PCA is a statistical technique that decomposes the flow correlation matrix into eigenvectors and eigenvalues Bhalerao et al. (2015). The procedure naturally identifies the most important contributions to flow fluctuations. Typically only two modes are needed to give an excellent description of the full covariance matrix to 0.1% accuracy. When there are only two significant eigenvectors (or triangular flow patterns), the $r(p_{T1},p_{T2})$ matrix can be expressed as Bhalerao et al. (2015)

$$r(p_{T1},p_{T2})\simeq 1-\frac{1}{2}\left(\frac{V_3^{(2)}(p_{T1})}{V_3^{(1)}(p_{T1})}-\frac{V_3^{(2)}(p_{T2})}{V_3^{(1)}(p_{T2})}\right)^{\!2}, \qquad (2)$$

where $V_3^{(1)}$ and $V_3^{(2)}$ are the first and second eigenvectors (as described in Sec. II, the eigenvectors are normalized to the eigenvalue, $\int dp_T\,\big(V_3^{(a)}(p_T)\big)^2=\lambda_a$, and we are assuming that $\lambda_1\gg\lambda_2$). The leading mode of the third harmonic is strongly correlated with the triangular event plane Alver and Roland (2010), and thus is essentially equivalent to familiar measurements of $v_3(p_T)$ with the scalar product or event plane method. However, the subleading mode is uncorrelated with the leading event plane, and is therefore projected out in most measurements of harmonic flow.

Section III studies the basic properties of the subleading triangular flow, such as its dependence on centrality and viscosity. In Sec. IV.1 we show that the subleading triangular flow arises (predominantly) from the radial excitation of the triangular geometry. To reach this conclusion we first directly calculate the average geometry in the event plane of the leading and subleading flows. This averaged geometry (as explained in Sec. IV.1) is shown in Fig. 4 and exhibits a familiar triangular shape for the leading flow and a triangular shape with a radial excitation for the subleading flow.
Having identified the physical origin of the subleading flow, we introduce several geometric predictors which (with various degrees of accuracy) quantitatively predict the magnitude and orientation of the subleading flow in event-by-event hydrodynamics based on the initial data, in much the way that $\varepsilon_3$ predicts the orientation and magnitude of the leading $v_3$. As a first step, in Sec. IV.2 we correlate the principal momentum space fluctuations with the Fourier modes of the geometry. Based on this analysis, in Sec. IV.3 we construct a good geometrical predictor for the orientation angle and magnitudes of the leading and subleading flows based on two Fourier modes. For comparison, we also correlate the subleading flow with a linear combination of the complex $\varepsilon_{3,3}$ and $\varepsilon_{3,5}$,

$$\varepsilon_{3,3}\equiv-\frac{[r^3 e^{i3\phi}]}{R_{\rm rms}^3}, \qquad (3a)$$

$$\varepsilon_{3,5}\equiv-\frac{[r^5 e^{i3\phi}]}{R_{\rm rms}^5}, \qquad (3b)$$

where the square brackets denote an average over the initial entropy density in a specific event, and $R_{\rm rms}$ is the event averaged root-mean-square radius. Note that our definitions of $\varepsilon_{3,3}$ and $\varepsilon_{3,5}$ are chosen to make the event-by-event quantities linear in the fluctuations, since the denominator is a constant event-averaged quantity. In this respect this definition is different from the conventional one, which is a nonlinear function of the initial perturbations (we compared analogous results with $\varepsilon_{3,3}$ and $\varepsilon_{3,5}$ defined via cumulants Teaney and Yan (2011), and found them marginally worse than the ones presented in this paper; see Sec. II.2 for further explanation). We find that the subleading mode is also reasonably correlated with a linear combination of these two quantities, but the quality of this predictor is considerably worse than a predictor based on two specific Fourier modes. The geometric predictors described above are ultimately based on the assumption of linear response. At least for the third harmonic (the scope of this paper), these assumptions are checked in Sec. V.
In this section we explicitly compare the response to the average (“single-shot” hydrodynamics Qiu and Heinz (2011)) and the average response (event-by-event hydrodynamics). We find reasonable agreement between these two computational strategies for both the leading and subleading triangular modes.

## II PCA of Triangular Flow in Event-by-Event Hydrodynamics

### II.1 Principal components

PCA was recently introduced in Ref. Bhalerao et al. (2015) (which includes one of the authors) to quantify the dominant momentum space fluctuations of harmonic flows in transverse momentum and rapidity in a precise way. This section provides a brief review of this statistical technique. Paraphrasing Ref. Bhalerao et al. (2015), in the flow picture of heavy ion collisions the particles in each event are drawn independently from a single particle distribution which fluctuates from event to event. The event-by-event single particle distribution is expanded in a Fourier series

$$\frac{dN}{d\mathbf{p}}=V_0(p_T)+\sum_{n=1}^{\infty}V_n(p_T)\,e^{-in\varphi}+\mathrm{H.c.}, \qquad (4)$$

where $\mathbf{p}$ notates the phase space, $\varphi$ is the azimuthal angle of the distribution, and H.c. denotes the Hermitian conjugate. $V_n(p_T)$ is a complex Fourier coefficient recording the magnitude and orientation of the $n$th harmonic flow. This definition deviates from the common practice of normalizing the complex Fourier coefficient by the multiplicity, $v_n(p_T)=V_n(p_T)/V_0(p_T)$. Up to non-flow corrections of order the inverse multiplicity, $1/N$, the long-range part of the two-particle correlation function is determined by the statistics of the event-by-event fluctuations of the single particle distribution,

$$\left\langle\frac{dN^{\mathrm{pairs}}}{d\mathbf{p}_1\,d\mathbf{p}_2}\right\rangle=\left\langle\frac{dN}{d\mathbf{p}_1}\frac{dN}{d\mathbf{p}_2}\right\rangle. \qquad (5)$$

If the two-particle correlation function is also expanded in a Fourier series

$$\left\langle\frac{dN^{\mathrm{pairs}}}{d\mathbf{p}_1\,d\mathbf{p}_2}\right\rangle=\sum_{n}V_{n\Delta}(p_{T1},p_{T2})\,e^{-in(\varphi_1-\varphi_2)}, \qquad (6)$$

then this series determines the statistics of the harmonic coefficients,

$$V_{n\Delta}(p_{T1},p_{T2})=\left\langle V_n(p_{T1})\,V_n^*(p_{T2})\right\rangle. \qquad (7)$$

The covariance matrix $V_{n\Delta}(p_{T1},p_{T2})$, which is real, symmetric, and positive-semidefinite, can be decomposed into real eigenvectors,

$$V_{n\Delta}(p_{T1},p_{T2})=\sum_a\lambda_a\,\psi^{(a)}(p_{T1})\,\psi^{(a)}(p_{T2}) \qquad (8)$$

$$\phantom{V_{n\Delta}(p_{T1},p_{T2})}=\sum_a V_n^{(a)}(p_{T1})\,V_n^{(a)}(p_{T2}), \qquad (9)$$

where $V_n^{(a)}(p_T)\equiv\sqrt{\lambda_a}\,\psi^{(a)}(p_T)$ and $\lambda_1\geq\lambda_2\geq\cdots\geq 0$. As discussed above we have not normalized by the multiplicity.
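The eigendecomposition in Eqs. (8)–(9) is straightforward to carry out numerically. The sketch below is not the authors' code: the two-mode toy ensemble and its $p_T$ profiles are invented for illustration. It builds the covariance matrix from synthetic event-by-event flow vectors and extracts the principal components $V_3^{(a)}=\sqrt{\lambda_a}\,\psi^{(a)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_bins = 5000, 20
pT = np.linspace(0.1, 2.0, n_bins)

# Toy events: two uncorrelated complex amplitudes times fixed pT profiles
profile1 = pT * np.exp(-pT)             # leading-mode shape (assumed)
profile2 = (pT - 1.0) * np.exp(-pT)     # subleading shape with a sign change
xi = (rng.normal(size=(n_events, 2)) + 1j * rng.normal(size=(n_events, 2))) / np.sqrt(2)
V3 = xi[:, [0]] * profile1 + 0.3 * xi[:, [1]] * profile2    # shape (n_events, n_bins)

# Covariance matrix V_{3 Delta}(pT1, pT2) = <V_3(pT1) V_3*(pT2)>, cf. Eq. (7)
V3Delta = (V3.T @ V3.conj()) / n_events

# Principal components, Eqs. (8)-(9): V^{(a)}(pT) = sqrt(lambda_a) psi^{(a)}(pT)
lam, psi = np.linalg.eigh(V3Delta)      # eigenvalues in ascending order
lam, psi = lam[::-1], psi[:, ::-1]      # reorder so lambda_1 >= lambda_2 >= ...
V_a = np.sqrt(np.clip(lam, 0.0, None)) * psi   # columns are principal flow modes

# With only two generating modes, two eigenvalues carry all the squared variance
explained = lam[:2].sum() / lam.sum()
```

Here `explained` comes out essentially equal to one, mirroring the paper's observation that two modes capture 99.9% of the squared variance in the hydrodynamic events.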
To make contact with previous work, we define and present numerical results for ∥v(a)n∥2≡∫(V(a)n(pT))2dpT∫⟨dN/dpT⟩2dpT=λa∫⟨dN/dpT⟩2dpT, (10) which scales with multiplicity and in the same way as an integrated measurement. Typically in event-by-event hydrodynamics (as shown below) the eigenvalues are strongly ordered, and two eigenvectors describe the variance in the harmonic flow to 0.1% accuracy. Thus, PCA provides a remarkably economical description of the momentum dependence of flow fluctuations. The harmonic flow in each event can be decomposed into its principal directions, V3(pT)=ξ1V(1)3(pT)+ξ2V(2)3(pT)+…. (11) The real vectors (which do not fluctuate from event to event) record the root-mean-square amplitude of the leading and subleading flows. The complex coefficients indicate the orientation and event-by-event amplitude of their respective flows. The amplitudes of the different components are uncorrelated by construction ⟨ξaξ∗b⟩=δab. (12) The original impetus for this work was a desire to understand which aspects of the geometry are responsible for the orientation angle of the second principal component. ### ii.2 Simulations In this paper we use boost-invariant event-by-event hydrodynamics to study the principal components of for LHC initial conditions. The implementation details of the hydrodynamics code will be reported elsewhere, and here we note only the most important features. Our simulations are boost invariant and implement second order viscous hydrodynamics Baier et al. (2008), using a code base which has been developed previously Dusling and Teaney (2008); Teaney and Yan (2012). For the initial conditions we use the Phobos Glauber Monte Carlo Alver et al. (2008), and we distribute the entropy density in the transverse plane according to a two-component model. Specifically, for the th participant we assign a weight Ai≡κ[(1−α)2+α2(ncoll)i], (13) with , for , and for . 
is the number of binary collisions experienced by the th participant; so the total number of binary collisions is . The entropy density in the transverse plane at initial time and transverse position is taken to be s(τo,x)=∑i∈Npartssi(τo,x−xi), (14) where labels the transverse coordinates of the participant, and si(τo,x)=Ai1τo(2πσ2)e−x22σ2−y22σ2, (15) with . The parameters and are marginally different from Qiu’s thesis Qiu (2013), and we have independently verified that this choice of parameters reproduces the average multiplicity in the event.333 More precisely we have verified that for these parameters hydrodynamics with averaged initial conditions reproduces as a function of centrality after all resonance decays are included. Assuming that the ratio of the charged particle yield to the direct pion yield is the same as in the averaged simulations, the current event-by-event simulations reproduces . The equation of state is motivated by lattice QCD calculations Laine and Schroder (2006) and has been used previously by Romatschke and Luzum Luzum and Romatschke (2008). In this paper we compute “direct” pions (i.e. pions calculated directly from the freeze-out surface) and we do not include resonance decays. We use a freeze-out temperature of . Simulation results were generated for fourteen 5% centrality classes with impact parameter up to and at two viscosities, and . Unless specified, the results are for . We generated 5000 events per centrality class.444 We thank Soumya Mohapatra for collaboration during the initial stages of this project. We then performed PCA for the third harmonic by discretizing results from hydrodynamics into 100 equally spaced bins between , and finding the eigenvalues and eigenvectors of the resulting Hermitian matrix. Similar results for the other harmonics will be discussed elsewhere. Table 1 records the Glauber data which is used in this analysis. 
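As a concrete reading of Eqs. (13)–(15), the sketch below deposits a Gaussian of entropy for each participant with the two-component weight $A_i$. The participant positions and the values of $\kappa$, $\alpha$, $\sigma$, and $\tau_o$ used here are toy inputs for illustration (not the paper's fitted values), and the Glauber Monte Carlo sampling itself is not shown:

```python
import numpy as np

def participant_weights(n_coll, kappa, alpha):
    """Two-component weight A_i = kappa * [(1 - alpha)/2 + (alpha/2) * (n_coll)_i]."""
    return kappa * ((1.0 - alpha) / 2.0 + (alpha / 2.0) * np.asarray(n_coll, dtype=float))

def entropy_density(X, Y, positions, A, tau0, sigma):
    """s(tau0, x) = sum_i A_i / (tau0 * 2 pi sigma^2) * exp(-|x - x_i|^2 / (2 sigma^2))."""
    s = np.zeros_like(X, dtype=float)
    for (xi, yi), Ai in zip(positions, A):
        s += Ai / (tau0 * 2.0 * np.pi * sigma**2) * np.exp(
            -((X - xi) ** 2 + (Y - yi) ** 2) / (2.0 * sigma**2))
    return s

# Toy check: the total entropy tau0 * integral of s over the plane equals sum_i A_i
positions = [(0.0, 0.0), (1.0, -0.5), (-0.8, 0.3)]
A = participant_weights([2, 5, 3], kappa=1.0, alpha=0.14)
xs = np.linspace(-12.0, 12.0, 241)
X, Y = np.meshgrid(xs, xs)
s = entropy_density(X, Y, positions, A, tau0=1.0, sigma=0.6)
total = s.sum() * (xs[1] - xs[0]) ** 2   # times tau0 = 1
```

Because each participant contributes a normalized Gaussian divided by $\tau_o$, the grid sum `total` reproduces $\sum_i A_i$ to quadrature accuracy.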
Event-by-event averages with the initial entropy density are notated with square brackets, e.g. [r2]≡1¯¯¯¯Stot∫d2xτos(τo,x)r2, (16) where is the average total entropy in a given centrality class, . Averages over events are notated with , so that the root mean square radius is Rrms≡√⟨[r2]⟩. (17) As a technical note, here and below the radius is measured from the center of entropy, so . and are defined in a somewhat unorthodox fashion in Eq. (3), with . is the averaged maximum participant radius, . As a first step, we list the (scaled) magnitudes of flows [Eq. (10)] in central collisions for the simulations described above: 1 2 3 4 Note that the quantities in this table are proportional to the square-root of the eigenvalues, . From the decreasing magnitudes of the listed (scaled) magnitudes, we see that the first two eigenmodes account for 99.9% of the squared variance, which can be represented as a sum of the eigenvalues ∫∞0dpT⟨V3(pT)V∗3(pT)⟩= ∑aλa∝∑a∥v(a)3∥2. (18) Figure 1(a) displays the eigenvectors, , for the leading and first two subleading modes. We see that only the first two flow modes are significant, and in the rest of this paper we consider only these two. To make contact with the more traditional definitions of , we divide by and present the same eigenmodes in Fig. 1(b). We also investigated the centrality and viscosity dependence of the principal components. The normalized principal flow eigenvectors are approximately independent of viscosity (not shown). In Fig. 2, we show the centrality dependence of these normalized eigenvectors. In more central collisions the eigenvectors shift to larger transverse momentum, which can be understood with the system size scaling introduced in Ref. Başar and Teaney (2014). The magnitude of the flow, i.e. the squared integral , depends on both centrality and viscosity. To factor out the trivial multiplicity dependence of , we plot the scaled flow eigenvalues [see Eq. (10)] in Fig. 3. 
Going from to we see significant suppression of the leading mode. In general the subleading scaled flow depends weakly on centrality. ## Iv Geometric Predictors for Subleading Flow ### iv.1 Average geometry in the subleading plane In this section, we clarify the physical origin of the subleading flow by correlating the subleading hydrodynamic response with the geometry. As a first step, we determined the average initial geometry in the principal component plane. Specifically, for each event the phase of the principal component [see Eq. (11)] defines orientation of the flow. We then rotate each event into plane and average the initial entropy density, . More precisely, the event-by-event geometry in the principal component plane is defined to be S(r,ϕ;ξa)≡132∑ℓ=0S(r,ϕ+(argξa+2πℓ)/3), (19) where we have averaged over the phases of . Next, we average over all events weighted by the magnitude of the flow ¯¯¯¯S(r,ϕ;ξa)≡⟨S(r,ϕ;ξa)|ξa|⟩. (20) Figure 4 shows the in-plane averaged geometry for the leading and subleading principal components in central collisions. Clearly, the leading principal component is strongly correlated with the triangular components of the initial geometry, while the subleading component is correlated with the radial excitations of this geometry. To give a one-dimensional projection of Fig. 4, we integrate Eq. (20) over the azimuthal angle to define ¯¯¯¯S3(r;ξa)≡∫2π0dϕ¯¯¯¯S(r,ϕ;ξa)ei3ϕ. (21) This is equivalent to defining , S3(r)≡∫2π0dϕS(r,ϕ)ei3ϕ, (22) and correlating this with the flow fluctuation ¯¯¯¯S3(r;ξa)=⟨S3(r)ξ∗a⟩. (23) Results for are shown by the blue (gray) curves in Fig. 5. Again we see that the leading flow originates from a geometric fluctuation with a large integrated eccentricity, while the subleading flow is sensitive to the radial excitation of the triangularity. Note that the relatively small subleading flow corresponds to a fairly significant fluctuation of the initial geometry. 
### iv.2 The average geometry in Fourier space It is evident from Fig. 5 that the leading and subleading geometries have different characteristic wave numbers. This becomes apparent when we correlate the flow signal with the Fourier (or Hankel) transform of the triangular geometry S3(k)≡ ∫∞0rdrS3(r)J3(kr). (24) Here has the meaning of the th harmonic of the 2D Fourier transform of the initial geometry , i.e. . We recall that the is determined by the long wavelength limit of Teaney and Yan (2011) limk→0S3(k)=−¯¯¯¯Stot(kRrms/2)33!ε3,3, (25) where is the root mean square radius and is the total entropy in a given centrality bin. The constant factors are determined by the expansion of near . Motivated by this limit we define a generalized eccentricity ε3(k)≡−1¯¯¯¯Stot∫∞0rdrS3(r)[3!(kRrms/2)3J3(kr)], (26) which approaches as . Clearly in a Glauber model there is an analogous definition ε3(k)=−1¯¯¯¯¯NpartNpart∑i=1ei3ϕi[3!(kRrms/2)3J3(kri)], (27) where the coordinates of the th participant are . The Pearson correlation coefficient between the flow and a specific wave number is Qa(k)≡⟨ξaε∗3(k)⟩√⟨|ε3(k)|2⟩⟨|ξa|2⟩. (28) Examining in Fig. 6, we see that leading component is produced by low- fluctuations, while subleading flow originates from fluctuations at larger . ### iv.3 Optimal geometric predictors for the subleading flow In this section, our aim is to predict the magnitude and orientation of the leading and subleading flows. To this end we regress each principal component of the flow with various Fourier components of the initial geometry. Following Ref. Gardim et al. (2012), we construct a prediction for the flow amplitude by taking a linear combination of : ξpred=nk∑i=1ωbε3(kb). (29) The selected wave numbers are discussed in the next paragraph. 
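Equation (27) can be evaluated directly for a set of participant coordinates. The sketch below uses toy Gaussian-distributed participants and a numpy-only quadrature for $J_3$ (both assumptions for illustration), and checks the small-$k$ limit of Eq. (26), where $\varepsilon_3(k)$ reduces to the familiar $\varepsilon_{3,3}$:

```python
import numpy as np

def bessel_j3(x):
    """J_3(x) from the integral representation (1/pi) int_0^pi cos(3 t - x sin t) dt."""
    x = np.asarray(x, dtype=float)
    t = (np.arange(4000) + 0.5) * np.pi / 4000          # midpoint grid on [0, pi]
    return np.cos(3.0 * t - np.multiply.outer(x, np.sin(t))).mean(axis=-1)

def eps3_k(x, y, k, R_rms):
    """eps_3(k) = -(1/N) sum_i e^{3 i phi_i} * 3! / (k R_rms / 2)^3 * J_3(k r_i), Eq. (27)."""
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    return -np.mean(np.exp(3j * phi) * 6.0 / (k * R_rms / 2.0) ** 3 * bessel_j3(k * r))

rng = np.random.default_rng(1)
x, y = rng.normal(size=200), rng.normal(size=200)
x, y = x - x.mean(), y - y.mean()                       # measure r from the center
R_rms = np.sqrt(np.mean(x**2 + y**2))

# k -> 0 limit: eps_3(k) -> eps_{3,3} = -<r^3 e^{3 i phi}> / R_rms^3, cf. Eq. (25)
eps33 = -np.mean(np.hypot(x, y) ** 3 * np.exp(3j * np.arctan2(y, x))) / R_rms**3
```

Since $J_3(z)\to(z/2)^3/3!$ for small $z$, evaluating `eps3_k` at a small wave number recovers `eps33`, which is the stated long-wavelength limit.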
The response coefficients are chosen to minimize the square error , or equivalently to maximize the Pearson correlation coefficient between the flow and the prediction minE2a=⟨|ξa−ξ% preda|2⟩, (30) maxQa=⟨ξaξ∗a% pred⟩√⟨ξaξ∗a⟩⟨ξa% predξ∗apred⟩. (31) The correlation coefficient is referred to as the quality coefficient in Ref. Gardim et al. (2012). We construct two predictors based on two and five wave numbers. For the five-term predictor we choose equidistant points which span the range seen in Fig. 6 kbRrms=1,3,5,7,9, (32) and fit the response coefficients . The two term predictor was motivated by the discrete Fourier-Bessel series advocated for in Ref. Floerchinger and Wiedemann (2014), kbRo=j3,1,j3,2Ro≃3Rrms, (33) where are the zeros of , and we select to optimize the correlation between the geometrical predictor and the flow. For comparison, we also constructed a two term linear predictor from the familiar eccentricities and defined in Eqs. (3a) and (3b). The two-wave-number fit correlates the flow with a specific projection of the triangular geometry, i.e. ξpreda∝∫∞0rdrS3(r)ρ(r), (34) where is a radial weight chosen to maximize the correlation between the flow and the projection. This is analogous to using to predict triangular flow, where the radial weight is ε3,3∝−∫∞0rdrS3(r)r3. (35) We have used Fourier modes as a basis for , ρ(r)∝ω1J3(k1r)(k1Rrms)3+ω2J3(k2r)(k2Rrms)3, (36) but other functions could have been used.555A table of is given as a function of centrality in the appendix. In Fig. 8 we compare the radial weights for the leading and subleading modes. The overall normalization of weight function is adjusted so that ⟨∣∣∣∫∞0rdrρ(r)S3(r)∣∣∣2⟩=S2tot. (37) The weight function for the leading projector is very close to cubic weight, but the subleading radial weight has a node at . Within the framework of linear response, in Sec. 
IV.1 we found the optimal geometry for predicting the leading and subleading flows by correlating the observed flow with the geometry, . To test if the two and five wave number predictors reproduce this optimal geometry, we formed the analogous correlator between the predicted flow and , . Examining Fig. 9, we see that the two term predictor fully captures the optimal average geometry. For peripheral collisions the optimal geometry differs from what we can construct using linear combinations of Fourier modes, suggesting that additional nonlinear physics Qiu and Heinz (2011); Gardim et al. (2012); Teaney and Yan (2012) plays a role in determining the subleading flow. Figure 7(b) also shows the correlation (or lack thereof) between the subleading flow and the integrated Q≡⟨ξ2v∗3⟩√⟨v3v∗3⟩⟨ξ2ξ∗2⟩. (38) Since the subleading mode is uncorrelated with the leading mode (by construction), there is almost no correlation between the integrated and the subleading mode. The upshot is that measurements of based on the event plane or scalar product method are projecting out the important physics of the subleading mode. ## V Testing Linear Response The success of the linear flow predictors discussed in previous section depends on the applicability of linear response. A straightforward way to check this assumption is to compare the averaged response of event-by-event hydrodynamics to the hydrodynamic response to suitably averaged initial conditions. In Sec. IV.1 we computed the average geometry in the event planes of the leading and subleading flows (see Fig. 4). It is straightforward to simulate this smooth initial condition and to compute the associated . This is known as “single-shot” hydrodynamics in the literature Qiu and Heinz (2011). In Fig. 10 we compare from the leading and subleading average geometries to the principal components and of event-by-event hydro. 
The qualitative features of both principal components are reproduced well by single-shot hydrodynamics, especially for the leading flow. It is particularly notable how the single-shot evolution reproduces the change of sign in . However, in an important range, , the single-shot evolution misses the event-by-event curve for subleading flow by . It is useful to examine the time development of the subleading flow in the single-shot hydrodynamics. In Fig. 11, we present three snapshots of the subleading flow evolution. The color contours show the radial momentum density per rapidity, τTτr=τ(e+p)uτur, (39) as a function of proper time . Shortly after the formation of the fireball, at we observe negative triangular flow in Fig. 11(a). This flow is produced by the excess of material at small radii flowing into the “valleys” at larger radii [see Fig. 4(b)]. However, the radial flow has not developed yet, and therefore this phase of the evolution creates negative flow at small transverse momentum. After this stage, we see typical flow evolution of a triangular perturbation, i.e. the negative geometric eccentricity at small radii is transformed into positive triangular flow at large transverse momentum [see Figs. 11(b) and (c)]. The inner eccentricity dominates over the outer eccentricity at high because the radial flow has more time to develop before freeze-out, and because there is more material at small radii. ## Vi Discussion This paper illustrates how principal component analysis can be used to understand the physics encoded in the two particle correlation matrix of hydrodynamics. PCA is an economical way to summarize the factorization breaking in these correlations. More precisely, we found that the matrix of correlation coefficients in hydrodynamics, Eq. (1), is completely described by two principal components, and as written in Eq. (2). 
Importantly, these components have a simple physical interpretation—they are the hydrodynamic response to two statistically independent initial conditions in the fluctuating geometry. The leading principle component is the hydrodynamic response to the participant triangularity, while the subleading flow (which is uncorrelated with the leading flow) is the hydrodynamic response to the first radial excitation of the triangularity. This conclusion was reached by averaging the event-by-event geometry in the event plane of the subleading flow (Fig. 4). The magnitude of this radial excitation is on par with the magnitude of the triangularity (Fig. 5), although the hydro response is smaller in magnitude. Since the subleading component is uncorrelated with the integrated , it is projected out in analyses of triangular flow based on the scalar product or event plane methods. We first studied the basic properties of the subleading flow such as its dependence on transverse momentum (Fig. 1), and centrality and shear viscosity (Fig. 3). The flow response is approximately linear to the geometrical deformation. This was checked by simulating the response to the average in-plane geometry with “single-shot” hydrodynamics (Fig. 11), and comparing this result to event-by-event hydrodynamics; i.e., we compared the response to the average with the averaged response (Fig. 10). Motivated by the linearity of the response, we constructed a geometrical predictor for the subleading flow analogous to . We first defined as the Fourier mode of the event-by-event triangular geometry up to normalization.666 The normalization is chosen so that . Then we constructed a linear geometrical predictor for the leading and subleading flow angles and magnitudes based on two Fourier modes (Figs. 7 and 12). 
Indeed, the subleading flow response is proportional to an event-by-event quantity which captures the radial excitation of the triangular geometry, ∫d2xs(τo,x)ei3ϕρ(r), (40) where is the initial entropy distribution, and is an appropriate excited radial weight function. The two term Fourier fit to is tabulated in the appendix and graphed in Fig. 8. The subleading flow probes the initial state geometry at higher wave numbers than the leading flow (Fig. 6). We found that the correlation between the flow and the Fourier components of the geometry is maximized for wave numbers away from zero, . Thus, the subleading flow provides a new test of viscous hydrodynamics and initial-state models. In peripheral collisions the correlation between the linear geometrical predictor and the flow is smaller. This suggests that nonlinear dynamics at large impact parameters couples the average elliptic geometry to the harmonic perturbations Qiu and Heinz (2011); Gardim et al. (2012); Teaney and Yan (2012). The statistical tools such as PCA and related methods developed in this work can be used to clarify this complex hydrodynamic response. Acknowledgments: We thank J. Y. Ollitrault, E. Shuryak, and J. Jia for continued interest. We especially thank S. Mohapatra for simulating hydro events. This work was supported by the Department of Energy, DE-FG-02-08ER41450. ## Appendix A Two term predictor Here we present the best fit results for the two wave number predictor, see eqs. (34), (36) and (37), kbRo=j3,1,j3,2Ro≃3Rrms. (41) Table 2 records the ratios of fit coefficients for the leading and subleading predictors. In Fig. 12 we show the correlations between the flow and its predictor for both the angles and magnitudes. The subleading flow direction correlates well with the predictor, and there is reasonable correlation for the magnitude as well.
https://unapologetic.wordpress.com/?s=basis
# The Unapologetic Mathematician ## The Character Table as Change of Basis Now that we’ve seen that the character table is square, we know that irreducible characters form an orthonormal basis of the space of class functions. And we also know another orthonormal basis of this space, indexed by the conjugacy classes $K\subseteq G$: $\displaystyle\left\{\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}f_K\right\}$ A line in the character table corresponds to an irreducible character $\chi^{(i)}$, and its entries $\chi_K^{(i)}$ tell us how to write it in terms of the basis $\{f_K\}$: $\displaystyle\chi^{(i)}=\sum\limits_K\chi_K^{(i)}f_K$ That is, it’s a change of basis matrix from one to the other. In fact, we can modify it slightly to exploit the orthonormality as well. When dealing with lines in the character table, we found that we can write our inner product as $\displaystyle\langle\chi,\psi\rangle=\sum\limits_K\frac{\lvert K\rvert}{\lvert G\rvert}\overline{\chi_K}\psi_K$ So let’s modify the table to replace the entry $\chi_K^{(i)}$ with $\sqrt{\lvert K\rvert/\lvert G\rvert}\chi_K^{(i)}$. Then we have $\displaystyle\sum\limits_K\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(j)}\right)=\langle\chi^{(i)},\chi^{(j)}\rangle=\delta_{i,j}$ where we make use of our orthonormality relations. That is, if we use the regular dot product on the rows of the modified character table (considered as tuples of complex numbers) we find that they’re orthonormal. But this means that the modified table is a unitary matrix, and thus its columns are orthonormal as well. We conclude that $\displaystyle\sum\limits_i\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_L^{(i)}\right)=\delta_{K,L}$ where now the sum is over a set indexing the irreducible characters. 
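This unitarity is easy to verify numerically. Here is a quick sanity check in Python using the well-known character table of $S_3$ (rows: trivial, sign, and the two-dimensional irreducible character):

```python
from math import sqrt

# Character table of S_3: rows are the trivial, sign, and two-dimensional
# irreducible characters; columns are the classes e, (1 2), (1 2 3),
# of sizes 1, 3, 2, with |G| = 6.
chars = [[1, 1, 1],
         [1, -1, 1],
         [2, 0, -1]]
sizes = [1, 3, 2]
G = 6

# Modified table: scale column K by sqrt(|K|/|G|)
U = [[sqrt(k / G) * x for k, x in zip(sizes, row)] for row in chars]

# Rows of U are orthonormal (the original orthogonality relations)...
for i in range(3):
    for j in range(3):
        dot = sum(U[i][c] * U[j][c] for c in range(3))
        assert abs(dot - (i == j)) < 1e-12

# ...so U is unitary (here real orthogonal), and its columns are
# orthonormal too, which is exactly the column orthogonality relation.
for c in range(3):
    for d in range(3):
        dot = sum(U[i][c] * U[i][d] for i in range(3))
        assert abs(dot - (c == d)) < 1e-12
```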
We rewrite these relations as $\displaystyle\sum\limits_i\overline{\chi_K^{(i)}}\chi_L^{(i)}=\frac{\lvert G\rvert}{\lvert K\rvert}\delta_{K,L}$ We can use these relations to help fill out character tables. For instance, let’s consider the character table of $S_3$, starting from the first two rows: $\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\mathrm{sgn}&1&-1&1\\\chi^{(3)}&a&b&c\end{array}$ where we know that the third row must exist for the character table to be square. Now our new orthogonality relations tell us on the first column that $\displaystyle1^2+1^2+a^2=6$ Since $a=\chi^{(3)}(e)$, it is a dimension, and must be positive. That is, $a=2$. On the second column we see that $\displaystyle1^2+1^2+b^2=\frac{6}{3}=2$ and so we must have $b=0$. Finally on the third column we see that $\displaystyle1^2+1^2+c^2=\frac{6}{2}=3$ so $c=\pm1$. To tell the difference, we can use the new orthogonality relations on the first and third or second and third columns, or the old ones on the first and third or second and third rows. Any of them will tell us that $c=-1$, and we’ve completed the character table without worrying about constructing any representations at all. We should take note here that the conjugacy classes index one orthonormal basis of the space of class functions, and the irreducible representations index another. Since all bases of any given vector space have the same cardinality, the set of conjugacy classes and the set of irreducible representations have the same number of elements. However, there is no reason to believe that there is any particular correspondence between the elements of the two sets. And in general there isn’t any, but we will see that in the case of symmetric groups there is a way of making just such a correspondence. November 22, 2010 ## More New Modules from Old There are a few constructions we can make, starting with the ones from last time and applying them in certain special cases. 
First off, if $V$ and $W$ are two finite-dimensional $L$-modules, then I say we can put an $L$-module structure on the space $\hom(V,W)$ of linear maps from $V$ to $W$. Indeed, we can identify $\hom(V,W)$ with $V^*\otimes W$: if $\{e_i\}$ is a basis for $V$ and $\{f_j\}$ is a basis for $W$, then we can set up the dual basis $\{\epsilon^i\}$ of $V^*$, such that $\epsilon^i(e_j)=\delta^i_j$. Then the elements $\{\epsilon^i\otimes f_j\}$ form a basis for $V^*\otimes W$, and each one can be identified with the linear map sending $e_i$ to $f_j$ and all the other basis elements of $V$ to $0$. Thus we have an inclusion $V^*\otimes W\to\hom(V,W)$, and a simple dimension-counting argument suffices to show that this is an isomorphism. Now, since we have an action of $L$ on $V$ we get a dual action on $V^*$. And because we have actions on $V^*$ and $W$ we get one on $V^*\otimes W\cong\hom(V,W)$. What does this look like, explicitly? Well, we can write any such tensor as the sum of tensors of the form $\lambda\otimes w$ for some $\lambda\in V^*$ and $w\in W$. We calculate the action of $x\cdot(\lambda\otimes w)$ on a vector $v\in V$: \displaystyle\begin{aligned}\left[x\cdot(\lambda\otimes w)\right](v)&=\left[(x\cdot\lambda)\otimes w\right](v)+\left[\lambda\otimes(x\cdot w)\right](v)\\&=\left[x\cdot\lambda\right](v)w+\lambda(v)(x\cdot w)\\&=-\lambda(x\cdot v)w+x\cdot(\lambda(v)w)\\&=-\left[\lambda\otimes w\right](x\cdot v)+x\cdot\left[\lambda\otimes w\right](v)\end{aligned} In general we see that $\left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v)$. In particular, the space of linear endomorphisms on $V$ is $\hom(V,V)$, and so it gets an $L$-module structure like this. The other case of interest is the space of bilinear forms on a module $V$. A bilinear form on $V$ is, of course, a linear functional on $V\otimes V$. And thus this space can be identified with $(V\otimes V)^*$. How does $x\in L$ act on a bilinear form $B$? 
Well, we can calculate: \displaystyle\begin{aligned}\left[x\cdot B\right](v_1,v_2)&=\left[x\cdot B\right](v_1\otimes v_2)\\&=-B\left(x\cdot(v_1\otimes v_2)\right)\\&=-B\left((x\cdot v_1)\otimes v_2\right)-B\left(v_1\otimes(x\cdot v_2)\right)\\&=-B(x\cdot v_1,v_2)-B(v_1,x\cdot v_2)\end{aligned} In particular, we can consider the case of bilinear forms on $L$ itself, where $L$ acts on itself by $\mathrm{ad}$. Here we read $\displaystyle\left[x\cdot B\right](v_1,v_2)=-B([x,v_1],v_2)-B(v_1,[x,v_2])$ September 21, 2012 ## Irreducible Modules Sorry for the delay; it’s getting crowded around here again. Anyway, an irreducible module for a Lie algebra $L$ is a pretty straightforward concept: it’s a module $M$ such that its only submodules are $0$ and $M$. As usual, Schur’s lemma tells us that any morphism between two irreducible modules is either $0$ or an isomorphism. And, as we’ve seen in other examples involving linear transformations, all automorphisms of an irreducible module are scalars times the identity transformation. This, of course, doesn’t depend on any choice of basis. A one-dimensional module will always be irreducible, if it exists. And a unique — up to isomorphism, of course — one-dimensional module will always exist for simple Lie algebras. Indeed, if $L$ is simple then we know that $[L,L]=L$. Any one-dimensional representation $\phi:L\to\mathfrak{gl}(1,\mathbb{F})$ must have its image in $[\mathfrak{gl}(1,\mathbb{F}),\mathfrak{gl}(1,\mathbb{F})]=\mathfrak{sl}(1,\mathbb{F})$. But the only traceless $1\times1$ matrix is the zero matrix. Setting $\phi(x)=0$ for all $x\in L$ does indeed give a valid representation of $L$. September 15, 2012 ## Back to the Example Let’s go back to our explicit example of $L=\mathfrak{sl}(2,\mathbb{F})$ and look at its Killing form. 
We first recall our usual basis: \displaystyle\begin{aligned}x&=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\\y&=\begin{pmatrix}0&0\\1&0\end{pmatrix}\\h&=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\end{aligned} which lets us write out matrices for the adjoint action: \displaystyle\begin{aligned}\mathrm{ad}(x)&=\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\\\mathrm{ad}(y)&=\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\\\mathrm{ad}(h)&=\begin{pmatrix}2&0&0\\ 0&-2&0\\ 0&0&0\end{pmatrix}\end{aligned} and from here it’s easy to calculate the Killing form. For example: \displaystyle\begin{aligned}\kappa(x,y)&=\mathrm{Tr}\left(\mathrm{ad}(x)\mathrm{ad}(y)\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}2&0&0\\ 0&0&0\\ 0&0&2\end{pmatrix}\right)\\&=4\end{aligned} We can similarly calculate all the other values of the Killing form on basis elements. \displaystyle\begin{aligned}\kappa(x,x)&=0\\\kappa(x,y)=\kappa(y,x)&=4\\\kappa(x,h)=\kappa(h,x)&=0\\\kappa(y,y)&=0\\\kappa(y,h)=\kappa(h,y)&=0\\\kappa(h,h)&=8\end{aligned} So we can write down the matrix of $\kappa$: $\displaystyle\begin{pmatrix}0&4&0\\4&0&0\\ 0&0&8\end{pmatrix}$ And we can test this for degeneracy by taking its determinant to find $-128$. Since this is nonzero, we conclude that $\kappa$ is nondegenerate, which we know means that $\mathfrak{sl}(2,\mathbb{F})$ is semisimple — at least in fields where $1+1\neq0$. ## The Killing Form We can now define a symmetric bilinear form $\kappa$ on our Lie algebra $L$ by the formula $\displaystyle\kappa(x,y)=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}(y))$ It’s symmetric because the cyclic property of the trace lets us swap $\mathrm{ad}(x)$ and $\mathrm{ad}(y)$ and get the same value. 
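The $\mathfrak{sl}(2)$ computation above is easy to reproduce numerically. The following Python sketch builds the adjoint matrices directly from the $2\times2$ matrices and recovers the matrix of $\kappa$ and its determinant:

```python
import numpy as np

# The standard basis x, y, h of sl(2), as 2x2 matrices
x = np.array([[0, 1], [0, 0]])
y = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [x, y, h]

def bracket(a, b):
    return a @ b - b @ a

def coords(m):
    # coordinates of the traceless matrix m = a*x + b*y + c*h
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(a):
    # matrix of ad(a) in the ordered basis (x, y, h)
    return np.column_stack([coords(bracket(a, b)) for b in basis])

# Killing form kappa(a, b) = Tr(ad(a) ad(b)) evaluated on the basis
kappa = np.array([[np.trace(ad(a) @ ad(b)) for b in basis] for a in basis])

print(kappa)                  # [[0 4 0], [4 0 0], [0 0 8]]
print(np.linalg.det(kappa))   # -128 up to rounding: nondegenerate
```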
It also satisfies another identity which is referred to as “associativity”, though it may not appear like the familiar version of that property at first: \displaystyle\begin{aligned}\kappa([x,y],z)&=\mathrm{Tr}(\mathrm{ad}([x,y])\mathrm{ad}(z))\\&=\mathrm{Tr}([\mathrm{ad}(x),\mathrm{ad}(y)]\mathrm{ad}(z))\\&=\mathrm{Tr}(\mathrm{ad}(x)[\mathrm{ad}(y),\mathrm{ad}(z)])\\&=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}([y,z]))\\&=\kappa(x,[y,z])\end{aligned} Where we have used the trace identity from last time. This is called the Killing form, named for Wilhelm Killing and not nearly so coincidentally as the Poynting vector. It will be very useful to study the structures of Lie algebras. First, though, we want to show that the definition is well-behaved. Specifically, if $I\subseteq L$ is an ideal, then we can define $\kappa_I$ to be the Killing form of $I$. It turns out that $\kappa_I$ is just the same as $\kappa$, but restricted to take its arguments in $I$ instead of all of $L$. A lemma: if $W\subseteq V$ is any subspace of a vector space and $\phi:V\to V$ has its image contained in $W$, then the trace of $\phi$ over $V$ is the same as its trace over $W$. Indeed, take any basis of $W$ and extend it to one of $V$; the matrix of $\phi$ with respect to this basis has zeroes for all the rows that do not correspond to the basis of $W$, so the trace may as well just be taken over $W$. Now the fact that $I$ is an ideal means that for any $x,y\in I$ the mapping $\mathrm{ad}(x)\mathrm{ad}(y)$ is an endomorphism of $L$ sending all of $L$ into $I$. Thus its trace over $I$ is the same as its trace over all of $L$, and the Killing form on $I$ applied to $x,y\in I$ is the same as the Killing form on $L$ applied to the same two elements. September 3, 2012 Posted by | Algebra, Lie Algebras | 5 Comments ## A Trace Criterion for Nilpotence We’re going to need another way of identifying nilpotent endomorphisms. 
Let $A\subseteq B\subseteq\mathfrak{gl}(V)$ be two subspaces of endomorphisms on a finite-dimensional space $V$, and let $M$ be the collection of $x\in\mathfrak{gl}(V)$ such that $\mathrm{ad}(x)$ sends $B$ into $A$. If $x\in M$ satisfies $\mathrm{Tr}(xy)=0$ for all $y\in M$ then $x$ is nilpotent. The first thing we do is take the Jordan-Chevalley decomposition of $x$ — $x=s+n$ — and fix a basis that diagonalizes $s$ with eigenvalues $a_i$. We define $E$ to be the $\mathbb{Q}$-subspace of $\mathbb{F}$ spanned by the eigenvalues. If we can prove that this space is trivial, then all the eigenvalues of $s$ must be zero, and thus $s$ itself must be zero. We proceed by showing that any linear functional $f:E\to\mathbb{Q}$ must be zero. Taking one, we define $y\in\mathfrak{gl}(V)$ to be the endomorphism whose matrix with respect to our fixed basis is diagonal: $f(a_i)\delta_{ij}$. If $\{e_{ij}\}$ is the corresponding basis of $\mathfrak{gl}(V)$ we can calculate that \displaystyle\begin{aligned}\left[\mathrm{ad}(s)\right](e_{ij})&=(a_i-a_j)e_{ij}\\\left[\mathrm{ad}(y)\right](e_{ij})&=(f(a_i)-f(a_j))e_{ij}\end{aligned} Now we can find some polynomial $r(T)$ such that $r(a_i-a_j)=f(a_i)-f(a_j)$; there is no ambiguity here since if $a_i-a_j=a_k-a_l$ then the linearity of $f$ implies that \displaystyle\begin{aligned}f(a_i)-f(a_j)&=f(a_i-a_j)\\&=f(a_k-a_l)\\&=f(a_k)-f(a_l)\end{aligned} Further, picking $i=j$ we can see that $r(0)=0$, so $r$ has no constant term. It should be apparent that $\mathrm{ad}(y)=r\left(\mathrm{ad}(s)\right)$. Now, we know that $\mathrm{ad}(s)$ is the semisimple part of $\mathrm{ad}(x)$, so the Jordan-Chevalley decomposition lets us write it as a polynomial in $\mathrm{ad}(x)$ with no constant term. But then we can write $\mathrm{ad}(y)=r\left(p\left(\mathrm{ad}(x)\right)\right)$. 
Since $\mathrm{ad}(x)$ maps $B$ into $A$, so does $\mathrm{ad}(y)$, and our hypothesis tells us that $\displaystyle\mathrm{Tr}(xy)=\sum\limits_{i=1}^{\dim V}a_if(a_i)=0$ Hitting this with $f$ we find that the sum of the squares of the $f(a_i)$ is also zero, but since these are rational numbers they must all be zero. Thus, as we asserted, the only possible $\mathbb{Q}$-linear functional on $E$ is zero, meaning that $E$ is trivial, all the eigenvalues of $s$ are zero, and $x$ is nilpotent, as asserted. August 31, 2012 ## Uses of the Jordan-Chevalley Decomposition Now that we’ve given the proof, we want to mention a few uses of the Jordan-Chevalley decomposition. First, we let $A$ be any finite-dimensional $\mathbb{F}$-algebra — associative, Lie, whatever — and remember that $\mathrm{End}_\mathbb{F}(A)$ contains the Lie algebra of derivations $\mathrm{Der}(A)$. I say that if $\delta\in\mathrm{Der}(A)$ then so are its semisimple part $\sigma$ and its nilpotent part $\nu$; it’s enough to show that $\sigma$ is. Just like we decomposed $V$ in the proof of the Jordan-Chevalley decomposition, we can break $A$ down into the eigenspaces of $\delta$ — or, equivalently, of $\sigma$. But this time we will index them by the eigenvalue: $A_a$ consists of those $x\in A$ such that $\left[\delta-aI\right]^k(x)=0$ for sufficiently large $k$. Now we have the identity: $\displaystyle\left[\delta-(a+b)I\right]^n(xy)=\sum\limits_{i=0}^n\binom{n}{i}\left[\delta-aI\right]^{n-i}(x)\left[\delta-bI\right]^i(y)$ which is easily verified. If a sufficiently large power of $\delta-aI$ applied to $x$ and a sufficiently large power of $\delta-bI$ applied to $y$ are both zero, then for sufficiently large $n$ one or the other factor in each term will be zero, and so the entire sum is zero. Thus we verify that $A_aA_b\subseteq A_{a+b}$. If we take $x\in A_a$ and $y\in A_b$ then $xy\in A_{a+b}$, and thus $\sigma(xy)=(a+b)xy$. 
On the other hand, \displaystyle\begin{aligned}\sigma(x)y+x\sigma(y)&=axy+bxy\\&=(a+b)xy\end{aligned} And thus $\sigma$ satisfies the derivation property $\displaystyle\sigma(xy)=\sigma(x)y+x\sigma(y)$ so $\sigma$ and $\nu$ are both in $\mathrm{Der}(A)$. For the other side we note that, just as the adjoint of a nilpotent endomorphism is nilpotent, the adjoint of a semisimple endomorphism is semisimple. Indeed, if $\{v_i\}_{i=0}^n$ is a basis of $V$ such that the matrix of $x$ is diagonal with eigenvalues $\{a_i\}$, then we let $e_{ij}$ be the standard basis element of $\mathfrak{gl}(n,\mathbb{F})$, which is isomorphic to $\mathfrak{gl}(V)$ using the basis $\{v_i\}$. It’s a straightforward calculation to verify that $\displaystyle\left[\mathrm{ad}(x)\right](e_{ij})=(a_i-a_j)e_{ij}$ and thus $\mathrm{ad}(x)$ is diagonal with respect to this basis. So now if $x=x_s+x_n$ is the Jordan-Chevalley decomposition of $x$, then $\mathrm{ad}(x_s)$ is semisimple and $\mathrm{ad}(x_n)$ is nilpotent. They commute, since \displaystyle\begin{aligned}\left[\mathrm{ad}(x_s),\mathrm{ad}(x_n)\right]&=\mathrm{ad}\left([x_s,x_n]\right)\\&=\mathrm{ad}(0)=0\end{aligned} Since $\mathrm{ad}(x)=\mathrm{ad}(x_s)+\mathrm{ad}(x_n)$ is the decomposition of $\mathrm{ad}(x)$ into a semisimple and a nilpotent part which commute with each other, it is the Jordan-Chevalley decomposition of $\mathrm{ad}(x)$. August 30, 2012 ## The Jordan-Chevalley Decomposition We recall that any linear endomorphism of a finite-dimensional vector space over an algebraically closed field can be put into Jordan normal form: we can find a basis such that its matrix is the sum of blocks that look like $\displaystyle\begin{pmatrix}\lambda&1&&&{0}\\&\lambda&1&&\\&&\ddots&\ddots&\\&&&\lambda&1\\{0}&&&&\lambda\end{pmatrix}$ where $\lambda$ is some eigenvalue of the transformation. 
We want a slightly more abstract version of this, and it hinges on the idea that matrices in Jordan normal form have an obvious diagonal part, and a bunch of entries just above the diagonal. This off-diagonal part is all in the upper-triangle, so it is nilpotent; the diagonalizable part we call “semisimple”. And what makes this particular decomposition special is that the two parts commute. Indeed, the block-diagonal form means we can carry out the multiplication block-by-block, and in each block one factor is a constant multiple of the identity, which clearly commutes with everything. More generally, we will have the Jordan-Chevalley decomposition of an endomorphism: any $x\in\mathrm{End}(V)$ can be written uniquely as the sum $x=x_s+x_n$, where $x_s$ is semisimple — diagonalizable — and $x_n$ is nilpotent, and where $x_s$ and $x_n$ commute with each other. Further, we will find that there are polynomials $p(T)$ and $q(T)$ — each with no constant term — such that $p(x)=x_s$ and $q(x)=x_n$. And thus we will find that any endomorphism that commutes with $x$ will also commute with both $x_s$ and $x_n$. Finally, if $A\subseteq B\subseteq V$ is any pair of subspaces such that $x:B\to A$ then the same is true of both $x_s$ and $x_n$. We will prove these next time, but let’s see that this is actually true of the Jordan normal form. The first part we’ve covered. For the second, set aside the assertion about $p$ and $q$; any endomorphism commuting with $x$ either multiplies each block by a constant or shuffles similar blocks, and both of these operations commute with both $x_s$ and $x_n$. For the last part, we may as well assume that $B=V$, since otherwise we can just restrict to $x\vert_B\in\mathrm{End}(B)$. If $\mathrm{Im}(x)\subseteq A$ then the Jordan normal form shows us that any complementary subspace to $A$ must be spanned by blocks with eigenvalue $0$. In particular, it can only touch the last row of any such block. 
But none of these rows are in the range of either the diagonal or off-diagonal portions of the matrix. August 28, 2012 Posted by | Algebra, Linear Algebra | 3 Comments ## Flags We’d like to have matrix-oriented versions of Engel’s theorem and Lie’s theorem, and to do that we’ll need flags. I’ve actually referred to flags long, long ago, but we’d better go through them now. In its simplest form, a flag is simply a strictly-increasing sequence of subspaces $\{V_k\}_{k=0}^n$ of a given finite-dimensional vector space. And we almost always say that a flag starts with $V_0=0$ and ends with $V_n=V$. In the middle we have some other subspaces, each one strictly including the one below it. We say that a flag is “complete” if $\dim(V_k)=k$ — and thus $n=\dim(V)$ — and for our current purposes all flags will be complete unless otherwise mentioned. The useful thing about flags is that they’re a little more general and “geometric” than ordered bases. Indeed, given an ordered basis $\{e_k\}_{k=1}^n$ we have a flag on $V$: define $V_k$ to be the span of $\{e_i\}_{i=1}^k$. As a partial converse, given any (complete) flag we can come up with a not-at-all-unique basis: at each step let $e_k$ be the preimage in $V_k$ of some nonzero vector in the one-dimensional space $V_k/V_{k-1}$. We say that an endomorphism of $V$ “stabilizes” a flag if it sends each $V_k$ back into itself. In fact, we saw something like this in the proof of Lie’s theorem: we build a complete flag on the subspace $W_n$, building the subspace up one basis element at a time, and then showed that each $k\in K$ stabilized that flag. More generally, we say a collection of endomorphisms stabilizes a flag if all the endomorphisms in the collection do. So, what do Lie’s and Engel’s theorems tell us about flags? Well, Lie’s theorem tells us that if $L\subseteq\mathfrak{gl}(V)$ is solvable then it stabilizes some flag in $V$. 
Equivalently, there is some basis with respect to which the matrices of all elements of $L$ are upper-triangular. In other words, $L$ is isomorphic to some subalgebra of $\mathfrak{t}(\dim(V),\mathbb{F})$. We see that not only is $\mathfrak{t}(n,\mathbb{F})$ solvable, it is in a sense the archetypal solvable Lie algebra. The proof is straightforward: Lie’s theorem tells us that $L$ has a common eigenvector $v_1\in V$. We let this span the one-dimensional subspace $V_1$ and consider the action of $L$ on the quotient $W_1=V/V_1$. Since we know that the image of $L$ in $\mathfrak{gl}(W_1)$ will again be solvable, we get a common eigenvector $w_2\in W_1$. Choosing a pre-image $v_2\in V$ with $w_2=v_2+\mathbb{F}v_1$ we get our second basis vector. We can continue like this, building up a basis of $V$ such that at each step we can write $l(v_k)\in\lambda_k(l)v_k+V_{k-1}$ for all $l\in L$ and some $\lambda_k\in L^*$. For nilpotent $L$, the same is true — of course, nilpotent Lie algebras are automatically solvable — but Engel’s theorem tells us more: the functional $\lambda$ must be zero, and the diagonal entries of the above matrices are all zero. We conclude that any nilpotent $L$ is isomorphic to some subalgebra of $\mathfrak{n}(\dim(V),\mathbb{F})$. That is, not only is $\mathfrak{n}(n,\mathbb{F})$ nilpotent, it is the archetype of all nilpotent Lie algebras in just the same way as $\mathfrak{t}(n,\mathbb{F})$ is the archetypal solvable Lie algebra. More generally, if $L$ is any solvable (nilpotent) Lie algebra and $\phi:L\to\mathfrak{gl}(V)$ is any finite-dimensional representation of $L$, then we know that the image $\phi(L)$ is a solvable (nilpotent) linear Lie algebra acting on $V$, and thus it must stabilize some flag of $V$. 
As a particular example, consider the adjoint action $\mathrm{ad}:L\to\mathfrak{gl}(L)$; a subspace of $L$ invariant under the adjoint action of $L$ is just the same thing as an ideal of $L$, so we find that there must be some chain of ideals: $\displaystyle 0=I_0\subseteq I_1\subseteq\dots\subseteq I_{n-1}\subseteq I_n=L$ where $\dim(I_k)=k$. Given such a chain, we can of course find a basis of $L$ with respect to which the matrices of the adjoint action are all in $\mathfrak{t}(\dim(L),\mathbb{F})$ ($\mathfrak{n}(\dim(L),\mathbb{F})$). In either case, we find that $[L,L]$ is nilpotent. Indeed, if $L$ is already nilpotent this is trivial. But if $L$ is merely solvable, we see that the matrices of the commutators $[\mathrm{ad}(x),\mathrm{ad}(y)]$ for $x,y\in L$ lie in $\displaystyle [\mathfrak{t}(\dim(L),\mathbb{F}),\mathfrak{t}(\dim(L),\mathbb{F})]=\mathfrak{n}(\dim(L),\mathbb{F})$ But since $\mathrm{ad}$ is a homomorphism, this is the matrix of $\mathrm{ad}([x,y])$ acting on $L$, and obviously its action on the subalgebra $[L,L]$ is nilpotent as well. Thus each element of $[L,L]$ is ad-nilpotent, and Engel’s theorem then tells us that $[L,L]$ is a nilpotent Lie algebra. ## Lie’s Theorem The lemma leading to Engel’s theorem boils down to the assertion that there is some common eigenvector for all the endomorphisms in a nilpotent linear Lie algebra $L\subseteq\mathfrak{gl}(V)$ on a finite-dimensional nonzero vector space $V$. Lie’s theorem says that the same is true of solvable linear Lie algebras. Of course, in the nilpotent case the only possible eigenvalue was zero, so we may find things a little more complicated now. We will, however, have to assume that $\mathbb{F}$ is algebraically closed and that no multiple of the unit in $\mathbb{F}$ is zero. 
We will proceed by induction on the dimension of $L$ using the same four basic steps as in the lemma: find an ideal $K\subseteq L$ of codimension one, so we can write $L=K+\mathbb{F}z$ for some $z\in L\setminus K$; find common eigenvectors for $K$; find a subspace of such common eigenvectors stabilized by $L$; find in that space an eigenvector for $z$. First, solvability says that $L$ properly includes $[L,L]$, or else the derived series wouldn’t be able to even start heading towards $0$. The quotient $L/[L,L]$ must be abelian, with all brackets zero, so we can pick any subspace of this quotient with codimension one and it will be an ideal. The preimage of this subspace under the quotient projection will then be an ideal $K\subseteq L$ of codimension one. Now, $K$ is a subalgebra of $L$, so we know it’s also solvable, so induction tells us that there’s a common eigenvector $v\in V$ for the action of $K$. If $K$ is zero, then $L$ must be one-dimensional abelian, in which case the proof is obvious. Otherwise there is some linear functional $\lambda\in K^*$ defined by $\displaystyle k(v)=\lambda(k)v$ Of course, $v$ is not the only such eigenvector; we define the (nonzero) subspace $W$ by $\displaystyle W=\{w\in V\vert\forall k\in K, k(w)=\lambda(k)w\}$ Next we must show that $L$ sends $W$ back into itself. To see this, pick $l\in L$ and $k\in K$ and check that \displaystyle\begin{aligned}k(l(w))&=l(k(w))-[l,k](w)\\&=l(\lambda(k)w)-\lambda([l,k])w\\&=\lambda(k)l(w)-\lambda([l,k])w\end{aligned} But if $l(w)\in W$, then we’d have $k(l(w))=\lambda(k)l(w)$; we need to verify that $\lambda([l,k])=0$. In the nilpotent case — Engel’s theorem — the functional $\lambda$ was constantly zero, so this was easy, but it’s a bit harder here. Fixing $w\in W$ and $l\in L$, we pick $n$ to be the first index where the collection $\{l^i(w)\}_{i=0}^n$ is linearly dependent — the first one where we can express $l^n(w)$ as a linear combination of all the previous $l^i(w)$. 
If we write $W_i$ for the subspace spanned by the first $i$ of these vectors, then the dimension of $W_i$ grows one-by-one until we get to $\dim(W_n)=n$, and $W_{n+i}=W_n$ from then on. I say that each of the $W_i$ are invariant under each $k\in K$. Indeed, we can prove the congruence $\displaystyle k(l^i(w))\equiv\lambda(k)l^i(w)\quad\mod W_i$ that is, $k$ acts on $l^i(w)$ by multiplication by $\lambda(k)$, plus some “lower-order terms”. For $i=0$ this is the definition of $\lambda$; in general we have \displaystyle\begin{aligned}k(l^i(w))&=k(l(l^{i-1}(w)))\\&=l(k(l^{i-1}(w)))-[l,k](l^{i-1}(w))\\&=\lambda(k)l^i(w)+l(w')-\lambda([l,k])l^{i-1}(w)-w''\end{aligned} for some $w',w''\in W_{i-1}$. And so we conclude that, using the obvious basis of $W_n$, the action of $k$ on this subspace is in the form of an upper-triangular matrix with $\lambda(k)$ down the diagonal. The trace of this matrix is $n\lambda(k)$. And in particular, the trace of the action of $[l,k]$ on $W_n$ is $n\lambda([l,k])$. But $l$ and $k$ both act as endomorphisms of $W_n$ — the one by design and the other by the above proof — and the trace of any commutator is zero! Since $n$ must have an inverse we conclude that $\lambda([l,k])=0$. Okay, so the action of $L$ does indeed send $W$ back into itself. We finish up by picking some eigenvector $v_0\in W$ of $z$, which we know must exist because we’re working over an algebraically closed field. Incidentally, we can then extend $\lambda$ to all of $L$ by using $z(v_0)=\lambda(z)v_0$. August 25, 2012 Posted by | Algebra, Lie Algebras | 1 Comment
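As a quick numeric illustration of a fact used above, namely that the bracket of two upper-triangular matrices is strictly upper-triangular (and hence nilpotent), here is a sketch with random real matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random upper-triangular matrices, i.e. elements of t(n, R)
n = 4
a = np.triu(rng.standard_normal((n, n)))
b = np.triu(rng.standard_normal((n, n)))

# Their bracket lies in n(n, R): the diagonals of ab and ba agree,
# so the commutator is strictly upper-triangular...
c = a @ b - b @ a
assert np.allclose(np.tril(c), 0)   # zero on and below the diagonal

# ...and therefore nilpotent: c^n = 0
assert np.allclose(np.linalg.matrix_power(c, n), 0)
```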
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumental_Analysis/Microscopy/Dynamic_Light_Scattering
Dynamic Light Scattering The correlation function for a system experiencing Brownian motion $$G(t)$$ decays exponentially with decay constant $$\Gamma$$. $G(t)=e^{-\Gamma t}$ The decay constant $$\Gamma$$ is related to the diffusivity $$D$$ of the particle by $\Gamma=Dq^{2}$ where the magnitude of the scattering vector is $q=\frac{4\pi n}{\lambda}\sin\left(\frac{\Theta}{2}\right)$ with $$n$$ the refractive index of the medium, $$\lambda$$ the wavelength of the incident light, and $$\Theta$$ the scattering angle.
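As a worked example, here is a short Python sketch computing $q$ and $\Gamma$ for some illustrative values (a He-Ne laser in water at a 90° scattering angle, with an assumed particle diffusivity; none of these numbers come from the text). It takes $\Gamma = Dq^2$, the sign convention under which $G(t)=e^{-\Gamma t}$ decays:

```python
import math

# Illustrative values; these numbers are assumptions, not from the text.
lam = 633e-9          # He-Ne laser wavelength in vacuum, m
n = 1.33              # refractive index of water
theta = math.pi / 2   # scattering angle, 90 degrees
D = 4.0e-12           # particle diffusivity, m^2/s (assumed)

# Magnitude of the scattering vector
q = 4 * math.pi * n / lam * math.sin(theta / 2)

# Decay constant of G(t) = exp(-Gamma * t), taking Gamma = D * q^2
gamma = D * q ** 2
tau = 1 / gamma       # characteristic decay time of the correlation function

print(f"q = {q:.3e} 1/m, Gamma = {gamma:.0f} 1/s, tau = {tau * 1e3:.2f} ms")
```

Larger particles diffuse more slowly (smaller $D$), so the correlation function decays more slowly; this is how DLS sizes particles.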
https://aliquote.org/post/poisson-approximations/
# aliquote ## < a quantity that can be divided into another a whole number of time /> We know from basic statistics textbooks that the distribution of a Binomial random variate (with probability of success $p$) can be approximated using a Poisson distribution (of parameter $\lambda = np$), provided certain conditions are met (usually, small $p$ and large $n$).1 An easy-to-remember application is that the sequence of $\text{Bin}(n,\frac{1}{n})$ distributions converges in law to the Poisson distribution with mean 1. We can see $\text{Bin}(n,\frac{1}{n})$ either as the sum of $n$ independent Bernoulli trials with small probability of success, dependent on $n$, or as the count of the total number of occurrences among $n$ independent rare events. It turns out that the latter has many useful applications. Here are two illustrations, taken from DasGupta.2 In the matching problem, cards are drawn one at a time from a well-shuffled deck containing $N$ cards, and a match occurs if the card bearing the number $j$ is drawn at precisely the $j$-th draw from the deck. Let $S_N$ be the total number of matches. We will need a little theorem, which happens to be useful when we want to prove that a Poisson limit is still applicable for the sum of dependent Bernoulli trials Theorem: For $N\ge 1$, let $X_i$, $i=1,2,\dots,n$, $n=n(N)$, be a triangular array of Bernoulli random variables, and let $A_i$ denote the event for which $X_i=1$. For a given $k$, let $M_k$ be the $k$-th binomial moment of $S_n$; i.e., $M_k=\sum_{j=k}^n{j\choose k}P(S_n=j)$. If there exists $0<\lambda<\infty$ such that, for every fixed $k$, $M_k\rightarrow \frac{\lambda^k}{k!}$ as $N\rightarrow\infty$, then $S_n \rightarrow_{\mathcal{L}}\text{Poi}(\lambda)$. In the matching problem, the binomial moment $M_k$ can be shown to be $M_k = {N\choose k}\frac{1}{N(N-1)\dots (N-k+1)}$. 
Using Stirling’s approximation, for every fixed $k$, $M_k\rightarrow\frac{1}{k!}$; in other words, the total number of matches converges to a Poisson distribution with mean 1 as the deck size $N\rightarrow\infty$. Convergence is very fast. See also More about the matching problem.3 In the birthday problem, we are interested in the probability that two randomly chosen persons were born the same day. More formally, suppose each person in a group of $n$ people has, mutually independently, a probability $\frac{1}{N}$ of being born on any given day of a year with $N$ calendar days. Let $S_n$ be the total number of pairs of people $(i, j)$ such that they have the same birthday. Then $P(S_n > 0)$ is the probability that there is at least one pair of people in the group who share the same birthday. It turns out that if $n$ and $N$ are related by $n^2=2N\lambda+o(N)$, for some $0<\lambda <\infty$, then $S_n\rightarrow_\mathcal{L}\text{Poi}(\lambda)$. If $N=365$, $n=30$, then $S_n\approx\text{Poi}(1.233)$. 1. Le Cam’s theorem on total variation is also useful. It states, in part, that $d_\text{TV}\left(\text{Bin}(n,\lambda/n),\text{Poi}(\lambda)\right)\le 8\lambda/n$. Further discussion can be found on maths.SE. ↩︎ 2. DasGupta, A. Asymptotic theory of statistics and probability. Springer, 2008. ↩︎ 3. Interestingly, it is possible to derive a recurrence relation for the PDF of $S_n$: (proof here) $$\begin{array}{rcl} \Pr(S_1 = 1) &=& 1\cr \Pr(S_n = k) &=& (k+1)\Pr(S_{n+1} = k+1)\quad \text{for}\: k \in \{0,1,\dots,n\} \end{array}$$ This allows one to obtain the probability density function of $S_n$ recursively for any $n$. ↩︎
https://www.sarthaks.com/2750362/the-average-consecutive-numbers-is-30-find-the-sum-between-the-smallest-and-largest-number?show=2750363
# The average of 5 consecutive numbers is 30. Find the sum of the smallest and largest numbers. in Aptitude closed The average of 5 consecutive numbers is 30. Find the sum of the smallest and largest numbers. 1. 40 2. 60 3. 58 4. 52 by (53.1k points) selected Correct Answer - Option 2 : 60 Given Average of 5 consecutive numbers = 30 Formula Used Average = Sum of observations/Number of observations Calculation Let the 5 consecutive numbers be x, (x + 1), (x + 2), (x + 3) and (x + 4) So, [x + (x + 1) + (x + 2) + (x + 3) + (x + 4)]/5 = 30 ⇒ 5x + 10 = 150 ⇒ 5x = 140 ⇒ x = 140/5 = 28 Smallest number = x = 28 Largest number = x + 4 = 28 + 4 = 32 Sum = (x) + (x + 4) = 28 + 32 = 60 ∴ 60 is the sum of the smallest and largest numbers
https://artofproblemsolving.com/wiki/index.php?title=2012_AMC_12B_Problems/Problem_24&direction=prev&oldid=50546
# 2012 AMC 12B Problems/Problem 24 ## Problem 24 Define the function on the positive integers by setting and if is the prime factorization of , then For every , let . For how many in the range is the sequence unbounded? Note: A sequence of positive numbers is unbounded if for every integer , there is a member of the sequence greater than . ## Solution First of all, notice that for any odd prime , the largest prime that divides is no larger than , therefore eventually the factorization of does not contain any prime larger than . Also, note that , when it stays the same but when it grows indefinitely. Therefore any number that is divisible by or any number such that is divisible by makes the sequence unbounded. There are multiples of within . also works: . Now let's look at the other cases. Any first power of prime in a prime factorization will not contribute the unboundedness because . At least a square of prime is to contribute. So we test primes that are less than : works, therefore any number that are divisible by works: there are of them. could also work if satisfies , but . does not work. works. There are no other multiples of within . could also work if , but already. For number that are only divisible by , they don't work because none of these primes are such that could be a multiple of nor a multiple of . In conclusion, there are number of 's ... .
https://tfetimes.com/c-euler-method/
# C++: Euler Method Posted in C++ Euler’s method numerically approximates solutions of first-order ordinary differential equations (ODEs) with a given initial value. It is an explicit method for solving initial value problems (IVPs), as described in the Wikipedia page. The ODE has to be provided in the following form: $\frac{dy(t)}{dt} = f(t,y(t))$ with an initial value y(t0) = y0 To get a numeric solution, we replace the derivative on the LHS with a finite difference approximation: $\frac{dy(t)}{dt} \approx \frac{y(t+h)-y(t)}{h}$ then solve for y(t + h): $y(t+h) \approx y(t) + h \, \frac{dy(t)}{dt}$ which is the same as $y(t+h) \approx y(t) + h \, f(t,y(t))$ The iterative solution rule is then: $y_{n+1} = y_n + h \, f(t_n, y_n)$ h is the step size, the most relevant parameter for accuracy of the solution. A smaller step size increases accuracy but also the computation cost, so it always has to be hand-picked according to the problem at hand. Example: Newton’s Cooling Law Newton’s cooling law describes how an object of initial temperature T(t0) = T0 cools down in an environment of temperature TR: $\frac{dT(t)}{dt} = -k \, \Delta T$ or $\frac{dT(t)}{dt} = -k \, (T(t) - T_R)$ It says that the cooling rate $\frac{dT(t)}{dt}$ of the object is proportional to the current temperature difference ΔT = (T(t) − TR) to the surrounding environment. The analytical solution, which we will compare to the numerical approximation, is $T(t) = T_R + (T_0 - T_R) \; e^{-k t}$ The task is to implement a routine for Euler’s method and then use it to solve the given example of Newton’s cooling law for three different step sizes of 2 s, 5 s and 10 s, and to compare with the analytical solution. The initial temperature T0 shall be 100 °C, the room temperature TR 20 °C, and the cooling constant k 0.07. The time interval to calculate shall be from 0 s to 100 s. 
#include <iomanip> #include <iostream> typedef double F(double,double); /* Approximates y(t) in y'(t)=f(t,y) with y(a)=y0 and t=a..b and the step size h. */ void euler(F f, double y0, double a, double b, double h) { double y = y0; for (double t = a; t < b; t += h) { std::cout << std::fixed << std::setprecision(3) << t << " " << y << "\n"; y += h * f(t, y); } std::cout << "done\n"; } // Example: Newton's cooling law. The first (unused) parameter is the time t; // the second is the current temperature T. double newtonCoolingLaw(double, double T) { return -0.07 * (T - 20); } int main() { euler(newtonCoolingLaw, 100, 0, 100, 2); euler(newtonCoolingLaw, 100, 0, 100, 5); euler(newtonCoolingLaw, 100, 0, 100, 10); } Last part of output: ... 0.000 100.000 10.000 44.000 20.000 27.200 30.000 22.160 40.000 20.648 50.000 20.194 60.000 20.058 70.000 20.017 80.000 20.005 90.000 20.002 done SOURCE Content is available under GNU Free Documentation License 1.2.
http://nrich.maths.org/4322/solution
### Golden Thoughts Rectangle PQRS has X and Y on the edges. Triangles PQY, YRX and XSP have equal areas. Prove X and Y divide the sides of PQRS in the golden ratio. ### From All Corners Straight lines are drawn from each corner of a square to the mid points of the opposite sides. Express the area of the octagon that is formed at the centre as a fraction of the area of the square. ### Star Gazing Find the ratio of the outer shaded area to the inner area for a six pointed star and an eight pointed star. # Triangle in a Triangle ##### Stage: 4 Challenge Level: On the diagram the points that divide each of the sides into equal thirds can be marked. The lines connecting these points to the nearby vertex of the yellow triangle can also be drawn. This gives the following diagram: In this diagram, $AE=EG=GC$, $AH=HF=FB$ and $CD=DI=IB$, since the points trisect the sides. Now, triangle $AHE$ is an enlargement of $ABC$ by scale factor $\tfrac{1}{3}$, as $AE=\tfrac{1}{3}AC$, $AH=\tfrac{1}{3}AB$ and $\angle EAH = \angle CAB$. This means the area of $EAH$ is $\left( \tfrac{1}{3} \right)^2 = \tfrac{1}{9}$ of the area of $ABC$. Since $HF = AH$, $EAH$ and $EHF$ have the same base length and the same perpendicular height (that of $E$ above $AB$), they have the same area: $\tfrac {1}{9}$ of the total area of $ABC$. This process can be repeated at vertices $B$ and $C$, so each of the six orange triangles all have area $\tfrac{1}{9}$ of that of $ABC$. Therefore the yellow area is $1-6\times \tfrac{1}{9} = \tfrac{1}{3}$ of the area of the whole triangle. Steven, from Sunderland College, sent us a solution which used the sine rule for area instead. You can see his solution here.
https://repository.uantwerpen.be/link/irua/101840
Publication Title Single-file diffusion in periodic energy landscapes : the role of hydrodynamic interactions Author Abstract We report on the dynamical properties of interacting colloids confined to one dimension and subjected to external periodic energy landscapes. We particularly focus on the influence of hydrodynamic interactions on the mean-square displacement. Using Brownian dynamics simulations, we study colloidal systems with two types of repulsive interparticle interactions, namely, Yukawa and superparamagnetic potentials. We find that in the homogeneous case, hydrodynamic interactions lead to an enhancement of the particle mobility and the mean-square displacement at long times scales as $t^{\alpha}$, with $\alpha = 1/2 + \epsilon$ and $\epsilon$ being a small correction. This correction, however, becomes much more important in the presence of an external field, which breaks the homogeneity of the particle distribution along the line and, therefore, promotes a richer dynamical scenario due to the hydrodynamical coupling among particles. We provide here the complete dynamical scenario in terms of the external potential parameters: amplitude and commensurability. Language English Source (journal) Physical review : E : statistical, nonlinear, and soft matter physics / American Physical Society. - Melville, N.Y., 2001 - 2015 Publication Melville, N.Y. : American Physical Society, 2012 ISSN 1539-3755 [print] 1550-2376 [online] Volume/pages 86:3Part 1(2012), 10 p. Article Reference 031123 ISI 000308873500002 Medium E-only publicatie Full text (Publisher's DOI) Full text (open access) UAntwerpen Faculty/Department Research group Publication type Subject Affiliation Publications with a UAntwerp address
http://dataspace.princeton.edu/jspui/handle/88435/dsp01cj82k976w
Title: Star/Galaxy Separation in Hyper Suprime-Cam and Mapping the Milky Way with Star Counts Authors: Garmilla, Jose Antonio Advisors: Strauss, Michael; Lupton, Robert Contributors: Astrophysical Sciences Department Keywords: Classification; Galaxy; HSC; Separation; Star; XD Subjects: Astronomy; Astrophysics Issue Date: 2016 Publisher: Princeton, NJ : Princeton University Abstract: We study the problem of separating stars and galaxies in the Hyper Suprime-Cam (HSC) multi-band imaging data at high galactic latitudes. We show that the current separation technique implemented in the HSC pipeline is unable to produce samples of stars with $i \gtrsim 24$ without a significant contamination from galaxies ($\gtrsim 50\%$). We study various methods for measuring extendedness in HSC with simulated and real data and find that there are a number of available techniques that give nearly optimal results; the extendedness measure HSC is currently using is among these. We develop a star/galaxy separation method for HSC based on the Extreme Deconvolution (XD) algorithm that uses colors and extendedness simultaneously, and show that with it we can generate samples of faint stars keeping contamination from galaxies under control to $i \leq 25$. We apply our star/galaxy separation method to carry out a preliminary study of the structure of the Milky Way (MW) with main sequence (MS) stars using photometric parallax relations derived for the HSC photometric system. We show that it will be possible to generate a tomography of the MW stellar halo to galactocentric radii $\sim 100 \textrm{ kpc}$ with $\sim 10^6$ MS stars in the HSC Wide layer once the survey has been completed. We report two potential detections of the Sagittarius tidal stream with MS stars in the XMM and GAMA15 fields at $\approx 20 \textrm{ kpc}$ and $\approx 40 \textrm{ kpc}$ respectively. 
URI: http://arks.princeton.edu/ark:/88435/dsp01cj82k976w Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu Type of Material: Academic dissertations (Ph.D.) Language: en Appears in Collections: Astrophysical Sciences
https://www.physicsforums.com/threads/calculating-3db-frequency.154049/
Homework Help: Calculating 3dB frequency. 1. Jan 31, 2007 vg19 1. The problem statement, all variables and given/known data Using your results, calculate the 3dB frequency of the RC filter. 3. The attempt at a solution Basically in our lab, we made a simple series RC low pass filter. We put in a 16V peak to peak signal at varying frequencies below, and measured the output voltage at the capacitor (again peak to peak) on the scope. Now if I want to calc the 3dB frequency from my results, would I plot this on a graph and look for the 3dB point? I have tried this, but I think I am going wrong somewhere. The second table is what I'm trying to plot. The first column is just the log of the frequency. The second column is 10log(Vout/Vin). It just doesn't seem right. fin (Hz) Vo (V) 50 13.8 100 11.4 150 9.6 200 8.2 250 7.2 300 6.4 1.698970004 -0.749974923 2.000000000 -1.579527115 2.176091259 -2.326214759 3.301029996 -3.010299957 2.397940009 -3.665315444 2.477121255 -4.087127349 sorry for the formatting! 2. Feb 1, 2007 AlephZero It looks OK to me, except log 200 is not 3.301029996 3. Feb 1, 2007 Staff: Mentor I don't understand why you are taking the log of the frequency, but whatever. Keep in mind that the 3dB concept in this context is for the voltage transfer function plot (not power), so you should use 20log(), not 10log(). The 3dB point is basically where the output voltage amplitude is down by SQRT(2) compared to the input amplitude. I see one frequency on your first list that is darned close to this number.... 4. Feb 1, 2007 doodle Firstly, you don't really have to log the frequencies. Secondly, 3dB below 16V is 16 times 10^(-3/20) which is 11.327V. Looking at the table of figures you obtained, I would say that this is roughly around the 100Hz mark.
https://www.physicsforums.com/threads/harmonic-oscillator.40870/
# Harmonic oscillator 1. Aug 28, 2004 ### kurious If the universe oscillates between a Big Bang and a Big Crunch, can two small volumes of dark energy, at opposite ends of it, be considered to be undergoing simple harmonic oscillation? The potential energy of an oscillator could be given by G m1 m2 / r, where m1 is the mass of a volume of dark energy, the mass of the universe is 10^52 kg, and r = 10^26 metres - the current size of the universe. Since the PE of a simple harmonic oscillator is given by PE = 1/2 k x^2, the force constant k becomes 10^-37 m2, assuming r is also about equal to the maximum extension of the oscillator. Using frequency of oscillator = ( k / m2 )^1/2, frequency = ( 10^-37 m2 / m2 )^1/2 = 10^-18.5 per second. In other words the universe could oscillate every 10^18.5 seconds - about its current age!! Quantizing this oscillator gives its potential energy changing in units of 10^-52 joules. Could this be the energy of gravitons turning into dark energy as the universe expands and the potential energy of the oscillator increases?
https://www.physicsforums.com/threads/k_e3-semileptonic-decay.173237/
# K_e3 semileptonic decay 1. Jun 8, 2007 ### bgy 1. The problem statement, all variables and given/known data I have to calculate the decay rate of the following process: K^{+} ---> pion^{0} + e^{+} + neutrino 2. Relevant equations The differential decay rate for this process is 2(p*p_neutrino)(p*p_electron) - (p_neutrino*p_electron)(p^2) * diracdelta(p_kaon - p_pion - p_electron - p_neutrino) * [d^3p_pion/E_pion]*[d^3p_electron/E_electron]*[d^3p_neutrino/E_neutrino], where p = p_kaon + p_pion 3. The attempt at a solution I think it would be better to calculate in the rest frame of the kaon... or not? So, the exercise is: I have to integrate this rate over the final states of the particles, but how does that go? Thanks
http://www.chegg.com/homework-help/questions-and-answers/sun-sets-fully-disappearing-horizon-lie-beach-eyes-10-rm-cm-sand-immediately-jump-eyes-170-q2027509
The Sun sets, fully disappearing over the horizon as you lie on the beach, your eyes 10 cm above the sand. You immediately jump up, your eyes now 170 cm above the sand, and you can again see the top of the Sun. If you count the number of seconds (= t) until the Sun fully disappears again, you can estimate the radius of the Earth. Use the known radius of the Earth to calculate the time t.
http://www.flyingcoloursmaths.co.uk/blog/page/30/
# Incremental SMC-based CNF control strategy considering magnetic ball suspension and inverted pendulum systems through cuckoo search-genetic optimization algorithm

• H. Ebrahimi Mollabashi • A. H. Mazinan

Open Access Original Article

## Abstract

A kind of incremental sliding mode control (SMC) approach in connection with the well-known composite nonlinear feedback (CNF) control strategy is considered in this research to deal with the nonlinear magnetic ball suspension and inverted pendulum systems. The incremental SMC approach is proposed to handle these underactuated systems, which have fewer actuators than degrees of freedom. Based on the outcomes of the investigation presented here, a small overshoot and a short settling time of the system response are achieved. The proposed CNF control strategy comprises two parts: the first term assures the stability of the closed-loop nonlinear system and provides a fast convergence response; the second term reduces its overshoot. The genetic-cuckoo hybrid algorithm is designed to minimize tracking errors for the purpose of finding the most suitable sliding surface coefficients. Finally, the finite time stability of the closed-loop system is proved theoretically.

## Keywords

Incremental sliding mode control approach Composite nonlinear feedback control approach Cuckoo search-genetic optimization algorithm Magnetic ball suspension system Inverted pendulum system Finite time stability

## Introduction

System uncertainty, or mismatch, is considered one of the most important challenges in the area of nonlinear systems. The uncertainty can appear in the system parameters or in the external disturbances that act on the system. One of the popular approaches for dealing with uncertainties is the SMC strategy [1].
The SMC has produced acceptable results since the 1970s, and comprises two parts: in the first part, stable (sliding) surfaces are designed; in the second part, the control law is designed so that the closed-loop trajectory converges to the sliding surfaces in finite time. The obvious feature of the SMC is the rapid system response, which leads to high overshoot. These characteristics are in conflict; therefore, a tradeoff should be considered. The CNF is an efficient and simple approach employed to improve transient performance (small overshoot and acceptable settling time) and to overcome the contradiction in achieving these transient specifications simultaneously. The CNF strategy is relatively new and consists of a linear and a nonlinear part. The linear part stabilizes the closed-loop system and yields a fast response. The nonlinear part changes the damping ratio and decreases the steady-state error, according to the definition of the nonlinear function and the settling time response. Recently, several studies have built on the CNF approach for the purpose of improving closed-loop performance [2, 3, 4, 5]. In [6], the CNF method is applied to synchronize master/slave nonlinear systems with time-varying delays in chaotic systems with nonlinearities. In [7], for a particular type of vehicle suspension, a CNF with a boundary layer is used to reduce the chattering phenomenon; a proportional-integral controller and an intelligent algorithm are then used to improve the error and the optimization. Combining the CNF strategy with intelligent algorithms has shown acceptable results in recent years.
In [8, 9, 10], the nonlinear level tank and electromagnetic suspension systems have been described by a Takagi–Sugeno (T–S) model, and the stability of the closed-loop system has been proved by the CNF strategy with parallel distributed compensation and LMIs. In [11], the combination of the CNF with the SMC has been applied to a class of nonlinear systems. To the best of our knowledge, only a few investigations have applied the CNF approach to underactuated nonlinear systems. The tracking and regulation problem for practical systems has evolved considerably over the past decade. This paper proposes an SMC-based CNF approach for tracking control of a nonlinear magnetic ball suspension system and stabilization of an inverted pendulum system. The objective in a magnetic ball suspension system is to levitate a mass in space, without physical contact, by magnetic forces. It is widely used in magnetic trains, accelerometers, etc. [12]. These systems are highly nonlinear and unstable in open loop; their stabilization and tracking are therefore engineering challenges. Several methods have been proposed to design suitable controllers for linear and nonlinear variants of the magnetic ball suspension system. In addition, the investigation of underactuated systems has expanded rapidly in recent years. Underactuated systems are characterized by having fewer actuators than degrees of freedom to be controlled. The inverted pendulum is an example of an underactuated system with two degrees of freedom [13]. In this system, the pendulum should be kept upright while the cart remains at the center of the rail, and it must be possible to control both the cart position and the pendulum angle with a single control input. In fact, this model is a single-input, multiple-output (SIMO) system.
In this paper, the idea of the CNF controller is extended to the inverted pendulum system and the nonlinear magnetic ball suspension system through the SMC and the GC algorithm [14, 15, 16, 17]. The cuckoo search (CS) is a global random interactive search algorithm inspired by nature. It combines the brood-parasitic behavior of a particular species of cuckoo birds with Lévy flight behavior [18, 19, 20, 21, 22]. Cuckoo search is applied because it is a simple, fast and efficient algorithm that uses only a single search parameter. Avoiding the difficulties of the genetic algorithm and providing global results are the main advantages of the cuckoo search algorithm; it does not become trapped in local optima and yields proper coefficients for the sliding surfaces. Finally, it is theoretically proved that the trajectory of the closed-loop system converges to the sliding surface in finite time in both cases. The rest of the paper is organized as follows: in the next section, the formulation and preliminaries of the incremental SMC-based CNF strategy are studied, and the genetic-cuckoo (GC) algorithm is introduced to minimize tracking errors for the purpose of finding suitable sliding surface coefficients. In the following section, the main results of this research, including the stability of the closed-loop system for the magnetic ball suspension and inverted pendulum systems, are presented. In the section before the conclusion, the simulation results are carried out, and finally, in the last section, concluding remarks are provided.

## The formulation and preliminary

The formulation and preliminaries of the CNF in connection with the SMC strategy, with application to magnetic ball suspension system tracking and stabilization as well as to the inverted pendulum system, are now presented.
The SMC is designed to stabilize the closed-loop system and provides fast response convergence, but with a high overshoot and a long settling time; the main objective of the CNF is to reduce the settling time and eliminate the overshoot associated with the fast SMC response. The cuckoo search algorithm is then used to tune the parameters toward the optimum condition. To address the overshoot caused by the rapid SMC response, the overall controller is proposed as the combination of the SMC and the CNF approach: \begin{aligned} &U_{T} = u_{\text{SMC}} + u_{\text{CNF}} , \hfill \\ &u_{\text{SMC}} = u_{\text{eq}} + u_{\text{sw}} , \hfill \\ \end{aligned} (1) where $$u_{\text{CNF}}$$ and $$u_{\text{eq}}$$ are the CNF controller and the equivalent law for the state variables, respectively. The equivalent law is not enough to guarantee rapid convergence of the state variables to the sliding surface. $$u_{\text{sw}}$$ is the switching control law for the sliding surface, which provides a smooth control signal to remove chattering. Figure 1 illustrates the total control structure.

### The SMC-based CNF approach for the magnetic ball suspension system

This section proposes the SMC-based CNF approach to deal with the nonlinear magnetic ball suspension system. Magnetic suspension systems are levitation systems in which the main control target is keeping the ball at the desired point, at a certain distance from the core, without any physical contact. Figure 2 shows the magnetic suspension system, which includes a ferromagnetic ball, a sensor for position detection of the ball, an actuator, and a current controller.
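As a rough illustration of the combined law in Eq. (1), the sketch below assembles the total control signal from an equivalent term, a smooth switching term, and a CNF correction. The function names (`sat`, `total_control`) and the scalar setting are illustrative assumptions, not part of the paper:

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| <= phi, +/-1 outside.
    Using sat(s/phi) in place of sgn(s) smooths the switching action."""
    return float(np.clip(s / phi, -1.0, 1.0))

def total_control(u_eq, s, k, psi_s, phi):
    """Sketch of Eq. (1): U_T = u_SMC + u_CNF with u_SMC = u_eq + u_sw.
    u_sw = -k*sat(s/phi) drives the state toward the surface; the CNF
    term -psi(s)*sat(s/phi) reshapes the damping to curb overshoot."""
    u_sw = -k * sat(s, phi)      # smooth switching law
    u_cnf = -psi_s * sat(s, phi)  # CNF correction term
    return u_eq + u_sw + u_cnf
```

The boundary-layer width `phi` trades chattering suppression against tracking precision, as discussed later for Eq. (5).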
Consider the following magnetic ball suspension system: \begin{aligned}& \dot{x}_{1} = x_{2} \hfill \\ & \dot{x}_{2} = g - \frac{(1/L)^{2} u^{2} }{m\,d(x_{1} )} \hfill \\ &d(x_{1} ) = a_{4} x^{4} + a_{3} x^{3} + a_{2} x^{2} + a_{1} x + a_{0} , \hfill \\ \end{aligned} (2) where $$x$$ and $$u$$ are the state vector and the control input, respectively. d(x1), obtained in the laboratory, is a polynomial in x1 which captures the relation between the coil current and the position of the ball. m is the mass of the ball, g is the gravitational acceleration, and L is the coil inductance. x1 and x2 are the ball position and velocity, respectively. The equivalent control law is obtained from the time derivative of the sliding surface. The sliding surface is defined by the following equation, in which E, c and s denote the error, the sliding surface coefficients, and the sliding surface, respectively: \begin{aligned} &s = c\,E ,\quad E = x_{d} - x \hfill \\ & s = \left[ \begin{array}{cc} c_{1} & c_{2} \end{array} \right]\left[ \begin{array}{c} x_{1d} - x_{1} \\ x_{2d} - x_{2} \end{array} \right] \to s = c_{1} (x_{1d} - x_{1} ) + c_{2} (x_{2d} - x_{2} ) \hfill \\ &\dot{s} = 0 \to c_{1} \dot{x}_{1d} - c_{1} \dot{x}_{1} + c_{2} \dot{x}_{2d} - c_{2} \dot{x}_{2} = 0 \hfill \\ & - c_{1} x_{2} - c_{2} \left(g - \frac{G^{2} u^{2} }{m\,d(x_{1} )}\right) + c_{1} \dot{x}_{1d} + c_{2} \dot{x}_{2d} = 0 \hfill \\ & u_{\text{eq}} = \frac{\sqrt{m\,d(x_{1} )\left[g - \frac{c_{1} }{c_{2} }(\dot{x}_{1d} - x_{2} ) - \dot{x}_{2d} \right]}}{G}. \hfill \\ \end{aligned} (3) The equivalent law alone does not guarantee rapid convergence of the state variables to the sliding surface; to reach and remain on the surface, $$u_{\text{sw}}$$ is defined as follows: $$u_{\text{sw}} = g - \frac{c_{1} }{c_{2} }(x_{2d} - x_{2} ) - \dot{x}_{2d} .$$ (4) The sliding surface coefficients (ci) can be computed by the GC algorithm. Finally, the total control signal is defined as follows, where the CNF strategy is applied to Eq. (1) and $$\psi (s)$$ is an arbitrary semi-positive function: \begin{aligned}& \dot{s} = - c_{2} k\,{\text{sat}}(s/\varphi ) - c_{2} \psi (s)\,{\text{sat}}(s/\varphi ) \hfill \\& u_{T} = \frac{\sqrt{m\,d(x_{1} )\left(u_{s} - (k + \psi (s))\,{\text{sat}}(s/\varphi )\right)}}{G}. \hfill \\ \end{aligned} (5) It should be noted that the $$\psi (s)$$ function increases the degrees of freedom of the control law [23]. Thus, in this case, the CNF-based SMC approach is realized.

### The incremental SMC-based CNF strategy for the inverted pendulum

The inverted pendulum is one of the popular laboratory models for teaching underactuated systems, as shown in Fig. 3. Underactuated systems cannot track an arbitrary trajectory at every operating point, for various reasons. A common difficulty in controlling underactuated systems is the mismatch between the number of degrees of freedom and the number of actuators. For these systems, designing a conventional sliding mode surface is not appropriate, because the parameters of the sliding surface cannot be obtained directly from the Hurwitz condition [13]. Therefore, the incremental SMC based on CNF is proposed in this paper.
The general form of an underactuated system is presented as follows: \begin{aligned} &\dot{x}_{2n - 1} = x_{2n} \hfill \\ & \dot{x}_{2n} = f_{n} (X) + b_{n} (X)u \hfill \\ \end{aligned} (6) where $$X = \left[ {x_{1} ,x_{2} , \ldots x_{2n} } \right]^{T}$$ is the state vector, u is the system input, and fn and bn are bounded nominal functions. For the inverted pendulum, fi and bi are defined as follows [12]: \begin{aligned} & f_{1} = \frac{{mL\dot{\theta }^{2} \sin \theta + mg\sin \theta \cos \theta }}{{M + m\sin^{2} \theta }}; \quad b_{1} = \frac{1}{{M + m\sin^{2} \theta }} \hfill \\ &f_{2} = - \frac{{(m + M)g\sin \theta + mL\dot{\theta }^{2} \sin \theta \cos \theta }}{{(M + m\sin^{2} \theta )L}}; \quad b_{2} = - \frac{\cos \theta }{{(M + m\sin^{2} \theta )L}}. \hfill \\ \end{aligned} (7) The main advantage of the incremental SMC-based CNF approach is that it collects all the sliding surfaces on the final surface. In fact, the problems of dividing the system into several subsystems, controlling a high-order SMC, and determining the coefficients with Hurwitz polynomials almost disappear. The first surface is defined as follows: \begin{aligned} &s_{1} = c_{1} x_{1} + c_{2} x_{2} ,\quad \dot{s}_{1} = 0 \to \hfill \\ &0 = c_{1} \dot{x}_{1} + c_{2} \dot{x}_{2} \to c_{1} x_{2} + c_{2} (f_{1} + b_{1} u) = 0 \to \hfill \\ & u_{{{\text{eq}}(1)}} = - \frac{{c_{2} f_{1} + c_{1} x_{2} }}{{c_{2} b_{1} }}. \hfill \\ \end{aligned} (8) For the state variables of the i-th subsystem, the sliding mode surface is defined as follows: \begin{aligned} &s_{2} = c_{3} x_{3} + s_{1} , \hfill \\ &s_{i} = c_{i + 1} x_{i + 1} + s_{i - 1} . \hfill \\ \end{aligned} (9) The total equivalent control law is obtained from $$\dot{s}_{i} = 0$$.
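The stacking of surfaces in Eqs. (8)–(9) can be sketched in a few lines; the helper name and the 0-based indexing are illustrative assumptions (the paper indexes from 1):

```python
def incremental_surfaces(x, c):
    """Sketch of Eqs. (8)-(9): s1 = c1*x1 + c2*x2, then each new surface
    stacks one more state onto the previous one: s_i = c_{i+1}*x_{i+1} + s_{i-1}.
    `x` and `c` are plain sequences (0-indexed here)."""
    s = [c[0] * x[0] + c[1] * x[1]]   # first surface over the first two states
    for i in range(2, len(x)):
        s.append(c[i] * x[i] + s[-1])  # fold the next state into the last surface
    return s
```

Because each surface contains all its predecessors, driving the final surface to zero drives the whole stack to zero, which is the point of the incremental construction.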
The $$u_{\text{sw}}$$ with the CNF strategy is defined as follows: \begin{aligned} & \dot{s}_{i} = \sum\limits_{j = 1}^{m} {c_{2j - 1} x_{2j} + \sum\limits_{j = 1}^{m} {c_{2j} } } (f_{j} + b_{j} u + d_{j} )\,;\quad m = \left\{ {\begin{array}{ll} {(i + 1)/2} & {i\ {\text{is odd}}} \\ {i/2} & {i\ {\text{is even}}} \\ \end{array} } \right. \hfill \\ & u_{{{\text{sliding}}(i)}} = - \frac{{\sum\nolimits_{j = 1}^{m} {c_{2j - 1} x_{2j} + \sum\nolimits_{j = 1}^{m} {c_{2j} } } f_{j} }}{{\sum\nolimits_{j = 1}^{m} {c_{2j} b_{j} } }} \hfill \\ & u_{{{\text{sw}}(i)}} = \left\{ {\begin{array}{ll} 0 & {i = 1} \\ {\sum\limits_{j = 1}^{i} {\eta_{j} \text{sgn} (s_{j} )/{\text{den}}(i)} } & {i > 1} \\ \end{array} } \right. \hfill \\ & {\text{den}}(i) = c_{2} b_{1} + \sum\limits_{j = 2}^{m} {c_{2j} (b_{j} + \psi (s_{2j - 1} ))\,\text{sgn} (s_{2j - 1} )} \hfill \\ \end{aligned} (10)

### The nonlinear $$\psi \,(s)$$ function in the CNF

The method for selecting the nonlinear $$\psi (s)$$ function is given in [8, 30, 31, 32]. An arbitrary choice of the $$\psi (s)$$ function leads to an acceptable response. The main purpose of adding this nonlinear function to the control law is to improve the settling time and reduce the tracking error. The function must be selected so that it provides the following behavior: when the system state variables are far from the desired value, the contribution of the nonlinear term is diminished, so its effect on the control law is very limited; when the state variables approach the desired value, the nonlinear term grows, so the nonlinear part of the control law becomes effective. The nonlinear $$\psi (s)$$ function is defined as an exponential function: $$\psi (s) = - \beta e^{ - \alpha \parallel s\parallel } ,$$ (11) where $$\alpha$$ and $$\beta$$ are two positive parameters designed by the GC algorithm.
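A minimal sketch of the exponential weight in Eq. (11); the function name is hypothetical and a scalar `abs` stands in for the norm $\parallel s\parallel$:

```python
import math

def psi(s, alpha, beta):
    """Sketch of Eq. (11): psi(s) = -beta * exp(-alpha * ||s||), alpha, beta > 0.
    Far from the surface the weight is near zero, so the SMC part dominates;
    near the surface it tends to -beta, adding damping to curb overshoot."""
    return -beta * math.exp(-alpha * abs(s))
```

Larger `alpha` makes the CNF term switch on more abruptly near the surface; `beta` sets how much damping it injects there.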
According to Eq. (11), when s is large, $$\psi (s)$$ is small, and vice versa.

### The optimization

In the control law, there are three constants: c1 and c2 are the sliding surface coefficients and k is the switching-law gain. The nonlinear $$\psi (s)$$ function contains two parameters ($$\alpha$$, $$\beta$$). The limits of these coefficients can be determined from the stability equation, but setting the parameters to reach the optimum condition is very time consuming. Therefore, by defining the cost function as the following equation and using the GC algorithm, suitable coefficients with the least error are obtained [24, 25, 26]: $$j = \sum\limits_{i = 0}^{n} {(x_{i}^{T} rx_{i} + } u_{i}^{T} qu_{i} ),$$ (12) where j, x and u are the cost function, the state variables, and the control input, respectively. r and q are identity matrices. The main characteristic of the genetic algorithm is the simultaneous evaluation of several solutions. The cuckoo search algorithm is a global random interactive search algorithm inspired by nature. It combines the brood-parasitic behavior of a particular species of cuckoo birds with Lévy flights. This species of cuckoo selects newly built nests and removes the host's eggs, which increases the probability of its own offspring hatching; its eggs are therefore placed in the nests of host birds. On the other hand, some host birds are able to fight this parasitic behavior and throw out foreign eggs, or abandon the nest and build a new one elsewhere. The reproduction process of cuckoo birds is described by three simple rules [27, 28, 29]. 1. Each cuckoo lays one egg at a time and places it in a randomly selected nest. 2. High-quality nests are selected for re-laying. 3.
The number of host nests is constant, and a host identifies a foreign egg with a certain probability. The motivation behind the hybrid CS-GA algorithm is to combine the benefits of both cuckoo search and the genetic algorithm. The GC algorithm is summarized as follows: 1. Initialization: the generation counter is set to $$t = 1$$, and the primary population is produced based on the cuckoo algorithm. 2. Population update: until the termination condition is met, a new population is generated; the cost function is evaluated on the basis of Lévy flights for each population.

## The main results

In this section, the finite time stability of the magnetic ball suspension and inverted pendulum systems is proved.

### The magnetic ball suspension system stability

For the system stability, the Lyapunov function is defined as $$V = \frac{1}{2}\parallel S\parallel$$, which is a positive function. By Lyapunov stability, if $$\dot{V} < - \eta ,\,\,\,\eta > 0$$, then finite time stability is established, and each state variable on the sliding surface will move to zero in finite time [11, 30].
$$\dot{V} = \frac{1}{\parallel s\parallel }s\,\dot{s} = \frac{1}{\parallel s\parallel }s\,c\,\dot{E} = \frac{1}{\parallel s\parallel }s\,[c_{1} (\dot{x}_{1d} - \dot{x}_{1} ) + c_{2} (\dot{x}_{2d} - \dot{x}_{2} )]$$ (13) By applying the magnetic ball suspension system model and the SMC-CNF approach to (13), $$\dot{V}$$ is obtained as follows: \begin{aligned} &= \frac{1}{\parallel s\parallel }s\left[c_{1} \dot{x}_{1d} - c_{1} x_{2} + c_{2} \dot{x}_{2d} - c_{2} \left(g - \frac{G^{2} u^{2} }{m\,d(x_{1} )}\right)\right] \hfill \\ &= \frac{1}{\parallel s\parallel }s\left[c_{1} \dot{x}_{1d} - c_{1} x_{2} + c_{2} \dot{x}_{2d} - c_{2} \left(g - \left(u_{s} - k\,{\text{sat}}(s/\varphi )\right)\right)\right] \hfill \\ & = \frac{1}{\parallel s\parallel }s\left[c_{1} \dot{x}_{1d} - c_{1} x_{2} + c_{2} \dot{x}_{2d} - c_{2} g + c_{2} \left(g - \frac{c_{1} }{c_{2} }(x_{1d} - x_{2} ) - \dot{x}_{2d} - (k + \psi (s))\,{\text{sat}}(s/\varphi )\right)\right], \hfill \\ \end{aligned} where $$(\dot{x}_{2d} - \dot{x}_{2} ) = - \frac{c_{1} }{c_{2} }(x_{2d} - x_{2} )$$; if $$\frac{c_{1} }{c_{2} } > 0$$, then the stability condition is as follows: $$\dot{V} < - \frac{c_{2} (k + \psi (s))}{\parallel s\parallel }s\,{\text{sat}}(s/\varphi ).$$ (14) If $$\varphi$$ is selected large enough, then $${\text{sat}}(s/\varphi ) \cong \text{sgn} (s) = \frac{s}{\parallel s\parallel }$$ and the following is obtained: \begin{aligned} \dot{V} &< - (k + \psi (s))\,c_{2}\,\frac{s^{2} }{\parallel s\parallel^{2} } = - k\,c_{2} - c_{2}\,\psi (s) \hfill \\ &< - k\,c_{2} < 0. \hfill \\ \end{aligned} (15) Given that the time derivative of V is less than a negative constant, V tends to zero asymptotically. To calculate the convergence time T, it suffices to integrate Eq. (15): $$\int_{0}^{T} {\dot{V}(t)\,{\text{d}}t} \le - \int_{0}^{T} {k\,c_{2}\,{\text{d}}t} \to V(T) - V(0) \le - k\,c_{2}\,T .$$ (16) Since $$V(T) = 0$$: $$T \le \frac{V(0)}{c_{2}\,k}.$$ (17) Equation (17) shows that $$\dot{V}$$ is negative and ensures that the system is stable in finite time.

### The stability analysis

To analyze the stability of underactuated systems, the Lyapunov function is considered as follows: \begin{aligned} & V_{2n - 1} = \frac{1}{2\parallel s_{2n - 1} \parallel }s_{2n - 1}^{2} \hfill \\ & \to \dot{V}_{2n - 1} = \frac{s_{2n - 1}\,\dot{s}_{2n - 1} }{\parallel s_{2n - 1} \parallel } = \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }(c_{2n - 1} \dot{x}_{2n} + \dot{s}_{2n - 2} ) \hfill \\& = \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }\left( c_{2n - 1} [f_{n} + b_{n} u] + c_{2n - 2} x_{2n} + c_{2n - 3} [f_{n - 1} + b_{n - 1} u] + \ldots + c_{1} x_{2} + f_{1} + b_{1} u \right) \hfill \\ &= \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }\left\{ \sum\limits_{i = 2}^{n} {(c_{2i - 1} f_{i} + c_{2i - 2} x_{2i} )} + (f_{1} + c_{1} x_{2} ) + \left[\sum\limits_{i = 2}^{n} {(c_{2i - 1} b_{i} )} + b_{1} \right]u\right\} \hfill \\ \end{aligned} (18) By applying the total controller in Eq.
(1), the relationships will be as follows: \begin{aligned} \dot{V}_{2n - 1} &= \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }\left\{ \sum\limits_{i = 2}^{n} {(c_{2i - 1} f_{i} + c_{2i - 2} x_{2i} )} + (f_{1} + c_{1} x_{2} ) + \left[\sum\limits_{i = 2}^{n} {(c_{2i - 1} b_{i} )} + b_{1} \right](u_{\text{eq}} + u_{\text{sw}} )\right\} \hfill \\ &= \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }\left\{ \sum\limits_{i = 2}^{n} {(c_{2i - 1} f_{i} + c_{2i - 2} x_{2i} )} + (f_{1} + c_{1} x_{2} ) + \left[\sum\limits_{i = 2}^{n} {(c_{2i - 1} b_{i} )} + b_{1} \right]u_{\text{eq}} + \left[\sum\limits_{i = 2}^{n} {(c_{2i - 1} b_{i} )} + b_{1} \right]u_{\text{sw}} \right\} . \hfill \\ \end{aligned} (19) Considering the values of $$u_{\text{sw}}$$ and $$u_{\text{sliding}}$$ and the assumptions $$\eta ,k > 0$$: \begin{aligned} \dot{V}_{2n - 1} &= - \frac{s_{2n - 1} }{\parallel s_{2n - 1} \parallel }(\eta + \psi (s_{2j - 1} ))\,{\text{sgn}}(s_{2n - 1} ) - k\,s_{2n - 1}^{2} \\ & = - \eta - \psi (s_{2j - 1} ) - k\,s_{2n - 1}^{2} \le - \eta . \end{aligned} (20) As a result, the closed-loop system has finite time stability.

## The simulation results

In this section, two examples illustrate the advantages of the proposed control strategy. In the first example, the SMC based on CNF is applied to the magnetic ball suspension system. In the second example, the proposed controller designed in Eq. (10) is employed to stabilize the inverted pendulum. The magnetic ball suspension system parameters are introduced in Table 1. The state variables obtained by MATLAB simulation are illustrated in Figs. 4, 5, 6, and 7; Fig. 4 shows tracking of the path $$x_{d1} = 0.06 + 0.015{ \sin }(0.7\pi t)$$, an arbitrary position reference. The d(x1) coefficients and the current–position correlation are obtained experimentally as in Table 2.
Table 1 The constant parameters in the magnetic ball suspension system

| Parameter | Magnitude |
|---|---|
| Coil resistance (R) | 52 Ω |
| Coil inductance (L) | 1.227 H |
| Ball mass (m) | 16.5 g |
| Initial distance from the core | 50 mm |

Table 2 Relative displacement and required current

| x1, ball position (mm) | i, coil current (A) |
|---|---|
| 30 | 0.114 |
| 40 | 0.236 |
| 50 | 0.376 |
| 60 | 0.523 |
| 70 | 0.746 |

$$f(i,x_{1} ) = \frac{i^{2} }{d(x_{1} )},$$

\begin{aligned} & \dot{x}_{1} = x_{2} \hfill \\ & \dot{x}_{2} = g - \frac{(1/L)^{2} u^{2} }{m\,d(x_{1} )} \hfill \\ & d(x_{1} ) = a_{4} x^{4} + a_{3} x^{3} + a_{2} x^{2} + a_{1} x + a_{0} \hfill \\ \end{aligned}

$$a_{4} = - 176896.25,\quad a_{3} = 84793.69,\quad a_{2} = 7685.55,\quad a_{1} = 284.79,\quad a_{0} = - 3.7$$

$$u_{T} = \frac{\sqrt{m\,d(x_{1} )\left(u_{s} - (k + \psi (s))\,{\text{sat}}(s/\varphi )\right)}}{G},$$

$$s = c_{1} (x_{1d} - x_{1} ) + c_{2} (x_{2d} - x_{2} ),$$

$$\psi (s) = - \beta e^{ - \alpha \parallel s\parallel } .$$

In Fig. 5, using the GC algorithm, the most suitable compromise between the coil input and the displacement of the ball is investigated, which indicates that the tracking error has been reduced significantly.

$$a_{4} = - 1225.25,\quad a_{3} = 3150,\quad a_{2} = 7720,\quad a_{1} = 522,\quad a_{0} = - 4.2$$

Figure 6 shows that the SMC-CNF control is not sensitive to changes in the system parameters, because there is no significant change in the system response even with a tenfold mass. Figure 7 shows the control signal input, which is smooth.
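The laboratory fit d(x1) of Eq. (2) can be evaluated with Horner's rule; the coefficients below are the first set given above, and the units of x1 are assumed to match those of the laboratory fit (the text does not state them explicitly):

```python
# Coefficients a0..a4 of the laboratory fit d(x1) from the simulation section
A = [-3.7, 284.79, 7685.55, 84793.69, -176896.25]

def d(x1, a=A):
    """Quartic polynomial d(x1) of Eq. (2) relating coil current to ball
    position, evaluated with Horner's rule."""
    result = 0.0
    for ak in reversed(a):   # a4, a3, ..., a0
        result = result * x1 + ak
    return result
```

A quick sanity check: d must be positive over the operating range, since it sits under the square root in the control law $u_T$.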
Assuming the initial conditions below and determining the coefficients by the GC algorithm, the simulation results for the inverted pendulum are shown in Figs. 8 through 11. The inverted pendulum system parameters are introduced in Table 3.

Table 3 The constant parameters in the inverted pendulum system

| Parameter | Magnitude |
|---|---|
| Pendulum mass (m) | 1 kg |
| Cart mass (M) | 1 kg |
| Friction of the cart | 0.1 N/m/s |
| Length of the pendulum (l) | 0.1 m |
| Inertia of the pendulum (I) | 0.006 kg·m² |
| Gravity (g) | 9.8 m/s² |

\begin{aligned} x_{1} & = 0,\; x_{2} = 0,\; x_{3} = \pi /3,\; x_{4} = 0 \\ c & = \left[ \begin{array}{cccccccc} - 0.3643 & - 0.7448 & 3.9157 & 0.7355 & - 0.4195 & 0.0412 & 0.7938 & 5.1554 \end{array} \right] \\ s_{1} & = c_{1} x_{1} + c_{2} x_{2} \\ s_{2} & = c_{3} x_{3} + s_{1} \\ s_{3} & = c_{4} x_{4} + s_{2} \\ \end{aligned}

As can be seen, the sliding surfaces converge to zero very fast. By applying the $$U_T$$ controller of Eq. (1) to the inverted pendulum model, Eqs. (6) and (7), the cart and pendulum positions are obtained. The SMC strategy alone stabilizes the closed-loop system but produces a high overshoot and a long settling time; with the CNF-SMC, the settling time is reduced and the overshoot is eliminated. The closed-loop inverted pendulum state variables are illustrated in Figs. 9 and 10; the proposed approach effectively stabilizes the closed-loop system and improves the transient performance. The overshoot and settling time of the closed-loop states in Table 4 reveal that the proposed approach provides favorable transient performance. Finally, Fig. 11 illustrates the control signal. Comparison of the results indicates that the control effort of the proposed approach is smaller and smoother than that of the SMC, with no chattering.
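Plugging the listed initial state into the surface definitions gives a quick numeric check. Which four of the eight listed coefficients feed $s_1$–$s_3$ is not stated in the text; the first four are assumed here for illustration:

```python
import math

# Assumed surface coefficients c1..c4 (first four of the listed vector)
c = [-0.3643, -0.7448, 3.9157, 0.7355]
# Initial state: cart position, cart velocity, pendulum angle, angular velocity
x0 = [0.0, 0.0, math.pi / 3, 0.0]

s1 = c[0] * x0[0] + c[1] * x0[1]   # s1 = c1*x1 + c2*x2
s2 = c[2] * x0[2] + s1             # s2 = c3*x3 + s1
s3 = c[3] * x0[3] + s2             # s3 = c4*x4 + s2
```

At this initial condition only the pendulum-angle term contributes, so the final surface starts at roughly $3.9157 \cdot \pi/3 \approx 4.1$ and the controller must drive it to zero.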
Table 4 Transient response performance

| | Settling time (CNF-SMC) | Over/undershoot (CNF-SMC) | Settling time (SMC) | Over/undershoot (SMC) |
|---|---|---|---|---|
| Cart position | 19 | 2.1 | 18.2 | 2.43 |
| Pendulum position | 2.2 | 0.24 | 5.3 | 0.41 |

## Conclusion

In the investigation presented here, a kind of incremental SMC-based CNF strategy is designed for the magnetic ball suspension and inverted pendulum systems. The selection of all the tuning parameters of the SMC-based CNF strategy is cast as a minimization problem and solved automatically by the GC algorithm. Lyapunov stability theory is used to prove the finite time closed-loop stability of the magnetic ball suspension system and of the inverted pendulum system. With the proposed control approach, the convergence of the state variables to the sliding surfaces and the equilibrium points in finite time is guaranteed. The main advantage of the proposed approach is that the controller shows no sensitivity to changes in the system parameters, such as the ball mass, or to sensor inaccuracy in determining the ball position during tracking. The simulation results illustrate that adding the CNF approach improves the transient performance of the closed-loop system. Also, by applying the incremental SMC-based CNF strategy to the inverted pendulum system, the state variables converge to their equilibrium points with acceptable overshoot and settling time. Using other control techniques, such as fuzzy-based solutions or, in general, intelligent control approaches instead of the SMC, can be a new route to CNF control of nonlinear systems. Applying the CNF strategy to singular systems and to hybrid systems is another suggestion for future research.

## References

1. Liu L, Pu J, Song X, Fu Z, Wang X (2014) Adaptive sliding mode control of uncertain chaotic systems with input nonlinearity. Nonlinear Dyn 76(4):1857–1865. 2.
Zheng Z, Sun W, Chen H, Yeow JTW (2014) Integral sliding mode based optimal composite nonlinear feedback control for a class of systems. Control Theory Technol 12(2):139–146. 3. 3. Wang J, Zhao J (2016) On improving transient performance in tracking control for switched systems with input saturation via composite nonlinear feedback. Int J Robust Nonlinear Control 26(3):509–518. 4. 4. Mobayen S, Majd VJ, Sojoodi M (2012) An LMI-based composite nonlinear feedback terminal sliding-mode controller design for disturbed MIMO systems. Math Comput Simul 85:1–10. 5. 5. Huang Y, Cheng G (2015) A robust composite nonlinear control scheme for servomotor speed regulation. Int J Control 88(1):104–112. 6. 6. Mobayen S, Tchier F (2017) Composite nonlinear feedback control technique for master/slave synchronization of nonlinear systems. J Nonlinear Dyn 87(3):1731–1747. 7. 7. Yahaya M, Shahdan Sudin S, Ramli L, Khairi M, Ghazali R (2015) A reduce chattering problem using composite nonlinear feedback and proportional integral sliding mode control. In: IEEE international control conference Asian 10th (ASCC), pp. 1–6. 8. 8. Ebrahimi Mollabashi H, Mazinan AH, Hamidi H (2018) Takagi–Sugeno fuzzy-based CNF control approach considering a class of constrained nonlinear systems. IETE J Res (TIJR). 9. 9. Vrkalovic S, Teban T-A, Borlea I-D (2017) Stable Takagi–Sugeno fuzzy control designed by optimization. Int J Artif Intell 15(2):17–29Google Scholar 10. 10. Sanchez MA, Castillo O, Castro JR (2015) Information granule formation via the concept of uncertainty-based information with Interval Type-2 fuzzy sets representation and Takagi Sugeno–Kang consequents optimized with cuckoo search. J Appl Soft Comput 27(C):602–609. 11. 11. Ebrahimi Mollabashi H, Mazinan AH (2018) Adaptive composite non-linear feedback-based sliding mode control for non-linear systems. Inst Eng Technol (IET) 54(16):973–974. 12. 12. 
Ebrahimi Mollabashi H, Rajabpoor M, Rastegarpour S (2013) Inverted pendulum control with pole assignment, LQR and multiple layers sliding mode control. J Basic Appl Sci Res 3(1):363–368Google Scholar 13. 13. Ebrahimi H, Shahmansoorian A, Rastegarpour S, Mazinan AH (2013) New approach to control of ball and beam system and optimization with a genetic algorithm. Life Sci J 10(5s):415–421Google Scholar 14. 14. Gonzalez CI, Melin P, Castro JR, Castillo O, Mendoza O (2015) Optimization of interval type-2 fuzzy systems for image edge detection. J Appl Soft Comput 47:631–643. 15. 15. Olivas F, Amador L, Perez J, Caraveo C, Valdez F, Castillo O (2017) Comparative study of type-2 fuzzy particle swarm, bee colony and bat algorithms in optimization of fuzzy controllers. Algorithms 10(3):101–109. 16. 16. Beatriz G, Fevrier V, Patricia M, German P (2015) Fuzzy logic in the gravitational search algorithm for the optimization of modular neural networks in pattern recognition. Expert Syst Appl 42(14):5839–5847. 17. 17. Rodríguez L, Castillo O, Soria J, Melin P, Valdez F, Gonzalez CI, Martinez GE, Soto J (2017) A fuzzy hierarchical operator in the grey wolf optimizer algorithm. Appl Soft Comput 57:315–328. 18. 18. Yang X-S, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24(1):169–174. 19. 19. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and genetic algorithm for reliability–redundancy allocation problems. Comput Ind Eng 66:1115–1124. 20. 20. Olivas F, Valdez F, Castillo O, Gonzalez CI, Martinez G, Melin P (2017) Ant colony optimization with dynamic parameter adaptation based on interval type-2 fuzzy logic systems. Appl Soft Comput 53:74–87. 21. 21. Saadat J, Moallem P, Koofigar H (2017) Training echo state neural network using harmony search algorithm. Int J Artif Intell 15(1):163–179. 22. 22. 
Valdez F, Melin P, Castillo O (2014) Modular neural networks architecture optimization with a new nature-inspired method using a fuzzy combination of particle swarm optimization and genetic algorithms. Inf Sci 270:143–153. 23. 23. Lin D, Lan W (2015) Output feedback composite nonlinear feedback control for singular systems with input saturation. J Frankl Inst 352(1):384–398. 24. 24. Precup R-E, David R-C, Petriu EM (2017) Grey wolf optimizer algorithm-based tuning of fuzzy control systems with reduced parametric sensitivity. IEEE Trans Ind Electron 64(1):527–534. 25. 25. Cervantes L, Castillo O, Hidalgo D, Martinez R (2018) Fuzzy dynamic adaptation of gap generation and mutation in genetic optimization of type 2 fuzzy controllers. Adv Oper Res. 26. 26. Pazooki M, Mazinan AH (2018) Hybrid fuzzy-based sliding-mode control approach, optimized by genetic algorithm for quadrotor unmanned aerial vehicles. Complex Intell Syst 4(2):79–93. 27. 27. Guerrero M, Castillo O, García M (2015) Fuzzy dynamic parameters adaptation in the Cuckoo Search Algorithm using fuzzy logic. IEEE Congr Evol Comput (CEC). 28. 28. Sanchez MA, Castillo O, Castro JR (2015) Generalized type-2 fuzzy Systems for controlling a mobile robot and a performance comparison with interval type-2 and type-1 fuzzy systems. Int J Expert Syst Appl 42(14):5904–5914. 29. 29. Marcek D (2018) Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics. Complex Intell Syst 2(2):95–104. 30. 30. He Y, Chen BM, Wu C (2005) Composite nonlinear control with state and measurement feedback for general multivariable systems with input saturation. Syst Control Lett 54:455–469. 31. 31. Naz N, Malik MB, Salman M (2013) Real-time implementation of feedback linearizing controllers for magnetic levitation system. In: IEEE conference on systems, process and control (ICSPC), pp 52–55Google Scholar 32. 32. 
Mobayen S (2014) Design of CNF-based nonlinear integral sliding surface for matched uncertain linear systems with multiple state-delays. Nonlinear Dyn 77(3):1047–1054.
## SS4 The George Polya Prize

4:45 PM-5:45 PM
Chair: John Guckenheimer, President, SIAM; and Cornell University
Room: Convocation Hall

The Polya Prize, established in 1969, is awarded in 1998 for notable contributions in an area of interest to George Polya. The prize is broadly intended to recognize specific recent work. The 1998 recipients are Percy Deift, Xin Zhou, and Peter Sarnak. Of the three winners, Xin Zhou and Peter Sarnak will each give a presentation.

Steepest Descent Method for Riemann-Hilbert Problems in Pure and Applied Mathematics

A surprisingly large variety of problems in mathematics can be formulated in terms of a Riemann-Hilbert Problem (RHP). Typically, such RHPs contain oscillatory multipliers $e^{ix\theta(z)}$, and the basic analytic issue is to compute explicitly, with classical error bounds, the asymptotics of the solution of the RHP as the (external) parameter $x$ becomes large. We describe a steepest-descent type method introduced by Deift and Zhou to analyze such RHPs as $x\to\infty$. This leads in turn to the solution of a large variety of asymptotic problems in pure and applied mathematics.

Zeros of Zeta Functions and Symmetry

The high zeros of a function like the Riemann zeta function, or the low zeros of families of such functions, apparently obey distribution laws associated with eigenvalues of matrices in large classical groups. The speaker will give a brief review of what is known about this phenomenon and its cause.

4:45 Introduction and Presentation of Award, John Guckenheimer, President, SIAM; and Cornell University
4:55 Steepest Descent Method for Riemann-Hilbert Problems in Pure and Applied Mathematics, Xin Zhou, Duke University
5:20 Zeros of Zeta Functions and Symmetry, Peter Sarnak, Princeton University

LMH Created: 3/16/98; MMD Updated: 6/15/98
Residuals in Poisson regression

Zuur's 2013 Beginner's Guide to GLM & GLMM suggests validating a Poisson regression by plotting Pearson residuals against fitted values. Zuur states we shouldn't see the residuals fanning out as fitted values increase, like the attached (hand-drawn) plot. But I thought a key characteristic of the Poisson distribution is that the variance increases as the mean increases. So shouldn't we expect to see increasing variation in the residuals as fitted values increase?

As a result, ordinary raw residuals ($r_i = y_i - \hat\mu_i$) should indeed have a spread that increases with the fitted values (though not in proportion). However, Pearson residuals are residuals divided by the square root of the variance according to the model ($r^P_i = \frac{y_i - \hat\mu_i}{\sqrt{\hat\mu_i}}$ for a Poisson model). This means that if the model is correct, the Pearson residuals should have constant spread.

• The conditional distribution of the response may be different at each combination of predictors. Hence the use of the subscript on the mean; $\mu_i$ is the population mean (and thereby also the population variance) for observation $i$, given its predictor values (the values of its IVs). Feb 13, 2020 at 3:36
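The constant-spread point is easy to check by simulation. The sketch below (plain NumPy, no model fitting — the true means stand in for the fitted values, which is an idealization) draws Poisson counts at a small and a large mean and compares the spread of raw versus Pearson residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
mu_low, mu_high = 2.0, 50.0

y_low = rng.poisson(mu_low, n)
y_high = rng.poisson(mu_high, n)

# Raw residuals: spread grows with the mean (Var = mu for a Poisson)
raw_low = y_low - mu_low
raw_high = y_high - mu_high

# Pearson residuals: divide by the model standard deviation -> constant spread
pearson_low = raw_low / np.sqrt(mu_low)
pearson_high = raw_high / np.sqrt(mu_high)

print(raw_low.std(), raw_high.std())          # roughly sqrt(2) vs sqrt(50)
print(pearson_low.std(), pearson_high.std())  # both roughly 1
```

The raw-residual spread grows by a factor of about $\sqrt{50/2} = 5$, while both sets of Pearson residuals have standard deviation near 1, which is exactly why a residual plot of Pearson residuals should not fan out when the model is correct.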
# Slope Standard Error

In simple linear regression, the estimated coefficient $b_1$ is the slope of the fitted line, and its standard error measures the accuracy of estimates and predictions based on that slope. The standard error of the regression is

$$s = \sqrt{\frac{\sum_i (y_i - \hat{y}_i)^2}{n-2}}$$

with $n - 2$ degrees of freedom, because two parameters (the slope and the intercept) were estimated in order to compute the sum of squared residuals. The standard error of the slope is then

$$SE(b_1) = \frac{s}{\sqrt{\sum_i (x_i - \bar{x})^2}}$$

Notice that it is inversely proportional to the square root of the sample size: more data yields a systematic reduction in the standard error of the slope, while $s$ itself merely becomes a more accurate estimate of the standard deviation of the noise. Y values that lie close to the regression line (Graph A) give a more accurate slope estimate than widely scattered values (Graph B).

To test a hypothesis about the slope, compute the test statistic $t = b_1 / SE(b_1)$, which has a Student's t-distribution with $n - 2$ degrees of freedom, and assess the associated probability with a t-distribution calculator. For example, a test statistic of 2.29 with 99 degrees of freedom gives $P(t > 2.29) = 0.0121$ and $P(t < -2.29) = 0.0121$.

In matrix form, writing $Y \sim N_n(X\beta, \sigma^2 I)$, the unknown noise variance $\sigma^2$ is replaced with its estimate $\widehat{\sigma}^2$ (obtained from the residuals $Y - X\widehat{\beta}$), and the variance of the slope estimate is $\text{var}(\widehat{\beta}_1) \approx \left[\widehat{\sigma}^2 (X^{\top}X)^{-1}\right]_{22}$, the second diagonal element, since the slope is the second coefficient after the intercept.

In Excel, the LINEST function (reached through Insert→Function) returns the slope and intercept together with their standard errors, and the Regression tool in the Analysis ToolPak (enable it by checking the Analysis ToolPak item in the Add-Ins dialog box) reports the same quantities. Sometimes it is appropriate to force the regression line to pass through the origin (regression without the intercept term), because $x$ and $y$ are assumed to be proportional.
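The formulas above can be checked in a few lines of NumPy. This sketch fits a least-squares line to a small dataset (the numbers are made up for the example) and computes the standard error of the slope and its t statistic from first principles.

```python
import numpy as np

# Illustrative data (made up for this example)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])
n = len(x)

# Least-squares slope and intercept
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()

# Standard error of the regression: SSE over n-2 degrees of freedom
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))

# Standard error of the slope
se_b1 = s / np.sqrt(sxx)

# t statistic for H0: slope = 0, with n-2 degrees of freedom
t = b1 / se_b1
```

For this dataset the slope is 0.6 with a standard error of about 0.283, so the t statistic is about 2.12 on 3 degrees of freedom — small samples make even a clear-looking trend hard to distinguish from noise.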
# Proportion of spanning trees in a network in a social media messaging context Consider a graph, such as the following: I'm considering a model of message propagation (e.g. re-tweeting) in this network, starting from a root node (e.g. the node 1 in the lower-left). I'm modelling the message propagation in terms of trees rooted at the message source, where a node v is a parent to another node w if w first hears the message from v. • The message propagates outward from 1, in a tree which "grows" in an iterative process. In this process, nodes which have been reached are either branching nodes (i.e. are the parent of some other node), live leaves (nodes which may become branch nodes), or dead leaves. • Initially, all the neighbors of node 1 are children of 1, and are live leaves. • The message propagates by iterations, as follows. We consider an arbitrary ordering L = (ℓ1, ..., ℓn) of the live leaves at the beginning of the iteration. For each 1 ≤ j ≤ n, we do the following: 1. Decide whether the node ℓj dies (doesn't propagate the message) or becomes a branch node (propagates the message). If all of the nodes adjacent to ℓj are already in the tree, then it dies by default. 2. If ℓj becomes a branch node, we attach every neighbor v of ℓj which is not already in the tree to ℓj, as a live leaf node for the following iteration. After iterating through the elements of L, we proceed to the next iteration. • If there are no more live leaves in the tree, we stop. For the graph above, here are the trees that may be generated by this process: I'm interested in considering how many of the trees which can be generated by this process are spanning trees, i.e. contain every node in the graph. Is there any formula to determine the ratio of the spanning trees to the total number of such trees? N.B. The construction above is similar to a Galton-Watson process. 
However, there isn't meant to be a probabilistic model underlying the growth; the above is meant only to implicitly describe a recursive process to recognise whether a subtree in the graph is valid in my model. I've added the probability tag just in case there is a useful approach from that direction. Thanks! - These aren't paths, these are spanning trees. –  Ricky Demer Aug 25 '11 at 9:00 @Ricky: Not all of them are spanning trees of the original graph. It appears that Nicola wants all subtrees $T$ of the graph that include $1$ and have the property that if $v$ is a vertex of $T$, and $vw$ is an edge of $T$ that does not lie on the unique path in $T$ from $1$ to $v$, then $T$ contains all edges adjacent to $v$. (Probably he’d also like to be able to do this with any other vertex as root.) –  Brian M. Scott Aug 25 '11 at 9:58 @Nicola G: only the four trees in the bottom row are "spanning trees": a spanning tree must cover all vertices. The second image describes a collection of subtrees of the original graph, constructed according to some rule which is possibly implicit in your post but which is not clear. (If both 2 and 3 publish – or maybe retweet – something, why do you not get all of the edges 1-2, 1-3, 2-4, 3-4?) –  Niel de Beaudrap Aug 26 '11 at 10:01 @Nicola G: I've revised your question so that it describes what it is that you are interested in. Please check to see whether it agrees with what you want to know! –  Niel de Beaudrap Aug 27 '11 at 14:36 One could try working backwards: Start with all rooted trees on $n$ vertices and then add edges to get graphs G on $n$ vertices such that the rooted tree you started with is a valid endpoint of the algorithm applied to G. Count the number of possible Gs with duplication $(=f(n))$. Then try and generate $n$-vertex graphs G from $k$ vertex trees. Count the number of possible Gs with duplication $(=g(n,k), f(n) = g(n,n))$. Probability is then $f(n) / \sum_k g(n,k)$.
–  Craig Sep 6 '11 at 14:48 Okay, taking a second stab at this. Same overall technique, hopefully I can count better this time. We start with our root node, labeled '1'. We add nodes 2 through k in succession. The labels are intended to correspond to the order in which we choose them when running the algorithm. The leaves of the tree are the nodes which are "dead". We can attach each node to any of the nodes we have already placed, leading us to the conclusion that we have $(k-1)!$ labeled $k$-node trees where each node's parent has a smaller label than the node does. We will associate each tree with the sequence of parents $\{ 1 =p(2), p(3), ... p(k) \}$ where $p(j) \in [1,...,j-1]$. We are then going to add nodes and edges to complete this to a graph $G$ that can give this tree as a result of the algorithm above. For the $(n-k)$ nodes that are not in the tree, we label them (all labelings are equivalent at this point), then we can connect them to the leaves of the tree and we can connect them to each other, leading to $e_1(l;n-k) = (n-k)*l + (n-k)C2$ possible edges for $l$ leaves. Now to count within-tree edges. A node $i$ can be connected to a node $j+1 > i$ in the graph $G$ if one of three (mutually exclusive) conditions holds: 1) $i$ is the parent of $j+1$ in the tree ($i=p(j+1)$), 2) $i$ is greater than $p(j+1)$, or 3) $i$ is a leaf and $i$ is less than $p(j+1)$. This is very difficult to count. It is best to note, however, that the within-tree edges are independent of $n$, and so we can count them once for any given tree. Let $T_k$ be the set of labeled $k$-node trees as above and $T_{k,l}$ be the set of labeled $k$-node trees with $l$ leaves. Given a tree $t$ in $T_{k,l}$, let $e(t)$ be the number of edges one can add to $t$ consistent with $t$ being a potential outcome of the algorithm run on the resulting $k$-node graph. We note that since one may freely connect each leaf to every other leaf, $e(t) \geq lC2$. 
We will then define a polynomial $t_k(x) = \sum_{l=1}^{k-1} \left( \sum_{t \in T_{k,l}} 2^{e(t)} \right) x^l$. The coefficient of $x^l$ is the number of ways one may complete $k$-node, $l$-leaf trees to a $k$-node graph consonant with the algorithm. By definition, this coefficient is divisible by $2^{lC2}$. The first few $t_k(x)$ are as follows (I think): $\begin{array}{c} t_2(x) = x \\ t_3(x) = x + 2x^2 \\ t_4(x) = x + 14x^2 + 8x^3 \\ t_5(x) = x + 70x^2 + 264x^3 + 64x^4 \end{array}$ It is not immediately obvious that there is a recursion relation for these polynomials. One might hope that since adding a node to a $k$-node, $l$-leaf tree gives either a $(k+1)$-node, $l$-leaf tree or a $(k+1)$-node, $(l+1)$-leaf tree, that one could find a recursion relation for $[x^{l+1}]$ in $t_{k+1}(x)$ in terms of $[x^l]$ and $[x^{l+1}]$ in $t_k(x)$. I have not yet found such a relation. Why do we care about these polynomials in the first place? It's fairly simple: given the dependence of $e_1(l;n-k)$ on $l$, the number of ways to complete $k$-node trees to an $n$-node graph is $2^{(n-k)C2} * t_k(2^{n-k})$. To get the number of ways to run the algorithm on the set of $n$-node graphs, one simply sums this quantity over $k$. And of course, the number of algorithm runs that result in a spanning tree is $t_n(1)$. Update: I believe I have found a recursion for the coefficients of $t_k(x)$. I do not yet have the coefficients in closed form. This recursion requires us to divide up the set $T_{k,l}$ of $k$-node, $l$-leaf trees further. For a given $(k+1)$-node, $(l+1)$ leaf tree $t$, we will consider the last non-leaf node (say it has label $a$), and any leaf of said non-leaf node (say it has label $b$). We then remove said leaf, and decrement the labels of all the later leaves. The resulting tree, $t'$, has $k$ nodes and either $l$ or $l+1$ leaves, depending on whether $a$ had more than one leaf in $t$. 
This map is not a function, as we can choose $b$ arbitrarily if $a$ has more than one node, but for now we will pretend it is a function. We will call this map $f: T_{k+1,l+1} \rightarrow T_{k,l} \cup T_{k,l+1}$. If $a$ had more than one leaf in $t$, we find that $e(t) = e(t') + l$. If $a$ only had $b$ as a leaf in $t$ (so that $a$ itself is a leaf in $t'$), then we find that $e(t) = e(t') - (k-a) + l$. These equations are not difficult to verify. We now divide up our set $T_{k,l}$ into sets $T_{k,l,m,q}$, where $k$ is the number of nodes in the tree, $l$ is the number of leaves, $m$ is the label of the last non-leaf node and $q$ is the number of leaves on node $m$. We let $c_{k,l,m,q} = \sum_{t' \in T_{k,l,m,q}} 2^{e(t')}$ be the number of ways to reconstruct a $k$-node graph from trees in $T_{k,l,m,q}$. We are now going to attempt to invert $f$ on $T_{k,l,m,q}$. We can either add a leaf to the non-leaf node $m$, giving a tree $t$ in $T_{k+1,l+1,m,q+1}$, or we can add a leaf to a leaf node $m' > m$, giving a tree $t$ in $T_{k+1,l,m',1}$. In each case, we need to consider what label we are going to assign to the new leaf (and we increment all the labels after the new label), and how $e(t)$ is related to $e(t')$. In the latter case, $e(t) = e(t') - (k-m') + l$ and we have $(k+1-m')$ choices for the label of the new leaf. In the former case, $e(t) = e(t') + l$, and we have $(k+1-m)$ choices for the label of the new leaf. We point out that in this case, every tree $t$ in $T_{k+1,l+1,m,q+1}$ has $(q+1)$ not-necessarily-distinct images $t'$ under $f$, but all have the same value of $e(t')$ (this last fact is not too difficult to prove). We therefore find the following recurrences: $c_{k+1,l,m',1} = (k+1-m') * 2^{l +m' - k} \sum_{m<m'} \sum_{q} c_{k,l,m,q}$ $c_{k+1,l+1,m,q+1} = \frac{k+1-m}{q+1} * 2^l c_{k,l,m,q}$. 
Boundary conditions on these recurrences are as follows: $c_{k,l,m,q} = 0$ if $l\geq k$ or $q>l$ or $m+q>k$ $c_{l+1,l,1,l} = 2^{lC2}$ The coefficients of the polynomial $t_k(x)$ are $[x^l] = \sum_{m,q} c_{k,l,m,q}$. - Taking a stab at this based on my comment. We start with a k-vertex, rooted tree . We wish to add vertices and edges to get an n-vertex rooted graph G such that applying your algorithm to G can end at the k-vertex tree we started with. So suppose that the tree has depth d, and that at each level i of the tree there are $n_i$ nodes of which $l_i$ are leaves. We have in particular $\sum_{i=1}^d n_i = k-1$, $n_i \geq l_i$, $n_d = l_d$, and $n_{i+1} \geq n_i-l_i$ and we will set $n_0 = 1$ for convention. How many such trees are there? Then after we place the first $j$ levels of nodes, there are $n_j C l_j$ ways of picking which level-j nodes are leaves and $(n_{j+1}-1)C(n_j - l_j - 1)$ ways of placing the depth-$(j+1)$ nodes. Many of these placements are isomorphic, but if we assign a labeling to the tree (as we will do later), they are distinguishable. So the total number of $k$-vertex trees of depth $d$ is $\sum_{l_i, \sum n_i = k-1} \prod_{i=1}^d (n_i-1)C(n_{i-1} - l_{i-1} -1) * n_i C l_i$ $= \sum_{\{ n_i \}} \prod_{i=1}^{d-1} (n_i+n_{i+1}-1) C (n_i -1)$ where trees with a given sequence of $n_i, l_i$ correspond to a single term in the sum on the first line. We then want to label the tree with labels from $[2,...,n]$. There are $(n-1)! / (n-k)!$ ways of doing so (the root gets the special label 1). Now, given such a tree, in how many ways can we invert the algorithm to get a n-vertex graph G? Well, we will first distinguish edges between two vertices on the tree, and edges incident to (at least one of) the $(n-k)$ remaining vertices. The $(n-k)$ remaining vertices are free to connect to each other and to any leaf on the tree, for $e_1 = (n-k)C2 + (n-k)*\sum_i l_i$ potential edges. 
Any given vertex on the tree can be connected to a leaf at a lesser depth -- it is not difficult to see that this exhausts all possible edges that can be added to the tree when we invert the algorithm. That gives $e_2 = \sum_i n_i \sum_{j<i} l_j$ possible edges. So the number of n-vertex graphs that can give this tree as output is $2^{e_1 + e_2}$. We can combine these two formulas to get $g(n,k)$. I can't figure out how to evaluate this exactly at this point, but it is readily calculable for small $n,k$. Once we've done this, we can compute the average proportion of algorithm outputs that are spanning trees. (Notice that I did not require that my graph G was connected to begin with -- obviously no disconnected graph will give rise to a spanning tree). - Okay, I inverted the algorithm incorrectly. The correct way of inverting the algorithm would impose a labelling on the tree where each node has a label in $[2,...,k]$ and every child has a label greater than its parent. Then $e_1$ is defined correctly, but counting the within-tree edges would involve 1) connecting from a node with label i to non-leaf nodes with labels in $(p(i),i)$, where $p(i)$ is the label of the parent of node $i$ and also 2) connecting to leaf nodes $j$ with $j<i$. Not sure how to count these. –  Craig Sep 8 '11 at 20:31 What is the big C? –  graphtheory92 Sep 9 '11 at 14:02 "Choose" -- it's the binomial coefficient. –  Craig Sep 9 '11 at 15:26 FYI, I'm working on a slight variant of my counting argument, which will hopefully lead to some more concrete results. It involves an interesting set of polynomials $T_k(x)$ (of degree $k-1$), where the $l$th coefficient of $T_k(x)$ is the number of ways to invert the algorithm on a $l$-leaf, $k$-node labeled tree to a $k$-vertex graph (or conversely the number of ways to run the algorithm on a $k$-vertex graph to get an $l$-leaf spanning tree). –  Craig Sep 9 '11 at 15:30 Yes, leaves of the tree are the "dead" nodes. 
The counting method I used was supplanted by the one in the other answer, but I'll explain what I was doing here. That sum is attempting to count the number of $k$-vertex trees of depth $d$ by looking at the number of nodes ($n_i$) and leaves ($l_i$) at any given depth $i\leq d$. We have that the total number of nodes is $k-1$, not counting the root. We also have that $l_i \in [n_i - n_{i+1},n_i]$. The summation is over all sets of $n_i, l_i$ satisfying these constraints. The product counts the number of ways to construct a tree with those –  Craig Sep 10 '11 at 23:03
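The two edge counts $e_1$ and $e_2$ above are easy to transcribe into code. Here is a sketch (Python, not from the original thread); note that a comment above revises the within-tree count, so this implements only the formulas as first stated, with `nodes[i]` and `leaves[i]` standing in for the $n_{i+1}$ and $l_{i+1}$ of the answer:

```python
from math import comb

def graphs_inverting_to_tree(n, nodes, leaves):
    """Number of n-vertex graphs mapping to a fixed k-vertex tree,
    2**(e1 + e2), where nodes[i], leaves[i] are the node/leaf counts
    at depth i+1 of the tree (the root at depth 0 is not a leaf)."""
    k = 1 + sum(nodes)  # tree vertices, counting the root
    # e1: edges among the n-k extra vertices, or from them to any tree leaf
    e1 = comb(n - k, 2) + (n - k) * sum(leaves)
    # e2: edges from a tree vertex to a leaf at strictly lesser depth
    e2 = sum(nodes[i] * sum(leaves[:i]) for i in range(len(nodes)))
    return 2 ** (e1 + e2)

# root with two leaf children (k = 3) inside an n = 4 vertex graph:
# e1 = C(1,2) + 1*2 = 2 and e2 = 0, so 4 graphs invert to this tree
assert graphs_inverting_to_tree(4, [2], [2]) == 4
# a path root-a-b (k = 3, n = 3): no extra vertices, no addable edges
assert graphs_inverting_to_tree(3, [1, 1], [0, 1]) == 1
```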
https://www.physicsforums.com/threads/work-kinetic-energy-theorem.140595/
Work-kinetic energy theorem

1. Oct 30, 2006

elitespart

The CN Tower is 553 m tall. Suppose a chunk of ice with a mass of 25.0 g falls from the top of the tower. The speed of the ice is 33.0 m/s as it passes the restaurant in the tower, located 353 m above the ground. What is the average force due to air resistance? I'm having trouble getting started on this problem. I know how to calculate work and change in KE, but how do I use those to get the answer? Any help would be appreciated.

2. Oct 30, 2006

geoffjb

You could start by figuring out what the ice's velocity would be without any air resistance.

3. Oct 30, 2006

elitespart

OK, now what?

4. Oct 30, 2006

moose

He's implying to look at the loss in kinetic energy... Now, how would you calculate the work done by air resistance?

5. Oct 30, 2006

elitespart

Find the difference between the KE without air resistance and the KE with air resistance?

Last edited: Oct 30, 2006
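For what it's worth, the computation the hints are steering toward can be written out in a few lines. This is a sketch assuming g = 9.81 m/s² and that the ice starts from rest; the other values are from the problem statement:

```python
g = 9.81            # m/s^2, assumed value of gravitational acceleration
m = 0.025           # kg (25.0 g)
v = 33.0            # m/s, speed at the restaurant level
d = 553.0 - 353.0   # m fallen from the top to the restaurant (200 m)

ke = 0.5 * m * v**2        # actual kinetic energy after falling d
w_gravity = m * g * d      # work done by gravity over the fall
# work-energy theorem: W_gravity + W_air = change in KE (starts from rest)
w_air = ke - w_gravity
f_air = w_air / d          # average force; negative sign: opposes the motion

assert abs(f_air + 0.1771875) < 1e-6   # magnitude about 0.18 N
```

Without air resistance the ice would pass the restaurant at $\sqrt{2gd}\approx 62.6$ m/s, so the 33.0 m/s given in the problem already signals a substantial loss to air resistance.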
http://cpr-nuclex.blogspot.com/2013/04/13044330-masaki-hori-et-al.html
## Two-photon laser spectroscopy of antiprotonic helium and the antiproton-to-electron mass ratio    [PDF] Masaki Hori, Anna Sótér, Daniel Barna, Andreas Dax, Ryugo Hayano, Susanne Friedreich, Bertalan Juhász, Thomas Pask, Eberhard Widmann, Dezsö Horváth, Luca Venturelli, Nicola Zurlo Physical laws are believed to be invariant under the combined transformations of charge, parity and time reversal (CPT symmetry). This implies that an antimatter particle has exactly the same mass and absolute value of charge as its particle counterpart. Metastable antiprotonic helium ($\bar{p}{\rm He}^+$) is a three-body atom consisting of a normal helium nucleus, an electron in its ground state and an antiproton ($\bar{p}$) occupying a Rydberg state with high principal and angular momentum quantum numbers, respectively $n$ and $\ell$, such that $n\sim\ell\sim 38$. These atoms are amenable to precision laser spectroscopy, the results of which can in principle be used to determine the antiproton-to-electron mass ratio and to constrain the equality between the antiproton and proton charges and masses. Here we report two-photon spectroscopy of antiprotonic helium, in which $\bar{p}{\rm ^3He^+}$ and $\bar{p}{\rm ^4He^+}$ isotopes are irradiated by two counter-propagating laser beams. This excites nonlinear, two-photon transitions of the antiproton of the type $(n,\ell)\rightarrow (n-2,\ell-2)$ at deep-ultraviolet wavelengths ($\lambda$=139.8, 193.0 and 197.0nm), which partly cancel the Doppler broadening of the laser resonance caused by the thermal motion of the atoms. The resulting narrow spectral lines allowed us to measure three transition frequencies with fractional precisions of 2.3-5 parts in $10^9$. By comparing the results with three-body quantum electrodynamics calculations, we derived an antiproton-to-electron mass ratio of 1,836.1526736(23), where the parenthetical error represents one standard deviation. 
This agrees with the proton-to-electron value known to a similar precision. View original: http://arxiv.org/abs/1304.4330
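As a quick arithmetic check on the quoted result (a sketch using only numbers from the abstract): the parenthetical uncertainty (23) on 1,836.1526736 corresponds to a fractional precision of about 1.3 parts in $10^9$, the same order as the 2.3-5 parts in $10^9$ quoted for the transition frequencies.

```python
ratio = 1836.1526736
uncertainty = 23e-7   # the (23) applies to the last two quoted digits
fractional = uncertainty / ratio
assert 1.0e-9 < fractional < 1.5e-9   # roughly 1.25 parts in 10**9
```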
http://physics.stackexchange.com/questions/113117/gr-matter-free-equations-and-schwarzschild-geometry
# GR matter-free equations and Schwarzschild geometry

I am reading some lecture notes on general relativity (undergraduate level) and I do not understand a sequence of statements about the topics in the title. After stating that for matter-free space the components of the Ricci tensor vanish, they go on to say that non-trivial solutions to these equations would represent the propagation of gravitational waves through otherwise empty space. Then they proceed to use that equation to derive the Schwarzschild metric, assuming a stationary and radially symmetric situation. After getting the expression for the line element, they say that even though the Ricci tensor is zero for $r>0$, this does not mean that it is zero identically, and that the Schwarzschild metric is the metric of a point mass at the origin of the spatial coordinates. I don't understand why this is the case; I thought we were working under the assumption that we were in empty space.

Furthermore, they then state Birkhoff's theorem and say that it means that a radially pulsating star cannot emit gravitational waves. Then, when investigating black hole geometry, they assume that the source of the Schwarzschild metric is some massive object within its Schwarzschild radius.

This raises several questions for me. Firstly, I thought the Schwarzschild metric was for empty space, so I don't understand how we can be talking about its source being a massive object. Secondly, I thought that the Schwarzschild metric was a non-trivial solution to the matter-free equations, and according to their initial statements this would represent gravitational waves. But this conflicts with Birkhoff's theorem. Could you help me identify what concept(s) I am misunderstanding? -

## 4 Answers

There are essentially two sources of problem from what I see: the first concerning solutions of the Einstein equations in vacuum, and the second concerning Birkhoff's theorem in the case of spherically symmetric solutions with matter.
Let's tackle them one at a time.

1) The curvature of a manifold is described by the Riemann tensor $R_{\mu\nu\lambda\rho}$, which can be canonically separated into a trace part, the Ricci tensor $R_{\mu\nu}$, and a trace-free part, the Weyl tensor $C_{\mu\nu\lambda\rho}$. The Einstein equations relate the Ricci tensor to the matter distribution, described by the energy-momentum tensor $T_{\mu\nu}$, in the form $R_{\mu\nu}=8\pi G(T_{\mu\nu}-1/2g_{\mu\nu}T)$. You're interested in vacuum solutions, i.e. $T_{\mu\nu}=0$, which imply Ricci-flatness, $R_{\mu\nu}=0$. Now, what about the Weyl tensor? It is determined by the second Bianchi identity $R_{\mu\nu [\lambda\rho;\sigma]}=0$; in other words, it satisfies a non-linear differential equation.

Now the lectures you mention state that "non-trivial solutions to these equations would represent the propagation of gravitational waves through otherwise empty space". That is incorrect, and the Schwarzschild metric is a clear example of a Ricci-flat spacetime without waves. In general the Petrov classification of the Weyl tensor is a way to determine whether or not a solution possesses gravitational waves. In particular, all solutions of Petrov type D are everywhere vacuum metrics without gravitational waves, of which Schwarzschild is an example, though not the only one. The Petrov classification is a bit high-level for an introduction to general relativity, but maybe the link at Wikipedia can give you the flavor of the idea.

Regarding the assertion "even though the Ricci tensor is zero for $r>0$ this does not mean that it is zero identically and that the Schwarzschild metric is the metric of a point mass at the origin of the spatial coordinates", this is plainly wrong. The only sense I can make of this phrase is what is contained in Jerry Schirmer's answer, where he very properly asserts that it is strictly heuristic and should not be taken seriously.
The reason is that, unlike electromagnetism, general relativity involves non-linear equations for the metric, which do not easily accommodate distributional sources such as a Dirac delta; otherwise the metric itself may not have all the usual derivatives, which could lead to violating the Bianchi identities and consequently energy-momentum conservation. So the lectures are wrong and you are right: you're working in empty space. Furthermore, Schwarzschild is a non-trivial solution in vacuum without gravitational waves.

2) In the second case the lectures look at the problem of a spherically symmetric distribution of matter contained within a given finite radius, let's call it $R_s$. Now for $r<R_s$ we have a non-zero energy-momentum tensor and the Einstein equations must be solved accordingly. For instance, in the simple case of isotropic matter you have the TOV equation. For $r>R_s$ you are in empty space, and therefore Birkhoff's theorem holds, which is a local assertion that the metric of static and spherically symmetric solutions must be Schwarzschild. The point you're missing is that the result is local and therefore ignores what's happening inside. So you know that for radius below $R_s$ you must solve for whatever matter you have; outside is just Schwarzschild; and at $r=R_s$ you must impose continuity of the metric, giving you the boundary condition for solving the differential equation inside the star.

Reiterating what I previously wrote: since Birkhoff's theorem is local, it does not care what the solution inside the star is. In particular, it is irrelevant whether the solution inside is time dependent (like an oscillation), as long as it remains spherically symmetric. So there is no conflict with the theorem. The same holds in electromagnetism, where a spherically symmetric distribution of charge that oscillates radially cannot emit radiation, a consequence of Gauss' law discussed in the majority of electromagnetism textbooks.
ADDENDUM: If you're having trouble with the lecture notes you're reading, and from what you say it is not your fault if they state slightly incorrect things, I suggest you try to get your hands on a copy of Hartle's book; it is the best, in my opinion, at the undergraduate level. If you don't have access to it, then there are the freely available notes from Chruściel, which are great, albeit somewhat more high-level mathematically speaking. -

Thank you very much for your comprehensive answer! I will check out the notes you referenced. I am supporting the lecture notes with other resources, but the approaches in textbooks are often so different in level or technique that it is tricky. –  Student May 17 at 21:58

In fact it is common knowledge that different GR textbooks diverge widely in approach. Robert Wald has written an article regarding the teaching of GR with a list of textbooks and comments on their approaches: arxiv.org/abs/gr-qc/0511073. Maybe it will help you decide on a book, and allow you to compare different ones that have similar styles so you can support one with the other. –  cesaruliana May 18 at 19:58

NOTE: this is merely a heuristic. A rigorous treatment of the mass in the Schwarzschild spacetime involves taking the ADM or Bondi mass, or at least using covariant integrals. This is, IMO, slightly beyond the scope of undergraduate relativity. The easiest way to see this is to note that (I'm going to abandon general tensor notation, because it confuses the issue in this case): $$G_{tt} = 8\pi T_{tt} = 8\pi \rho$$ If you use the standard expression for the Schwarzschild metric, $g_{ab} = {\rm diag}(-(1-\frac{2M}{r}), \frac{1}{1-\frac{2M}{r}},r^{2},r^{2}\sin^{2}\theta)$, you can show that $G_{tt} = \nabla^{2}\frac{M}{r}$.
It turns out that this expression is equal to $8\pi M\delta^{3}(r)$; by the properties of the delta function, we therefore have $\rho = 0$ for all $r\neq 0$, but $\int d^{3}x\, \rho = M$. -

"I don't understand why this is the case and I thought that we were under the assumption that we were in empty space?"

It's a bit misleading. For the same reason we choose to solve Laplace's equation in spherical coordinates for the electric potential when there is charge only at the spatial origin, we choose to solve the vacuum equations for a spherically symmetric static spacetime in the case that there is mass-energy only at the spatial origin. -

The Schwarzschild metric applies in the region outside any spherically symmetric, massive object, to no closer than the Schwarzschild radius. You might recall that, in electromagnetism, a spherically symmetric charge distribution will look like a point charge in the exterior. The same is true in general relativity for spherically symmetric mass distributions. The tendency to refer to the stress-energy tensor of a black hole as being "vacuum" is, in my opinion, unfortunate. In electromagnetism, we have no problem thinking of point charges as delta functions. The source term for a black hole could be thought of the same way. I believe this isn't done because differential geometry attacks the problem by excluding a point (a "punctured" domain) instead, so from that perspective the stress-energy tensor is zero everywhere on the domain; it's just that the domain doesn't include the origin. -

Actually, in the extended spacetime, the singularity of Schwarzschild is a spacelike line, and the Kerr/Nordström metric is a timelike line. This stuff turns out to be more subtle than the case in electromagnetism, so, despite what I said above about delta functions, it's not 100% correct to think of the Schwarzschild matter distribution as a delta function. –  Jerry Schirmer May 17 at 19:01
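The heuristic above relies on $\nabla^{2}(M/r)$ vanishing everywhere away from the origin. That is easy to verify numerically; here is a sketch (not from the thread) applying a central-difference radial Laplacian $\frac{1}{r^{2}}\partial_r\!\left(r^{2}\partial_r f\right)$ to $f = M/r$ with $M = 1$:

```python
def radial_laplacian(f, r, h=1e-4):
    """(1/r^2) d/dr ( r^2 df/dr ) via central differences."""
    df = lambda x: (f(x + h) - f(x - h)) / (2 * h)
    g = lambda x: x * x * df(x)
    return (g(r + h) - g(r - h)) / (2 * h) / (r * r)

f = lambda x: 1.0 / x   # M/r with M = 1
for r in (0.5, 1.0, 2.0, 5.0):
    assert abs(radial_laplacian(f, r)) < 1e-4   # zero for every r > 0
```

The delta function at $r=0$ is exactly what this finite-difference probe cannot see, which is the point of the heuristic.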
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-cumulative-test-prep-multiple-choice-page-528/1
## Algebra 1

B) $2x-7y=28$

Multiply the equation by 7: $7y=2x-28$

Subtract $2x$ from both sides: $-2x+7y=-28$

Multiply the equation by $-1$: $2x-7y=28$
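A quick numerical check (not part of the original solution) that the final rearrangement is equivalent to the intermediate line $7y=2x-28$:

```python
# every (x, y) satisfying 7y = 2x - 28 also satisfies 2x - 7y = 28
for x in range(-5, 6):
    y = (2 * x - 28) / 7
    assert abs(2 * x - 7 * y - 28) < 1e-9
```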
https://www.physicsforums.com/threads/parametric-equations-and-integrals-that-represent-volumes.385544/
# Parametric Equations and integrals that represent volumes

1. Mar 10, 2010

### peterpam89

1. The problem statement, all variables and given/known data

A surface S is formed by rotating a quarter ellipse C about the x-axis. Write an integral that represents the volume enclosed by S. The ellipse is represented by two points: $(2,1)$, at which $t=\pi/2$, and $(4,0)$, at which $t=0$.

2. Relevant equations

Ellipse with radii $a, b$ centered at $(x_0, y_0)$: $x = x_0 + a\cos t$, $y = y_0 + b\sin t$ (the parametric form of the Cartesian equation of the ellipse).

3. The attempt at a solution

Eek. I don't really know where to even start with this problem... any ideas?

2. Mar 10, 2010

### Dustinsfl

You could try $x(t)=2+2\cos t$ and $y(t)=\sin t$.
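With Dustinsfl's suggested parametrization, $t \in [0, \pi/2]$ traces the quarter ellipse from $(4,0)$ to $(2,1)$; equivalently $y(x)=\sqrt{1-\left(\frac{x-2}{2}\right)^{2}}$ for $2 \le x \le 4$. As a sanity check (a sketch, not from the thread), the disk-method integral $V=\pi\int_{2}^{4}y(x)^{2}\,dx$ evaluates to $4\pi/3$, i.e. half an ellipsoid with semi-axes $2,1,1$:

```python
import math

def volume(n=10000):
    """Midpoint-rule estimate of pi * integral of y(x)^2 from x = 2 to 4."""
    a, b = 2.0, 4.0
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        total += (1.0 - ((x - 2.0) / 2.0) ** 2) * dx   # y(x)^2 * dx
    return math.pi * total

assert abs(volume() - 4 * math.pi / 3) < 1e-6
```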
https://www.hepdata.net/record/96046
Measurement of nuclear effects on $\psi\rm{(2S)}$ production in p-Pb collisions at $\sqrt{\textit{s}_{\rm NN}} = 8.16$ TeV JHEP 07 (2020) 237, 2020. The collaboration Abstract (data abstract) Inclusive $\psi(2{\rm S})$ production is measured in p-Pb collisions at the centre-of-mass energy per nucleon-nucleon pair $\sqrt{s_{_{\rm NN}}}=8.16$ TeV, using the ALICE detector at the CERN LHC. The production of $\psi(2{\rm S})$ is studied at forward ($2.03 < y_{\rm cms} < 3.53$) and backward ($-4.46 < y_{\rm cms} < -2.96$) centre-of-mass rapidity and for transverse momentum $p_{\rm T} < 12$ GeV/$c$ via the decay to muon pairs. In this paper, we report the integrated as well as the $y_{\rm cms}$- and $p_{\rm T}$-differential inclusive production cross sections. Nuclear effects on $\psi(2{\rm S})$ production are studied via the determination of the nuclear modification factor, which shows a strong suppression at both forward and backward centre-of-mass rapidities. Comparisons with corresponding results for inclusive J/$\psi$ show a similar suppression for the two states at forward rapidity (p-going direction), but a stronger suppression for $\psi(2{\rm S})$ at backward rapidity (Pb-going direction). As a function of $p_{\rm T}$, no clear dependence of the nuclear modification factor is found. The relative size of nuclear effects on $\psi(2{\rm S})$ production compared to J/$\psi$ is also studied via the double ratio of production cross sections $[\sigma_{\psi({\rm 2S})}/\sigma_{{\rm J/}\psi}]_{\rm pPb}/[\sigma_{\psi({\rm 2S})}/\sigma_{{\rm J/}\psi}]_{\rm pp}$ between p-Pb and pp collisions. The results are compared with theoretical models that include various effects related to the initial and final state of the collision system, and also with previous measurements at $\sqrt{s_{_{\rm NN}}}=5.02$ TeV.
http://mycbseguide.com/questions/1307/
How to derive speed = wavelength × frequency

Posted by Tanveer Singh (Jan 12, 2017 12:34 p.m.) (Question ID: 1307)

• As we know, speed = distance travelled / time taken. Suppose a wave travels a distance $\lambda$ (its wavelength) in a time $T$, so its speed is $v = \lambda / T$. Here $T$ is the time taken by one complete wave, so $1/T$ is the number of waves per second, known as the frequency $f$ of the wave. Thus $v = \lambda \times f$, i.e. Speed = Wavelength × Frequency.

Answered by Shweta Gulati (Jan 17, 2017 12:25 a.m.)
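A quick numerical illustration of the two equivalent forms (values assumed for the example, not from the question): a 256 Hz wave with wavelength 1.34 m.

```python
wavelength = 1.34   # metres (illustrative value)
frequency = 256.0   # hertz (illustrative value)

T = 1.0 / frequency        # period: the time taken by one wave
speed = wavelength / T     # v = wavelength / T

assert abs(speed - wavelength * frequency) < 1e-9   # same as v = wavelength * f
assert abs(speed - 343.04) < 1e-6                   # about the speed of sound in air
```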
http://old.fieldtriptoolbox.org/faq/mtmconvol
With the mtmconvol method of ft_freqanalysis we effectively convolve the data with a complex wavelet, but computationally it is a little bit more involved: The wavelet is constructed by time-point wise multiplying the (real) cosine and (imaginary) sine component at each frequency with the specified tapering function. When using a Gaussian taper, this results in a Morlet wavelet. The Hanning taper that we often use has the practical advantage that the temporal spread is fully confined to the specified taper length (time window of interest), whereas with a Gaussian taper (which is infinitely wide) the taper needs to be truncated. Following the construction of the taper, both the data and tapered wavelet are Fourier transformed and element-wise multiplied in the frequency domain, after which the inverse Fourier transform is computed. By virtue of the Convolution theorem, this effectively results in a convolution of the complex wavelet with the data, but is computationally more efficient in case multiple tapers are employed (as the data only needs to be Fourier transformed once). The same approach can also be used with multiple tapers, such as the DPSS sequence. This results in robust multitaper spectral estimates of power as a function of time, smoothed over a well-controlled frequency range. Since the implemented method can be used either with a single taper of choice or with multiple tapers, we have dubbed it “mtmconvol” (multi-taper-method convolution), similar to “mtmfft” (multi-taper method fast Fourier transform).
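The procedure described above (taper the cosine/sine pair into a complex wavelet, Fourier transform both signals, multiply, inverse transform) can be sketched in a few lines. This toy example in plain Python (an illustration, not FieldTrip code) checks that the frequency-domain product reproduces direct time-domain convolution, as the Convolution theorem guarantees:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# toy data: 32 samples containing a 4-cycle cosine
N = 32
data = [math.cos(2 * math.pi * 4 * n / N) for n in range(N)]

# complex wavelet: Hanning taper times (cos + i sin) at the same frequency
M = 8
taper = [0.5 - 0.5 * math.cos(2 * math.pi * n / (M - 1)) for n in range(M)]
wavelet = [taper[n] * cmath.exp(2j * math.pi * 4 * n / N) for n in range(M)]

# zero-pad both to length N + M - 1, multiply the spectra, transform back
L = N + M - 1
spec = idft([A * B for A, B in zip(dft(data + [0.0] * (L - N)),
                                   dft(wavelet + [0.0] * (L - M)))])

# direct time-domain convolution for comparison
direct = [sum(data[k] * wavelet[n - k]
              for k in range(max(0, n - M + 1), min(n, N - 1) + 1))
          for n in range(L)]

assert all(abs(s - d) < 1e-9 for s, d in zip(spec, direct))
```

The payoff of the frequency-domain route shows up with multiple tapers: the data needs to be Fourier transformed only once and the spectrum is reused for every tapered wavelet.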
https://www.physicsforums.com/threads/continuity-of-a-function.183752/
# Homework Help: Continuity of a function

1. Sep 10, 2007

### ELESSAR TELKONT

My problem is this. Let $$f:\mathbb{R}^{2}\longrightarrow \mathbb{R}^{2}$$ be a continuous function that satisfies $$f(q)=q$$ for all $$q\in\mathbb{Q}\times\mathbb{Q}$$. Prove that $$f(x)=x$$ for all $$x\in\mathbb{R}^{2}$$.

I have worked out that, because it is continuous, $$f$$ satisfies $$\forall \epsilon>0\,\exists\delta>0\mid \forall x\in B_{\delta}(a):\ f(x)\in B_{\epsilon}(f(a))$$, and then for all $$q\in\mathbb{Q}\times\mathbb{Q}$$ we have $$\forall \epsilon>0\,\exists\delta>0\mid \forall x\in B_{\delta}(q):\ f(x)\in B_{\epsilon}(q)$$. Therefore we have to prove that for all $$x'\in\mathbb{R}^{2}$$ we have $$\forall \epsilon>0\,\exists\delta>0\mid \forall x\in B_{\delta}(x'):\ f(x)\in B_{\epsilon}(x')$$.

It's obvious that every element of $$\mathbb{R}^{2}$$ can be approximated by a sequence of elements of $$\mathbb{Q}\times\mathbb{Q}$$. But how can I link this into an expression to get what I have to prove?

2. Sep 10, 2007

### TimNguyen

I'm not so sure about this, but do we not know how the function maps irrational numbers, such as sqrt(2)?

3. Sep 10, 2007

### CompuChip

TimNguyen, in fact we know (they are mapped to themselves, as f is the identity map), but this is exactly what Elessar Telkont wants to show.

Indeed you got the idea right: any real number can be approximated by a sequence of rational numbers (and therefore, pairs of reals can be approximated by pairs of rationals). What I would do is: try to make this process of approximation precise (describe it in terms of epsilon-delta). Now assume what you want to prove is not true; then this should give a contradiction with the continuity (which you have also written out in epsilon-delta). I will take a look and post it more precisely later on (first, you give it a try yourself).

4. Sep 10, 2007

### CompuChip

Hmm, it was much easier.
It is a familiar fact (or otherwise you should be able to easily prove it from the definition) that for continuous functions f, it holds that $$\lim_{n \to \infty} f(x_n) = f(\lim_{n \to \infty} x_n)$$ for any convergent sequence $$(x_n)_{n \in \mathbb{N}}$$. So describe a real pair x as the (coordinate-wise) limit of a sequence $x_n$ of rational pairs; then $$f(x) = f(\lim_{n \to \infty} x_n) \stackrel{*}{=} \lim_{n \to \infty} f(x_n) = \lim_{n \to \infty} x_n = x,$$ where the identity marked with a star holds by the continuity of f -- QED.

For completeness, let me prove the claim about the limits (it's a nice exercise in epsilon-delta proofs, so you might want to try it yourself first): Let $\epsilon > 0$. Since f is continuous, there is some $\delta$ such that $|| x - x_n || < \delta$ implies that $|| f(x_n) - f(x) || < \epsilon$. Now $x_n$ converging to x means that for this $\delta$ I can find an $N$ such that $|| x_n - x || < \delta$ as long as $n > N$. So, through the $\delta$ from the definition of continuity, I have found an $N$ for my $\epsilon$ such that $n > N$ implies $|| f(x_n) - f(x) || < \epsilon$; in other words, $$\lim_{n \to \infty} f(x_n) = f(x) = f( \lim_{n \to \infty} x_n )$$.

Last edited: Sep 10, 2007

5. Sep 10, 2007

### TimNguyen

Sorry about that. My math is extremely rusty since I started graduate school in physics.
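The step "describe a real pair x as the limit of a sequence of rational pairs" can be made concrete. A small illustration (not part of the thread), using exact rationals from the Babylonian iteration, which converges to $\sqrt{2}$:

```python
from fractions import Fraction

# exact rational approximations of sqrt(2): x -> (x + 2/x) / 2
x = Fraction(1)
approx = []
for _ in range(4):
    x = (x + 2 / x) / 2
    approx.append(x)

errors = [abs(float(a) - 2 ** 0.5) for a in approx]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # strictly improving
assert errors[-1] < 1e-11   # 665857/470832 is already this close
```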
https://documen.tv/question/sam-rolls-a-fair-dice-and-flips-a-fair-coin-what-is-the-probability-of-obtaining-an-odd-number-a-24208270-56/
## Sam rolls a fair die and flips a fair coin. What is the probability of obtaining an odd number and a head?

Answer: 1/4

Step-by-step explanation:

Given an event A and an event B that are independent, the probability of both happening can be determined by multiplying the two individual probabilities. We can start by finding the individual probabilities.

The probability of rolling an odd number is 3/6 = 1/2, as there are 3 odd numbers (1, 3, 5) on a die and 6 possibilities in total. Furthermore, there is an equal chance of rolling each number, as it is a fair die.

The probability of flipping a head is 1/2, as there are two equal possibilities (heads and tails), with one of them being heads.

To find the probability of both happening, we can multiply them to get (1/2) * (1/2) = 1/4.
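The multiplication rule above can also be checked by brute force, enumerating all 12 equally likely (die, coin) outcomes; this is just an illustrative script:

```python
from itertools import product
from fractions import Fraction

# Enumerate the full sample space: 6 die faces x 2 coin sides = 12 outcomes.
outcomes = list(product([1, 2, 3, 4, 5, 6], ["H", "T"]))

# Favorable outcomes: odd die roll AND a head.
favorable = [(d, c) for d, c in outcomes if d % 2 == 1 and c == "H"]

p = Fraction(len(favorable), len(outcomes))
print(p)  # 1/4
```

The enumeration confirms the product rule: 3 favorable outcomes out of 12 gives 1/4.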
https://brilliant.org/problems/a-geometry-problem-by-ashwin-korade/
# A geometry problem by ashwin korade

Geometry Level 1

In the given figure, O is the center of the circle. Find the measure of angle PQR, given that angle POR = 120°.
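Since the original figure is not reproduced here, the following is a hedged sketch of the intended solution, assuming Q lies on the major arc PR (the standard configuration for this kind of problem):

```latex
% Inscribed angle theorem: a central angle is twice any inscribed angle
% subtending the same arc (valid when Q lies on the major arc PR).
\angle PQR = \tfrac{1}{2}\,\angle POR = \tfrac{1}{2}\times 120^\circ = 60^\circ
```

If Q instead lay on the minor arc, the inscribed angle would subtend the reflex central angle of 240° and measure 120°.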
https://ai.stackexchange.com/questions/2874/can-neural-networks-efficiently-solve-the-traveling-salesmen-problem/6219
# Can neural networks efficiently solve the traveling salesman problem?

Can neural networks efficiently solve the traveling salesman problem? Are there any research papers that show that neural networks can solve the TSP efficiently?

The TSP is an NP-hard problem, so I suspect that there are only approximate solutions to this problem, even with neural networks. So, in this case, how would efficiency be defined?

In this context, it seems that time efficiency may be bought with resource inefficiency: by making the neural network enormous, simulating all the possible worlds, and then maximizing. While the time to compute doesn't grow much as the problem grows, the size of the physical computer grows enormously for larger problems; how fast it computes is then, it seems to me, not a good measure of the efficiency of the algorithm in the common meaning of efficiency. In this case, the resources themselves only grow as fast as the problem size, but what explodes is the number of connections that must be built. If we go from 1000 to 2000 neurons to solve a problem twice as large (one that would otherwise require exponentially more time to solve), a network requiring only twice as many neurons and polynomial time seems efficient; but, really, there is still an enormous increase in the connections and coefficients that must be built for this to work. Is my above reasoning incorrect?

To the best of my knowledge, there isn't any difference between the algorithmic methods and the NN methods. Those that can solve the problem in polynomial time do not give a precise solution; those that do give a precise solution do not run in polynomial time. Of those that give a precise solution, the fastest takes on the order of $$2^N$$ steps, but it blows up in terms of memory. The fastest good solver, I believe, is Concorde.
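For context, the $$2^N$$-type exact method alluded to above is presumably the Held-Karp dynamic program, which runs in O(n² 2ⁿ) time but needs O(n 2ⁿ) memory; that memory cost is exactly the blow-up mentioned. A minimal sketch:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via Held-Karp dynamic programming.
    Time O(n^2 * 2^n), memory O(n * 2^n) -- the memory blow-up
    that makes exact solutions impractical for large n."""
    n = len(dist)
    # C[(S, j)] = cheapest cost of starting at city 0, visiting every
    # city in bitmask S exactly once, and ending at city j.
    C = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = sum(1 << j for j in S)
            for j in S:
                prev = bits & ~(1 << j)
                C[(bits, j)] = min(C[(prev, k)] + dist[k][j]
                                   for k in S if k != j)
    full = (1 << n) - 2  # all cities except city 0
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# 4-city example; the optimal tour 0-1-3-2-0 costs 10 + 25 + 30 + 15 = 80.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80
```

Even at n = 25, the table holds roughly 25 · 2²⁵ ≈ 8 × 10⁸ entries, which illustrates why exact solvers trade memory for their 2ⁿ running time.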
https://www.physicsforums.com/threads/pde-separating-variables-for-3d-spherical-wave-equation.550506/
# Homework Help: PDE Separating Variables for 3d spherical wave equation

1. Nov 14, 2011

### King Tony

1. The problem statement, all variables and given/known data

Just need someone else to double-check my work. I just want to know if I'm separating these variables correctly.

2. Relevant equations

$$\frac{\partial^2u}{\partial t^2} = c^2\nabla^2u$$

3. The attempt at a solution

Let $u(\rho, \theta, \phi, t) = T(t)\omega(\rho, \theta, \phi)$, where $\rho$ is the radius, $\theta$ is the azimuthal angle and $\phi$ is the polar angle. Then, separating out the time-dependent variable gives $$\frac{T''(t)}{c^2T(t)} = \frac{\nabla^2\omega}{\omega} = -\lambda$$ From this, I know the time-dependent ODE. The problem I'm having is with the spatial variable separation. Now we have $$\nabla^2\omega = -\lambda\omega$$ Let $\omega(\rho, \theta, \phi) = P(\rho)\Theta(\theta)\Phi(\phi)$; then we get $$\frac{\Theta\Phi}{\rho^2}\frac{d}{d\rho}\left(\rho^2 \frac{dP}{d\rho}\right) + \frac{P\Theta}{\rho^2\sin\phi}\frac{d}{d\phi}\left(\sin\phi \frac{d\Phi}{d\phi}\right) + \frac{P\Phi}{\rho^2\sin^2\phi}\frac{d^2\Theta}{d\theta^2} + \lambda P\Theta\Phi = 0$$ By dividing by $\frac{P\Theta\Phi}{\rho^2\sin^2\phi}$, we can isolate the theta variable and move it to the other side of the equation; this introduces a separation constant $\mu$. We get: $$\frac{\sin^2\phi}{P}\frac{d}{d\rho}\left(\rho^2 \frac{dP}{d\rho}\right) + \frac{\sin\phi}{\Phi}\frac{d}{d\phi}\left(\sin\phi \frac{d\Phi}{d\phi}\right) + \lambda\rho^2 \sin^2\phi = -\frac{d^2\Theta}{d\theta^2} = \mu$$ Solving the theta ODE (with periodic BCs) gives $\mu = m^2,\ m = 0, 1, 2, \ldots$, and we can move on to the next step, namely, finding our ODEs for rho and phi.
$$\frac{\sin^2\phi}{P}\frac{d}{d\rho}\left(\rho^2 \frac{dP}{d\rho}\right) + \frac{\sin\phi}{\Phi}\frac{d}{d\phi}\left(\sin\phi \frac{d\Phi}{d\phi}\right) + \lambda\rho^2 \sin^2\phi - m^2 = 0$$ Divide by $\sin^2\phi$ and shuffle terms to get the rho- and phi-dependent ODEs with separation constant $\nu$: $$\frac{1}{P}\frac{d}{d\rho}\left(\rho^2 \frac{dP}{d\rho}\right) + \lambda\rho^2 = -\frac{1}{\sin\phi\,\Phi}\frac{d}{d\phi}\left(\sin\phi \frac{d\Phi}{d\phi}\right) + \frac{m^2}{\sin^2\phi} = \nu$$ Finally, we end up with our ODEs for rho and phi: $$\frac{d}{d\rho}\left(\rho^2 \frac{dP}{d\rho}\right) + (\lambda\rho^2 - \nu)P = 0$$ $$\frac{d}{d\phi}\left(\sin\phi \frac{d\Phi}{d\phi}\right) + \left(\nu \sin\phi - \frac{m^2}{\sin\phi}\right)\Phi = 0$$ I have a couple of questions about this: it seems that I have a couple of signs mixed up (compared to my book, Haberman), and I don't know if I have done this entirely correctly. I greatly value your responses. Thank you! - Tony
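As a quick sanity check of the separation (a hedged SymPy sketch, not part of the original derivation): for the simplest mode, ν = m = 0, the angular factors are constant, the rho ODE is solved by P(ρ) = sin(kρ)/ρ with λ = k², and T(t) = cos(ckt) solves the time ODE. Plugging u = T·P back into the wave equation, whose Laplacian then reduces to its radial part, should give zero:

```python
import sympy as sp

rho, t = sp.symbols('rho t', positive=True)
c, k = sp.symbols('c k', positive=True)

# Simplest separated mode (nu = m = 0): u = T(t) * P(rho),
# with T'' = -(c k)^2 T and P(rho) = sin(k rho)/rho.
u = sp.cos(c * k * t) * sp.sin(k * rho) / rho

# Radial part of the spherical Laplacian (angular terms vanish here).
lap = sp.diff(rho**2 * sp.diff(u, rho), rho) / rho**2

# Residual of u_tt = c^2 * Laplacian(u); it should simplify to zero.
residual = sp.simplify(sp.diff(u, t, 2) - c**2 * lap)
print(residual)  # 0
```

A vanishing residual confirms that the time and radial ODEs fit together consistently for this mode; the angular ODEs can be checked the same way for nonzero ν and m.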
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1199557/?tool=pubmed
Nucleic Acids Res. 2005; 33(15): 4987–4994. Published online Sep 6, 2005. PMCID: PMC1199557

# The Gumbel pre-factor k for gapped local alignment can be estimated from simulations of global alignment

## Abstract

The optimal gapped local alignment score of two random sequences follows a Gumbel distribution. The Gumbel distribution has two parameters, the scale parameter λ and the pre-factor k. Presently, the basic local alignment search tool (BLAST) programs (BLASTP (BLAST for proteins), PSI-BLAST, etc.) all use time-consuming computer simulations to determine the Gumbel parameters. Because the simulations must be done offline, BLAST users are restricted in their choice of alignment scoring schemes. The ultimate aim of this paper is to speed the simulations, to determine the Gumbel parameters online, and to remove the corresponding restrictions on BLAST users. Simulations for the scale parameter λ can be as much as five times faster if they use global instead of local alignment [R. Bundschuh (2002) J. Comput. Biol., 9, 243–260]. Unfortunately, the acceleration does not extend to determining the Gumbel pre-factor k, because k has no known mathematical relationship to global alignment. This paper relates k to global alignment and exploits the relationship to show that, for the BLASTP defaults, 10000 realizations with sequences of average length 140 suffice to estimate both Gumbel parameters λ and k within the errors required (λ, 0.8%; k, 10%). For the BLASTP defaults, simulations for both Gumbel parameters now take less than 30 s on a 2.8 GHz Pentium 4 processor.

## INTRODUCTION

Local sequence alignment is an indispensable computational tool in modern molecular biology. It is frequently used to infer the functional, structural and evolutionary relationships of a novel protein or DNA sequence by finding similar sequences of known function in a database.
Arguably, the most important sequence database search program available is BLAST (the Basic Local Alignment Search Tool) (1,2). Using a heuristic algorithm, BLAST implicitly performs a local alignment of a protein or DNA query against sequences in the corresponding database. The BLAST output then ranks each potential database match according to an E-value, which is derived from the corresponding local maximum score, given in bits. For each local maximum score y, the corresponding E-value Ey gives (under a random model) the expected number of false positives with a lower rank in the output. Thus, a small E-value indicates that the corresponding alignment is unlikely to occur by chance alone, whereas a large E-value indicates an unremarkable alignment. Without doubt, BLAST's E-values contribute substantially to its popularity. Let us discuss the BLAST E-value Ey further here. (The Materials and Methods section also continues the discussion.) BLAST assumes a random model in which each unrelated pair of sequences A[1, m] = A1 ··· Am and B[1, n] = B1 ··· Bn consists of random letters chosen independently from a background distribution. BLASTP (BLAST for proteins), e.g. assumes that random proteins are composed of amino acids chosen independently from the Robinson and Robinson frequency distribution (3). BLAST also requires an input, a matrix s(Ai, Bj) for scoring matches between the letters Ai and Bj. BLASTP, e.g. uses the BLOSUM62 scoring matrix (4) as its default, offering as alternatives a few other PAM (5) and BLOSUM matrices. BLAST also enhances its detection of remote sequence similarities by using gapped sequence alignment. The cost of introducing a gap into an alignment is given by the ‘gap penalty’ Δ(g), where g is the gap length. Practical gap penalties Δ are usually super-additive, i.e. Δ(g) + Δ(h)≥Δ(g + h), so the concatenation of optimal subsequence alignments has a score no less than the sum of their scores. 
(However, our theory is not restricted to super-additive gap penalties.) Affine gap penalties Δ(g) = a + bg are typical in database searches. We refer to the letter distribution, the scoring matrix and the gap penalty collectively as ‘BLAST parameters’. Throughout the paper, we assume a ‘logarithmic regime’ (6), where the alignment scores of long random sequences have a negative expectation. In the logarithmic regime, the BLAST E-value Ey is approximately $$E_y \approx kmn\,e^{-\lambda y} \qquad (1)$$ for large y. Under a Poisson approximation (7) for large y, the E-value Ey yields the P-value Py = 1 − exp(−Ey). Because of Equation 1, the tail probability Py corresponds to a Gumbel distribution with ‘scale parameter’ λ and ‘pre-factor’ k. For ungapped local alignment (i.e. the special case Δ(g) = ∞, which disallows gaps in the optimal local alignment), a rigorous theory furnishes analytic formulas for the Gumbel parameters λ and k (7,8). For gapped local alignment, analytic results are scarce and usually come at a price: they depend on approximations whose accuracy in general is unknown (9–12). In the absence of a rigorous theory for gapped local alignment, computer simulations have confirmed the validity of Equation 1 (13–16), and in the absence of formulas, they have also provided estimates of λ and k (16–19). Because of the exponentiation in Equation 1, errors in λ have a greater practical impact than errors in k. Thus, for use in BLAST, λ must be known to within 1–4% relative error; k, to within 10% (20). Therefore, in statements about computational speed, the following implicitly assumes that the estimation of λ and k is carried out to these accuracies, unless stated otherwise. Presently, the BLAST program precomputes λ and k offline, using the so-called ‘island method’ (15,20). Because of the precomputation, users are given a narrow choice indeed of BLAST parameters.
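As a concrete illustration of Equation 1 and the Poisson approximation, the E-value and P-value take only a few lines to compute (the numeric inputs below are illustrative placeholders, not the paper's fitted parameters):

```python
import math

def blast_pvalue(y, m, n, lam, k):
    """E-value of Equation 1, E_y = k*m*n*exp(-lam*y), and the
    Poisson-derived P-value P_y = 1 - exp(-E_y)."""
    E = k * m * n * math.exp(-lam * y)
    return E, 1.0 - math.exp(-E)

# Illustrative values only: a score of y = 50 for two length-250 sequences.
E, P = blast_pvalue(y=50, m=250, n=250, lam=0.267, k=0.041)
print(E, P)
```

Note how the exponential dependence on λy makes the E-value far more sensitive to errors in λ than to errors in k, which is exactly the point about required accuracies above.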
The choice of BLAST parameters would be much less restricted if λ and k could be computed online (in, say, less than 1 s) before searching a database with arbitrary BLAST parameters. Accordingly, much recent research has been directed toward speeding the estimation of λ and k. With the ultimate aim of estimating λ and k online, Bundschuh gave some interesting conjectures about λ (21,22). He then applied them in global alignment simulations that estimated λ as much as five times faster than the island method. Later, we extended his conjectures, reducing the sequence length required to estimate λ by almost a factor of 10 (23). Despite their obvious promise, even with further improvements in speed, global alignment simulations will remain impractical for online estimation in BLAST unless they can be made to estimate k as well. To remedy the problem, we relate k to global alignment and then exploit the relationship in simulations that estimate both λ and k. ## MATERIALS AND METHODS ### Notation for global sequence alignment We denote the non-negative integers by $\mathbb{Z}_+ = \{0, 1, 2, 3, \ldots\}$. Throughout the paper, the letters g, h, i, j, m, n and the letter y denote integers. Consider a pair A = A1A2… and B = B1B2… of infinite sequences. The corresponding global alignment graph Γ is a directed, weighted lattice graph in two dimensions, as follows. The vertices of Γ are $v = (i, j) \in \mathbb{Z}_+^2$, the non-negative two-dimensional integer lattice. Three sets of directed edges e come out of each vertex v = (i, j): northward, northeastward and eastward. One northeastward edge goes into (i + 1, j + 1) with weight s(Ai+1, Bj+1). For each g > 0, one eastward edge goes into (i + g, j) and one northward edge goes into (i, j + g); both are assigned the same weight −Δ(g) < 0. For simplicity, we assume s(Ai, Bj) and Δ(g) are always integers, with greatest common divisor 1.
A directed path π = (v0, e1, v1, e2, … , eh, vh) in Γ is a finite, alternating sequence of vertices and edges that starts and ends with a vertex. We say that the path π starts at v0 and ends at vh. For instance, each gapped alignment of the subsequences A[i + 1, m] = Ai+1 ··· Am and B[j + 1, n] = Bj+1 ··· Bn corresponds to exactly one directed path that starts at v0 = (i, j) and ends at vh = (m, n). The alignment's score is the ‘path weight’ $W_\pi = \sum_{i=1}^{h} W(e_i)$, the sum of the weights W(ei) of the edges ei. By convention, any trivial path π = (v0) consisting of a single vertex has weight Wπ = 0. Let Πij be the set of all paths π starting at v0 = (0, 0) and ending at vh = (i, j). Define the ‘global score’ $S_{ij} = \max\{W_\pi : \pi \in \Pi_{ij}\}$. The paths π starting at v0 and ending at vh with weight Wπ = Sij are ‘optimal global paths’ and correspond to ‘optimal global alignments’ between A[1, i] and B[1, j]. The Needleman–Wunsch algorithm computes the global scores Sij (24). Let $\Pi = \cup_{(i,j) \in \mathbb{Z}_+^2} \Pi_{ij}$ be the set of all paths π starting at v0 = (0, 0). Define the ‘global maximum’ $M = \max\{W_\pi : \pi \in \Pi\}$, which is also the maximum $M = \max\{S_{ij} : (i, j) \in \mathbb{Z}_+^2\}$ of all global scores. Let $N(y) = \#\{(i, j) \in \mathbb{Z}_+^2 : S_{ij} = y\}$ denote the number of vertices with global score y. Define the lattice rectangle [0, n] = {0, 1, …, n}. Our simulations involved a square subset [0, n]² of $\mathbb{Z}_+^2$. In particular, single subscripts connote quantities for the square: $M_n = \max\{S_{ij} : (i, j) \in [0, n]^2\}$, the square's global maximum; $E_n = \max\{\max_{0 \le i \le n} S_{in},\ \max_{0 \le j \le n} S_{nj}\}$, its edge maximum; and $N_n(y) = \#\{(i, j) \in [0, n]^2 : S_{ij} = y\}$, the number of its vertices with global score y. ### The formula for k from global alignment We can show heuristically that $k = \lim_{y \to \infty} k_y$, where $$k_y = \frac{e^{\lambda y}}{1 - e^{-\lambda}} \cdot \frac{[\mathbb{P}(M = y)]^2}{\mathbb{E}N(y)} \qquad (2)$$ (see our Appendix, online). Ultimately, the heuristics behind Equation 2 are based on two observations about random sequence matches. First, the two ends of a strong local alignment match are mirrors of each other.
Second, the right end of a strong alignment match looks the same for both local and global alignment. Equation 2 computes ky from three components: the scale parameter λ, the probability ℙ(M = y) of a global maximum y, and the expected number 𝔼N(y) of vertices with global score Sij = y. We now describe how our simulations determined the three components. ### Numerical scheme for λ First, we estimated λ from random global alignments (23). All simulations used affine gap penalties Δ(g) = a + bg and the corresponding global alignment algorithms for computing Sij (25). Recall the edge maximum En (defined at the end of the notation for global sequence alignment). As shown elsewhere (23), its cumulant generating function satisfies $$\ln[\mathbb{E}\exp(\lambda E_n)] = \beta_0 + \beta_1(\lambda)\,n + O(\delta^n), \qquad (3)$$ where 0 ≤ δ < 1. The root $\lambda = \hat{\lambda}$ of β1(λ) = 0 is our estimate for λ. To estimate $\mathbb{E}\exp(\lambda E_n)$ efficiently, we used Bundschuh's importance sampling methods (21), which apply if the gap penalty is affine. Briefly, importance sampling is a variance-reduction technique for simulating rare events. In global alignment simulations, e.g. a large edge maximum is a rare event. By simulating optimal subsequence pairs in ‘hybrid alignment’ (a type of optimized Bayesian local alignment) (26), we ensured that our realizations frequently generated a large edge maximum En. Accordingly, we simulated a pair of sequences of some ‘base length’ n = l. After correcting for biases induced by the importance sampling distribution, we estimated $\mathbb{E}\exp(\lambda E_l)$. Equation 3 corresponds to an asymptotic equality with two free parameters, β0 and β1(λ), which we estimated with robust regression. Robust regression was originally developed as an antidote to outliers (27), which badly skew least-squares regression (28–31). As noted elsewhere (23), however, robust regression is also remarkably suited for extracting asymptotic parameters like β0 and β1(λ).
Robust regression requires the specification of an influence function, to quantify the influence of potential outliers on the regression result. Many influence functions exist (27), but the Andrews function with a = 1.339 [(27), p. 388; (29)] works well in asymptotic regression, because it ignores points that obviously lie outside the asymptotic regime (23). Accordingly, we applied robust regression to Equation 3. To solve β1(λ) = 0, let λu be the scale parameter for ungapped local alignment, which can be determined analytically. Because 0 ≤ λ ≤ λu, repeated bisection of the interval [0, λu] yielded an estimate $\hat{\lambda}$ for the root of the equation β1(λ) = 0. In practice, multiple roots did not occur. ### Numerical scheme for k Next, we estimated ℙ(M = y) and 𝔼N(y). Importance sampling had already generated sequence pairs of base length l for estimating λ. The bias in importance sampling tends to yield large global scores Sij, ascending toward the global maximum M. To determine N(y), we needed to simulate and count all vertices with global scores Sij = y. Therefore, we extended the sequence pair beyond the base length l using random letters with the unbiased Robinson and Robinson frequencies. The global scores Sij beyond the base length l became progressively smaller, thereby permitting determination of N(y). Given ε > 0, we simulated a random number $\bar{L}$ of unbiased letters in each sequence, until we found some total length $L = l + \bar{L}$ such that $$(2L + 1)\exp\{-\lambda(M_L - E_L)\} \le \varepsilon. \qquad (4)$$ The edge maximum EL is a maximum over 2L + 1 vertices. Therefore, for small enough stringencies ε > 0, if the edge maximum EL of the contributing 2L + 1 vertices satisfies Equation 4, it is probable that M = ML, because elongating the sequences is unlikely to increase the estimate of M. Similarly, the elongation does not increase the estimate of N(y) much. After appropriate averaging, our simulations therefore yielded estimates $\hat{\mathbb{P}}(M = y) \approx \mathbb{P}(M_L = y)$ and $\hat{\mathbb{E}}N(y) \approx \mathbb{E}N_L(y)$ for ℙ(M = y) and 𝔼N(y).
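The repeated-bisection step for solving β1(λ) = 0 on [0, λu] can be sketched as follows; this is a minimal illustration in which a toy function stands in for β1, since the actual β1 comes from the robust regression described above:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Repeated bisection on [lo, hi]; assumes f(lo) and f(hi) bracket a root."""
    f_lo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (f_lo > 0):
            lo, f_lo = mid, f(mid)   # root lies in the upper half
        else:
            hi = mid                 # root lies in the lower half
    return 0.5 * (lo + hi)

# Toy stand-in for beta_1(lambda), with its root placed at lambda = 0.267.
lam_hat = bisect_root(lambda lam: lam - 0.267, 0.0, 1.0)
print(round(lam_hat, 6))  # 0.267
```

Bisection halves the bracketing interval on every iteration, so reaching a tolerance of 10⁻⁹ on a unit interval costs only about 30 evaluations of β1, which is cheap next to the simulations themselves.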
With the simulation estimates $\hat{\lambda}$, $\hat{\mathbb{P}}(M = y)$ and $\hat{\mathbb{E}}N(y)$ in hand, we found that errors in $\hat{\lambda}$ were negligible in practice. In contrast, the sample standard deviations (32) of $\hat{\mathbb{P}}(M = y)$ and $\hat{\mathbb{E}}N(y)$, denoted by sM and sN, were not. We calculated an estimate $\hat{k}_y$ for ky by substituting $\hat{\lambda}$, $\hat{\mathbb{P}}(M = y)$ and $\hat{\mathbb{E}}N(y)$ into Equation 2. We estimated the error $s(\hat{k}_y)$ in $\hat{k}_y$ from the equation $$s(\hat{k}_y) = \max\left| \frac{e^{\hat{\lambda} y}}{1 - e^{-\hat{\lambda}}} \cdot \frac{[\hat{\mathbb{P}}(M = y) \pm s_M]^2}{\hat{\mathbb{E}}[N(y)] \pm s_N} - \hat{k}_y \right|. \qquad (5)$$ Note that Equation 5 explicitly neglects the error in the estimate $\hat{\lambda}$. Finally, we used robust regression to extract a summary estimate $\hat{k}$ from the estimates $\hat{k}_y \pm s(\hat{k}_y)$ for individual y. To begin with, consider a constant regression model η = 1α + e, where η is a column vector consisting of the values $\hat{k}_y$, 1 is a column vector whose elements are all 1, the constant α is the summary estimate $\hat{k}$, and e is the column vector consisting of the errors $s(\hat{k}_y)$. Our ultimate aim is to compute $\hat{k}$ rapidly, with as few realizations as possible. Unfortunately, for small numbers of realizations, the errors sM and sN are correlated with the corresponding estimates $\hat{\mathbb{P}}(M = y)$ and $\hat{\mathbb{E}}N(y)$. The correlations propagate to $s(\hat{k}_y)$, noticeably biasing the summary estimate $\hat{k}$, with $\mathbb{E}\hat{k} < k$ (see Figure 1). [Figure 1: Plot of estimates of $\hat{k}_y$ against the global score y for the BLOSUM62 scoring matrix with an affine gap cost of 11 + g for a gap of length g, with random sequences whose letters are chosen according to the empirical Robinson and Robinson amino acid frequencies.] To avoid the bias, we applied the constant regression model η′ = 1α′ + e′ to the errors $s(\hat{k}_y)$ themselves. The elements of the column vector η′ were the errors $s(\hat{k}_y)$, with the error in each $s(\hat{k}_y)$ taken to be a constant s derived through a standard formula [(27), p. 387], so e′ = 1s. Robust regression thus gave a constant estimate $\alpha' = \hat{s}(\hat{k})$ of the errors $s(\hat{k}_y)$.
We substituted the constant error estimate $e = \mathbf{1}\alpha' = \mathbf{1}\hat{s}(\hat{k})$ back into the constant regression η = 1α + e of $\hat{k}_y$ to derive a robust regression estimate $\hat{k}$ for k. Although somewhat ad hoc, the constant regression of the errors successfully reduced biases (see Figure 3). [Figure 3: Plot of relative errors of the estimate $\hat{k}$ obtained via robust regression using $\hat{k}_y \pm \hat{s}(\hat{k})$ and $\hat{k}_y \pm s(\hat{k}_y)$ against different numbers of simulations. Each bar represents an average over 20 absolute relative errors.] Even for large simulations (e.g. 10^6 realizations), however, sampling of the event [M = y] was inadequate for many large y, with ℙ(M = y) likely being underestimated. Although the corresponding average was unbiased (in theory, at least), we suspect that it had a distribution whose skewing increased with y. Consequently, for large y, $\hat{k}_y$ often slightly underestimated the true k, with improbable but substantial overestimations maintaining a correct expectation $\mathbb{E}\hat{k}_y = k$ (see Figure 2). The putative skewing also made the anticipated relation $\mathbb{P}(M = y) \approx e^{\lambda}\,\mathbb{P}(M = y + 1)$ fail for large y. To avoid the skewing, we therefore restricted robust regression of $\hat{k}_y$ to the range [a, b] of y that minimized the function $$f(a, b) = \frac{1}{b - a + 1} \sum_{y=a}^{b} \left| \frac{\mathbb{P}(M = y)}{\mathbb{P}(M \ge y)} - (1 - e^{-\lambda}) \right|. \qquad (6)$$ [Figure 2: Plot of estimates of $\hat{k}_y$ against the global score y for 10^6 realizations. The simulation conditions were the same as in Figure 1. The error bars showing $s(\hat{k}_y)$ for the under-sampled asymptotic regime y ∈ [41, 100] are large and are omitted.] ### Software and Hardware Computer code was written in C++ and compiled with the Microsoft® Visual C++® 6.0 compiler. The computer had a single Intel® Pentium® 4 2.8 GHz processor with 0.5 GB RAM and ran the Microsoft® Windows® 2000 operating system. ## RESULTS Tables 1 and 2 give estimates of the Gumbel parameters λ and k for all online options of the BLASTP parameters. They therefore confirm that our simulations and our formulas for k produced correct results.
Other figures show results for the BLASTP default parameters, namely, the Robinson and Robinson amino acid frequencies (3), the BLOSUM62 scoring matrix and the gap cost Δ(g) = 11 + g. Other BLAST parameters tested gave comparable results, unless indicated otherwise (data not shown). [Table 1: Estimates of λ for all online options of the BLASTP parameters.] [Table 2: Estimates of k for all online options of the BLASTP parameters.] Empirically, simulations using the BLASTP default parameters needed a base length of l = 50 and a stringency ε = 10^{−2} for the accuracies required (λ, 1%; k, 10%). For scoring matrices with more dominant diagonals than BLOSUM62, shorter base lengths sufficed (e.g. for PAM30, l = 15). Figure 1 plots the estimates $\hat{k}_y$ with their standard error bars $s(\hat{k}_y)$ against the global score y, up to y = 25. Each point represents 30000 realizations. The horizontal thick line represents the previous best estimate k ≈ 0.041 and the dotted line, the biased summary estimate $\hat{k} = 0.036$ due to the positive correlation between $\hat{k}_y$ and $s(\hat{k}_y)$. Figure 1 therefore motivated us to regress the errors in $\hat{k}_y$, to produce a constant error estimate $\hat{s}(\hat{k})$, as described in the Materials and Methods. Figure 2 plots the estimates $\hat{k}_y$ against the global score y, up to y = 100. Each point represents 10^6 realizations. We obtained the estimate $\hat{\lambda}$ and used it to estimate $\hat{k}_y$. The range y ∈ [0, 3] is not asymptotic, so the $\hat{k}_y$ do not approximate the true k very well. The range y ∈ [4, 40] is asymptotic and adequately sampled, so the $\hat{k}_y$ fluctuate randomly around the true k. The range y > 40 is also asymptotic, but it is not adequately sampled, so the $\hat{k}_y$ usually underestimate the true k. Figure 2 motivated us to regress only in the range [a, b] minimizing Equation 6, as described in the Materials and Methods.
Figure 3 plots the relative errors of the summary estimate $\hat{k}$ using $\hat{k}_y \pm s(\hat{k}_y)$ (with skewed error estimates $s(\hat{k}_y)$) and those using $\hat{k}_y \pm \hat{s}(\hat{k})$ (with the constant error estimate $\hat{s}(\hat{k})$) against different numbers of realizations. All errors in $\hat{k}$ were computed relative to the approximation k ≈ 0.041. Each error plotted is the average of the absolute relative errors for 20 independent simulations, each using the indicated number of realizations. White bars show the results for $\hat{k}_y \pm \hat{s}(\hat{k})$; black bars, for $\hat{k}_y \pm s(\hat{k}_y)$. For 10000 realizations, the constant error estimate $\hat{s}(\hat{k})$ reduces the relative errors dramatically. As the number of realizations increases, the difference in estimation efficiency between $\hat{k}_y \pm s(\hat{k}_y)$ and $\hat{k}_y \pm \hat{s}(\hat{k})$ decreases. Figure 3 shows that 10000 realizations estimated k with less than 10% relative error. The same 10000 realizations also estimated $\hat{\lambda}$ with less than 0.8% relative error (data not shown). The simulations of Figure 3 estimated $\hat{k}$ from 10000 realizations in less than 30 s. For comparison, the same simulations could have estimated $\hat{\lambda}$ in less than 7 s. For the PAM30 matrix with Δ(g) = 9 + g, they estimated λ and k in less than 4 s.

## DISCUSSION

BLAST programs (BLASTP, PSI-BLAST, etc.) are restricted to specific scoring schemes, because the time-consuming local alignment simulations for estimating the corresponding Gumbel parameters must be done offline. However, simulations of global alignment can estimate the Gumbel scale parameter λ for local alignment (6). Some global alignment methods are as much as five times faster than the best local alignment methods (21,23), so global alignment has considerable potential for online estimation of the Gumbel parameter λ. This paper surmounts an obstacle to online estimation by demonstrating that simulations of global alignment can also determine the Gumbel pre-factor k.
Table 2 displays the results of global alignment simulations over a wide range of BLAST parameters, all of which gave correct estimates of the corresponding k and supported the validity of our methods for computing k. Global alignment simulation therefore appears to be a feasible method for estimating both Gumbel parameters, λ and k. (The BLASTP default parameters provide a standard for quantifying speed, so the following results apply to the BLASTP defaults, unless stated otherwise.) With local alignment, estimates of λ required 40000 sequence-pairs of minimum length 600 (21); with our methods, 5000 sequence-pairs of maximum length 50 (23). In fact, our methods attained 1.3% accuracy in λ with only 1000 sequence-pairs of maximum length 50. In our hands, k was more difficult to estimate than λ, with 10% relative errors requiring 10000 sequence-pairs of average length 140. In summary, the methods presented here for estimating the Gumbel parameters λ and k represent at least a 3-fold improvement in speed over local alignments. Online computation of the BLAST P-value requires more than the Gumbel parameters. It also requires an estimate of the ‘finite-size effect’ (10,13,33,34). Global alignment (or some variant of it) can indeed produce the required estimate (manuscript in preparation). Without the finite-size estimate in hand, however, we were not strongly motivated to incorporate technical improvements or heuristics into our methods. Bundschuh, for example, implemented a diagonal-cutting heuristic to remove irrelevant off-diagonal elements in the global alignment matrix (21); we did not. That heuristic could probably speed our computation by a further factor of at least three. Online BLAST estimation of the Gumbel parameters is likely just a few years away.

## Acknowledgments

We would like to acknowledge helpful discussions with Dr Ralf Bundschuh and Dr Stephen Altschul.
This work was supported in whole by the Intramural Research Program of the National Library of Medicine at the National Institutes of Health/DHHS. Funding to pay the Open Access publication charges for this article was provided by the National Library of Medicine at the National Institutes of Health/DHHS. Conflict of interest statement. None declared.

#### APPENDIX

In the Appendix, we give a heuristic derivation of Equation 2.

##### Notation for local sequence alignment

For local alignment, consider a pair $\hat{A} = \ldots \hat{A}_{-1}\hat{A}_{0}\hat{A}_{1} \ldots$ and $\hat{B} = \ldots \hat{B}_{-1}\hat{B}_{0}\hat{B}_{1} \ldots$ of doubly-infinite sequences. Their local alignment graph $\hat{\Gamma}$ is a directed, weighted lattice graph in two dimensions, as follows. The vertices v of $\hat{\Gamma}$ are v = (i, j) ∈ ℤ², the entire two-dimensional integer lattice. In other respects, particularly with respect to the edges between its vertices, $\hat{\Gamma}$ has the same structure as the global alignment graph Γ. We base the graph $\hat{\Gamma}$ on the entire two-dimensional integer lattice ℤ² because of our interest in the Gumbel distribution. In intuitive terms, the BLAST E-value $E_y$ follows the Gumbel distribution only if the local alignment does not ‘see’ the ends of the sequences, so that finite-size effects can be neglected (13,33). Let $\hat{\Pi}_{ij}$ be the set of all paths π ending at $v_h = (i, j)$, regardless of their starting vertex. Define the ‘local score’ $\hat{S}_{ij} = \max\{W_{\pi} : \pi \in \hat{\Pi}_{ij}\}$. The paths π ending at $v_h = (i, j)$ with local score $W_{\pi} = \hat{S}_{ij}$ are ‘optimal local paths’ corresponding to ‘optimal local alignments’ matching subsequences of $\hat{A}$ and $\hat{B}$ up to and including the letters $\hat{A}_i$ and $\hat{B}_j$. Unlike the singly-infinite sequences A and B, the doubly-infinite sequences $\hat{A}$ and $\hat{B}$ correspond to the entire lattice ℤ². The lattice ℤ² is invariant under translation (i.e. it appears the same from each of its vertices). Thus, if $\hat{A}$ and $\hat{B}$ are sequences with independent random letters, the corresponding local scores $\hat{S}_{ij}$ are ‘stationary’ (i.e. their joint distribution is invariant under translation).
Stationary scores carry a prime elsewhere (i.e. $\hat{S}'_{ij}$) (35), which we drop here for brevity. For many purposes, translation invariance renders all vertices in ℤ² equivalent, so it usually suffices to define the quantities below solely at the origin (0, 0). The definition at other vertices is usually left implicit. If the sequences $\hat{A}$ and $\hat{B}$ were singly-infinite, the Smith–Waterman algorithm could compute the corresponding local scores $\hat{S}_{ij}$ (36). Although the algorithm is unable to compute $\hat{S}_{ij}$ for $\hat{A}$ and $\hat{B}$, a rigorous treatment shows that doubly-infinite sequences pose no essential difficulties in the logarithmic regime (35). For efficiency, many simulations of random local alignments partition the vertices in ℤ² into ‘islands’ (described below). To avoid technical nuisances, each vertex must belong to exactly one island, so we define the following strict total order ≺ on ℤ²: (i′, j′) ≺ (i, j) if and only if either i′ + j′ < i + j, or else i′ + j′ = i + j and j′ < j. Let us say that a vertex $(i, j) \in \mathbb{Z}^2_+$ ‘belongs to’ the origin if (0, 0) is the greatest vertex $v_0 = (i′, j′)$ (under the total order ≺) such that $\hat{S}_{ij} = W_{\pi}$ for some path π starting at $v_0 = (i′, j′)$ and ending at $v_h = (i, j)$. The ‘island’ belonging to (0, 0) is the set $\mathbb{B}_{00} \subseteq \mathbb{Z}^2_+$ of all vertices (i, j) belonging to (0, 0), and we say that (0, 0) ‘owns’ the island. [Equation 12 below uses the translate $\mathbb{B}_{-i,-j}$ of the set $\mathbb{B}_{00}$, where $\mathbb{B}_{-i,-j}$ is the set of all vertices belonging to (−i, −j).] By the following reasoning, $\mathbb{B}_{00}$ is empty if and only if $\hat{S}_{00} > 0$. First, if $\hat{S}_{00} > 0$, there is some path π′ ending at (0, 0) with a positive score. If (0, 0) owned any vertex (i, j), there would be a path π starting at (0, 0) and ending at (i, j) with $\hat{S}_{ij} = W_{\pi}$. Then the path concatenating π′ and π would have a weight exceeding $\hat{S}_{ij} = W_{\pi}$, contrary to the definition of $\hat{S}_{ij}$. Thus, if (0, 0) owns some vertex, $\hat{S}_{00} = 0$.
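The strict total order used to make island ownership unambiguous reduces to a lexicographic comparison of the pair (i + j, j), which is easy to misread in prose. A minimal sketch (function names are ours):

```python
def precedes(v_prime, v):
    """Strict total order on Z^2 from the Appendix:
    (i', j') precedes (i, j) iff i'+j' < i+j, or i'+j' == i+j and j' < j."""
    (ip, jp), (i, j) = v_prime, v
    return (ip + jp, jp) < (i + j, j)

def greatest(vertices):
    """The greatest vertex under the order, e.g. when deciding which
    candidate starting vertex owns a given lattice vertex."""
    return max(vertices, key=lambda v: (v[0] + v[1], v[1]))
```

Because the key (i + j, j) determines (i, j) uniquely, any two distinct vertices are comparable in exactly one direction, which is what guarantees each vertex belongs to exactly one island.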
Conversely, if $\hat{S}_{00} = 0$, then by deliberate construction, the definition of the total order ≺ implies that (0, 0) owns itself [because the weight of the trivial path containing only (0, 0) is 0]. Accordingly, define the ‘local maximum’ [implicitly, on the island $\mathbb{B}_{00}$ belonging to (0, 0)] as $\hat{M} = \max\{\hat{S}_{ij} : (i, j) \in \mathbb{B}_{00}\}$, with the default $\hat{M} = -\infty$ if $\mathbb{B}_{00}$ is empty (i.e. if $\hat{S}_{00} > 0$). Let $\hat{N}(y) = \#\{(i, j) \in \mathbb{B}_{00} : \hat{S}_{ij} = y\}$ denote the number of island vertices with local score y. To connect our quantities explicitly to the Gumbel parameters, define $\hat{M}_{mn} = \max\{\hat{S}_{ij} : 0 \le i \le m, 0 \le j \le n\}$, the maximum local score in the lattice rectangle [0, m] × [0, n]. Let $\rho_y$ be the density of islands yielding a local score $\hat{S}_{ij} \ge y$, or equivalently, the density of their owners in ℤ². Under certain conditions in the logarithmic regime, $\mathbb{P}(\hat{M}_{mn} \ge y) = P_y \approx 1 - \exp(-E_y)$, where as m, n → ∞,

$$E_y = \rho_y mn \approx k\,mn\,e^{-\lambda y}. \qquad (7)$$

Simulations indicate that, to a good approximation, islands yielding a large local score $\hat{S}_{ij}$ occur independently of each other (15). Therefore, Equation 7 asserts that $\rho_y \approx k e^{-\lambda y}$. In a Poisson approximation, $\rho_y$ represents the intensity of the Poisson process on ℤ² that generates the owners of islands yielding a local score $\hat{S}_{ij} \ge y$. Because of translation invariance, the density $\rho_y$ equals the probability that any particular vertex in ℤ² [e.g. (0, 0)] owns an island yielding a local score $\hat{S}_{ij} \ge y$. In other words, $\mathbb{P}(\hat{M} \ge y) = \rho_y \approx k e^{-\lambda y}$. Thus, the limit

$$k = \lim_{y \to \infty} e^{\lambda y}\, \mathbb{P}(\hat{M} \ge y) \qquad (8)$$

exists and equals the pre-factor k.

##### Path reversal identity

To determine k from global alignments, we first relate the global maximum M to the local scores $\hat{S}_{ij}$ with a path reversal identity. Recall the global maximum $M = \max\{W_{\pi} : \pi \in \Pi\}$, where Π is the set of all paths π in $\mathbb{Z}^2_+$ starting at $v_0 = (0, 0)$. Recall also the local score $\hat{S}_{ij} = \max\{W_{\pi} : \pi \in \hat{\Pi}_{ij}\}$, where $\hat{\Pi}_{ij}$ is the set of all paths π in ℤ² ending at $v_h = (i, j)$.
It is believable that for any fixed (i, j) ∈ ℤ², each path in $\hat{\Pi}_{ij}$ with random edge-weights corresponds to a reversal of a path in Π with the same random edge-weights. Thus, for every (i, j) ∈ ℤ², $\mathbb{P}(\hat{S}_{ij} = y) = \mathbb{P}(M = y)$, i.e. the local score and the global maximum have the same distribution. (Note: the equality is solely distributional. In any particular random instance, the local score $\hat{S}_{ij}$ and the global maximum score M are unlikely to be related.) Because the distributional equality holds for every (i, j) ∈ ℤ², we drop the subscript ij on $\hat{S}_{ij}$ and write

$$\mathbb{P}(\hat{S} = y) = \mathbb{P}(M = y). \qquad (9)$$

A formal proof of Equation 9 can be found elsewhere (35).

##### The Poisson clumping heuristic

Consider the Poisson clumping heuristic (37)

$$\mathbb{P}(\hat{S} = y) = \mathbb{P}(\hat{M} \ge y)\, \mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y]. \qquad (10)$$

Equation 10 states that at any fixed vertex (i, j) ∈ ℤ², the probability that $\hat{S}_{ij} = y$ is the density of vertices with a local score y. This density equals $\rho_y = \mathbb{P}(\hat{M} \ge y)$, the density of islands where some local score is at least y, multiplied by $\mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y]$, the expected number $\hat{N}(y)$ of island vertices (i, j) where the local score $\hat{S}_{ij} = y$, given $\hat{M} \ge y$. Equation 10 can be demonstrated as follows. First,

$$\mathbb{E}\hat{N}(y) = \mathbb{P}(\hat{M} \ge y)\, \mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y], \qquad (11)$$

because if $\hat{M} < y$, then $\hat{N}(y) = 0$. Equation 11 follows, because the event $[\hat{M} < y]$ contributes nothing to the expectation on the left. Next, define the indicator $\mathbb{1}_A = 1$ if the event A occurs and $\mathbb{1}_A = 0$ otherwise. Then,

$$\mathbb{E}\hat{N}(y) = \sum_{(i,j)} \mathbb{P}\left(\hat{S}_{ij} = y,\ (i,j) \in \mathbb{B}_{00}\right) = \sum_{(i,j)} \mathbb{P}\left(\hat{S}_{00} = y,\ (0,0) \in \mathbb{B}_{-i,-j}\right) = \mathbb{P}(\hat{S} = y). \qquad (12)$$

The first equality is essentially the definition of $\hat{N}(y)$, which counts the number of vertices belonging to (0, 0) with local score $\hat{S}_{ij} = y$. The second equality exploits the translation invariance of the probabilities associated with $\hat{S}_{ij}$. The third equality merely notes that in the logarithmic regime, (0, 0) must belong to some vertex (35). Equation 10 follows.

##### Our speculations

Based on the success of our simulation results, we speculate.
First,

$$\lim_{y \to \infty} \frac{\mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y]}{\mathbb{E}[N(y) \mid M \ge y]} = 1. \qquad (13)$$

In fact, $\lim_{y \to \infty} \mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y]$ and $\lim_{y \to \infty} \mathbb{E}[N(y) \mid M \ge y]$ are likely to exist as a common finite limit, but Equation 13 suffices for present purposes. Equation 13 can be justified intuitively, as follows. As y → ∞, any vertices satisfying $S_{ij} = y$ become likely to cluster on a single island that has a large maximum local score. Thus, given M ≥ y, the vertices with $S_{ij} = y$ have a structure comparable to the vertices with $\hat{S}_{ij} = y$ on the island belonging to (0, 0), given that the island satisfies $\hat{M} \ge y$. In particular, given M ≥ y, the number N(y) of vertices with $S_{ij} = y$ has a random behaviour similar to the number $\hat{N}(y)$ of vertices with $\hat{S}_{ij} = y$, given $\hat{M} \ge y$. Thus, the expectations approximate each other: $\mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y] \approx \mathbb{E}[N(y) \mid M \ge y]$. Though hardly a ‘speculation’, we assume that $c = \lim_{y \to \infty} e^{\lambda y}\, \mathbb{P}(M \ge y)$ exists. Unfortunately, there is still no rigorous proof of the limit's existence.

##### The formula for k from global alignment

Equation 11 has an analog for global alignment, with a similar demonstration:

$$\mathbb{E}N(y) = \mathbb{E}[N(y) \mid M \ge y]\, \mathbb{P}(M \ge y). \qquad (14)$$

Together, Equations 8–10, 13 and 14 yield

$$k = \lim_{y \to \infty} e^{\lambda y}\, \mathbb{P}(\hat{M} \ge y) = \lim_{y \to \infty} \frac{e^{\lambda y}\, \mathbb{P}(M = y)}{\mathbb{E}[\hat{N}(y) \mid \hat{M} \ge y]} = \lim_{y \to \infty} \frac{e^{\lambda y}\, \mathbb{P}(M = y)}{\mathbb{E}[N(y) \mid M \ge y]} = \lim_{y \to \infty} \frac{e^{\lambda y}\, \mathbb{P}(M = y)\, \mathbb{P}(M \ge y)}{\mathbb{E}N(y)}. \qquad (15)$$

Recall our assumption that $s(A_i, B_j)$ and Δ(g) are always integers:

$$\lim_{y \to \infty} \frac{\mathbb{P}(M = y)}{\mathbb{P}(M \ge y)} = \lim_{y \to \infty} \frac{\mathbb{P}(M \ge y) - \mathbb{P}(M \ge y + 1)}{\mathbb{P}(M \ge y)} = 1 - e^{-\lambda}. \qquad (16)$$

Let $k_y = e^{\lambda y}\, \mathbb{P}(\hat{M} \ge y)$. From Equations 15 and 16, $k = \lim_{y \to \infty} k_y$, where $k_y$ is given by Equation 2.

## REFERENCES

1. Altschul S.F., Gish W., Miller W., Myers E.W., Lipman D.J. Basic local alignment search tool. J. Mol. Biol. 1990;215:403–410. [PubMed] 2. Altschul S.F., Madden T.L., Schaffer A.A., Zhang J., Zhang Z., Miller W., Lipman D.J. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997;25:3389–3402. [PubMed] 3. Robinson A.B., Robinson L.R. Distribution of glutamine and asparagine residues and their near neighbors in peptides and proteins. Proc. Natl Acad. Sci. USA. 1991;88:8880–8884.
[PubMed] 4. Henikoff S., Henikoff J.G. Amino acid substitution matrices from protein blocks. Proc. Natl Acad. Sci. USA. 1992;89:10915–10919. [PubMed] 5. Dayhoff M.O., Schwartz R.M., Orcutt B.C. Atlas of Protein Sequence and Structure. Vol. 3. Silver Spring, MD: National Biomedical Research Foundation; 1978. pp. 345–352. 6. Arratia R., Waterman M.S. A phase transition for the score in matching random sequences allowing deletions. Ann. Appl. Probab. 1994;4:200–225. 7. Dembo A., Karlin S., Zeitouni O. Limit distributions of maximal non-aligned two-sequence segmental score. Ann. Probab. 1994;22:2022–2039. 8. Karlin S., Altschul S.F. Methods for assessing the statistical significance of molecular sequence features by using general scoring schemes. Proc. Natl Acad. Sci. USA. 1990;87:2264–2268. [PubMed] 9. Mott R. Local sequence alignments with monotonic gap penalties. Bioinformatics. 1999;15:455–462. [PubMed] 10. Mott R. Accurate formula for P-values of gapped local sequence and profile alignments. J. Mol. Biol. 2000;300:649–659. [PubMed] 11. Storey J.D., Siegmund D. Approximate p-values for local sequence alignments: numerical studies. J. Comput. Biol. 2001;8:549–556. [PubMed] 12. Siegmund D., Yakir B. Approximate p-values for local sequence alignments. Ann. Stat. 2000;28:657–680. 13. Altschul S.F., Gish W. Local alignment statistics. Methods Enzymol. 1996;266:460–480. [PubMed] 14. Waterman M.S., Vingron M. Rapid and accurate estimates of statistical significance for sequence data base searches. Proc. Natl Acad. Sci. USA. 1994;91:4625–4628. [PubMed] 15. Olsen R., Bundschuh R., Hwa T. Rapid assessment of extremal statistics for gapped local alignment. Proc. Int. Conf. Intell. Syst. Mol. Biol. 1999:211–222. [PubMed] 16. Mott R. Maximum-likelihood estimation of the statistical distribution of Smith–Waterman local sequence similarity scores. Bull. Math. Biol. 1992;54:59–75. 17. Smith T.F., Waterman M.S., Burks C. The statistical distribution of nucleic acid similarities.
Nucleic Acids Res. 1985;13:645–656. [PubMed] 18. Collins J.F., Coulson A.F., Lyall A. The significance of protein sequence similarities. Comput. Appl. Biosci. 1988;4:67–71. [PubMed] 19. Mott R., Tribe R. Approximate statistics of gapped alignments. J. Comput. Biol. 1999;6:91–112. [PubMed] 20. Altschul S.F., Bundschuh R., Olsen R., Hwa T. The estimation of statistical parameters for local alignment score distributions. Nucleic Acids Res. 2001;29:351–361. [PubMed] 21. Bundschuh R. Rapid significance estimation in local sequence alignment with gaps. J. Comput. Biol. 2002;9:243–260. [PubMed] 22. Grossmann S., Yakir B. Large deviations for global maxima of independent superadditive processes with negative drift and an application to optimal sequence alignments. Bernoulli. 2004;10:829–845. 23. Park Y., Sheetlin S., Spouge J.L. Accelerated convergence and robust asymptotic regression of the Gumbel scale parameter for gapped sequence alignment. Journal of Physics A: Mathematical and General. 2005;38:97–108. 24. Needleman S.B., Wunsch C.D. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J. Mol. Biol. 1970;48:443–453. [PubMed] 25. Gotoh O. An improved algorithm for matching biological sequences. J. Mol. Biol. 1982;162:705–708. [PubMed] 26. Yu Y.K., Hwa T. Statistical significance of probabilistic sequence alignment and related local hidden Markov models. J. Comput. Biol. 2001;8:249–282. [PubMed] 27. Montgomery D.C., Peck E.A., Vining G.G. Introduction to Linear Regression Analysis. NY: John Wiley & Sons, Inc.; 2001. 28. Andrews D.F., Bickel P.J., Hampel F.R., Huber P.J., Rogers W.H., Tukey J.W. Robust Estimates of Location: Survey and Advances. Princeton, NJ: Princeton University Press; 1972. 29. Andrews D.F. A robust method for multiple linear regression. Technometrics. 1974;16:523–531. 30. Huber P.J. Robust estimation of a location parameter. Ann. Math. Statist. 1964;35:73–101. 31. Huber P.J.
Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Stat. 1973;1:799–821. 32. Dwass M. Probability and Statistics. NY: W.A. Benjamin; 1970. 33. Spouge J.L. Finite-size corrections to Poisson approximations of rare events in renewal processes. J. Appl. Probab. 2001;38:554–569. 34. Spouge J.L. Finite-size corrections to Poisson approximations in general renewal-success processes. J. Math. Anal. Appl. 2005;301:401–418. 35. Spouge J.L. Path reversal and islands in the gapped alignment of random sequences. J. Appl. Probab. 2004;41:975–983. 36. Smith T.F., Waterman M.S. Identification of common molecular subsequences. J. Mol. Biol. 1981;147:195–197. [PubMed] 37. Aldous D. Probability Approximations via the Poisson Clumping Heuristic. 1st edn. NY: Springer-Verlag; 1989.

Articles from Nucleic Acids Research are provided here courtesy of Oxford University Press