https://zbmath.org/?q=an%3A1244.65122
## Approximation algorithm for a system of pantograph equations. (English) Zbl 1244.65122

Summary: We show how to adapt an efficient numerical algorithm to obtain an approximate solution of a system of pantograph equations. The algorithm is based on a combination of the Laplace transform and the Adomian decomposition method. Numerical examples reveal that the method is quite accurate and efficient: it approximates the solution to a very high degree of accuracy after only a few iterates.

### MSC:

65L99 Numerical methods for ordinary differential equations
34K28 Numerical approximation of solutions of functional-differential equations (MSC2010)

### References:

[1] J. R. Ockendon and A. B. Tayler, “The dynamics of a current collection system for an electric locomotive,” Proceedings of the Royal Society of London A, vol. 322, no. 1551, pp. 447-468, 1971.
[2] S. A. Khuri, “A Laplace decomposition algorithm applied to a class of nonlinear differential equations,” Journal of Applied Mathematics, vol. 1, no. 4, pp. 141-155, 2001. · Zbl 0996.65068
[3] S. A. Khuri, “A new approach to Bratu’s problem,” Applied Mathematics and Computation, vol. 147, no. 1, pp. 131-136, 2004. · Zbl 1032.65084
[4] M. Khan, M. Hussain, H. Jafari, and Y. Khan, “Application of Laplace decomposition method to solve nonlinear coupled partial differential equations,” World Applied Sciences Journal, vol. 9, pp. 13-19, 2010.
[5] M. Y. Ongun, “The Laplace Adomian Decomposition Method for solving a model for HIV infection of CD4+ T cells,” Mathematical and Computer Modelling, vol. 53, no. 5-6, pp. 597-603, 2011. · Zbl 1217.65164
[6] A.-M. Wazwaz, “The combined Laplace transform-Adomian decomposition method for handling nonlinear Volterra integro-differential equations,” Applied Mathematics and Computation, vol. 216, no. 4, pp. 1304-1309, 2010. · Zbl 1190.65199
[7] Y. Khan and N. Faraz, “Application of modified Laplace decomposition method for solving boundary layer equation,” Journal of King Saud University - Science, vol. 23, no. 1, pp. 115-119, 2011.
[8] E. Yusufoğlu, “Numerical solution of Duffing equation by the Laplace decomposition algorithm,” Applied Mathematics and Computation, vol. 177, no. 2, pp. 572-580, 2006. · Zbl 1096.65067
[9] G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, vol. 60 of Fundamental Theories of Physics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994. · Zbl 0802.65122
[10] G. Adomian, “A review of the decomposition method in applied mathematics,” Journal of Mathematical Analysis and Applications, vol. 135, no. 2, pp. 501-544, 1988. · Zbl 0671.34053

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or perfect precision of the matching.
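The summary does not reproduce the paper's algorithm, so the following is only a hedged sketch of the decomposition idea for the simplest scalar linear pantograph equation y'(t) = a·y(t) + b·y(qt), y(0) = 1 (the parameters a, b, q below are arbitrary illustrative choices, not from the paper). Each Adomian component obtained from y_{n+1}(t) = ∫₀ᵗ [a·y_n(s) + b·y_n(qs)] ds is a monomial, so the decomposition series reduces to a coefficient recurrence, and the truncated series can be checked against the equation's residual:

```python
# Adomian-style decomposition for y'(t) = a*y(t) + b*y(q*t), y(0) = 1.
# Each component y_n is a monomial c_n * t**n, giving the recurrence
# c_{n+1} = c_n * (a + b*q**n) / (n + 1).
a, b, q = -1.0, 0.5, 0.5   # illustrative parameters, not from the paper
N = 25                     # number of series terms kept

c = [1.0]                  # c_0 from the initial condition y(0) = 1
for n in range(N):
    c.append(c[-1] * (a + b * q**n) / (n + 1))

def y(t):
    # truncated decomposition series
    return sum(cn * t**n for n, cn in enumerate(c))

def dy(t):
    # term-by-term derivative of the truncated series
    return sum(n * cn * t**(n - 1) for n, cn in enumerate(c) if n > 0)

# residual of the pantograph equation at t = 1; it shrinks rapidly with N
t = 1.0
residual = abs(dy(t) - a * y(t) - b * y(q * t))
```

For this linear scalar case the decomposition coincides with successive Picard-type integrations; the paper's systems and its Laplace-transform step are not reproduced here.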
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9176046252250671, "perplexity": 783.9300460155308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00249.warc.gz"}
https://www.shirui.me/2020/01/
## Saturday, January 18, 2020

### Not SQ again...

The structure factor (SQ) is a characterization quantity frequently calculated in molecular simulations. It is easy to code the calculation program according to the definition. Most molecular simulations adopt periodic boundary conditions, so $$S(\mathbf{q}):=\mathcal{FT}\lbrace\langle\rho(\mathbf{r})\rho(\mathbf{0})\rangle\rbrace(\mathbf{q})$$ is actually a circular convolution, where an FFT gives a dramatic boost compared to the direct calculation. The steps using the FFT are:

1. Calculate $\rho(\mathbf{r})$, a summation of Dirac delta functions that can be estimated as a histogram;
2. Calculate $\hat{\rho}(\mathbf{q})$ by FFT;
3. $S(\mathbf{q})=|\hat{\rho}(\mathbf{q})|^2$;
4. Take the mean over the modulus: $S(q)=\int S(\mathbf{q})\delta(|\mathbf{q}|-q)\mathrm{d}\mathbf{q}/\int \delta(|\mathbf{q}|-q)\mathrm{d}\mathbf{q}$

Efficiency

If the simulation box is divided into $N$ bins (in 3D systems, $N=N_xN_yN_z$ for example), the FFT gives $O(N\log(N))$ complexity and step 4 is $O(N)$. The direct method, in comparison, needs to loop over at least the number of particles for each of the $N$ bins of $\mathbf{q}$, and the number of particles is obviously way larger than $\log(N)$. Since the data of most simulations are real numbers, an rFFT designed for real inputs can further speed up the program. For a 3D example, with i, j, k representing $N_x$, $N_y$, $N_z$ respectively, the full transform is recovered from the real transform by Hermitian symmetry:

fftn(a) == np.concatenate([rfftn(a), conj(rfftn(a))[-np.arange(i), -np.arange(j), np.arange(k - k//2 - 1, 0, -1)]], axis=-1)

Accuracy

1. Binsize effect. The binsize of the histogram should be smaller than half of the minimum pair distance in the system, according to the Nyquist-Shannon sampling theorem; for example, in Lennard-Jones systems, $0.4\sigma$ is a good choice. However, if only small values of $q$ are of interest, e.g., in some self-assembly systems, there is no need to calculate at "pair accuracy": a sampling distance smaller than half of the domain size of interest is fine.
2. Randomness in the bin. Particles in bins are randomly distributed, especially for the large binsizes mentioned above. The summation $\sum_i \exp(-\mathbb{I}\mathbf{q}\cdot\mathbf{r}_i)$ can be decomposed into $$\sum_\mathrm{bin} \exp(-\mathbb{I}\mathbf{q}\cdot\mathbf{r}_\mathrm{bin})\left(\sum_i \exp( -\mathbb{I}\mathbf{q}\cdot\delta\mathbf{r}_i)\right)$$ The idea is straightforward: the position of particle $i$ can be decomposed as $\mathbf{r}_i=\mathbf{r}_\mathrm{bin}+\delta\mathbf{r}_i$, and $\delta\mathbf{r}$ is assumed to be randomly distributed within the bin. The inner summation is then represented as $n_\mathrm{bin}\overline{\exp(-\mathbb{I}\mathbf{q}\cdot\delta\mathbf{r})}$, where the average is approximated according to the distribution of $\delta\mathbf{r}$. If we consider $\delta r$ (in the 1D case, for example) uniformly distributed in $(-L/2, L/2)$ with $L$ the binsize, the average is $\mathrm{sinc}(qL/2)$. Multiplying by this sinc factor during step 4 corrects $S(q)$.

## Saturday, January 11, 2020

### Free energy calculation: umbrella integration

The formulae of the umbrella sampling method can be found on Wikipedia. In umbrella sampling, a reaction coordinate $\xi$ is pre-defined from the atomic coordinates, and a bias potential is added to the atoms of interest to keep $\xi$ of the system near a specific window $\xi_w$. The bias is usually a harmonic potential: $$u^b_w(\xi)=0.5k_w(\xi-\xi_w)^2$$ Therefore, the free energy of the biased system is $A^b_w = A^{ub}_w + u^b_w(\xi)$. The superscript $ub$ is short for "un-biased".
In the simulations, we can sample the reaction coordinate in each window and evaluate its distribution $P^b(\xi)$. Since the free energy is $A=-k_BT\ln(P)$, we have: $$A_w^{ub}(\xi) = -k_BT\ln(P^b_w(\xi))-u^b_w(\xi)-F_w$$ where $F_w$ is a reference free energy of each window and remains an unknown constant. One method to determine $F_w$ is WHAM. In 2005, Kästner et al. (The Journal of Chemical Physics 123, no. 14 (October 8, 2005): 144104) proposed a new method whose idea is to take the derivative of $A^{ub}_w$ with respect to $\xi$, which eliminates $F_w$. In this method, $P(\xi)$ is assumed to be analytic, e.g., a Gaussian distribution. The general idea is to expand $A(\xi)$ into a Taylor series (at $\langle \xi \rangle_w$): $$A(\xi)=a_1 \xi + a_2 \xi^2 + a_3 \xi^3 +\ldots$$ If $a_i=0$ for $i\ge 3$, the Gaussian form is recovered. For systems with higher-order terms, the distributions generally have, for example, a non-zero skewness. The crucial step is to determine $\lbrace a_i\rbrace$. In practice, a probability of the form $\exp(-\sum_i a_i \xi^i)$ is difficult to determine; in 2012, Kästner et al. (The Journal of Chemical Physics 136, no. 23 (June 21, 2012): 234102) studied cases with order $4$ and small $a_3, a_4$. I have tried another, more "numerical" method to calculate $\lbrace a_i\rbrace$ when the datasets are poorer: fit $\exp(-\sum_i a_i \xi ^i)$ to a Gaussian-KDE estimate of the distribution to find $\lbrace a_i \rbrace$. For some "poor" datasets, higher-order terms of the expansion of the free energy are required. The normalization factor of the distribution is estimated as $n=\int_{\xi_\mathrm{min}}^{\xi_\mathrm{max}} P(\xi)\mathrm{d}\xi$; the fitting range should be chosen carefully to ensure the convergence of $n$.
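The window-wise derivative that umbrella integration produces can be checked on synthetic data. The sketch below is an illustration, not from the post: the quadratic test potential, spring constant, and sample counts are arbitrary choices, and each biased window's Gaussian distribution is sampled directly instead of running a simulation. Differentiating the relation $A_w^{ub}(\xi) = -k_BT\ln(P^b_w(\xi))-u^b_w(\xi)-F_w$ for Gaussian $P^b_w$ gives $\mathrm{d}A/\mathrm{d}\xi = k_BT(\xi-\langle\xi\rangle_w)/\sigma_w^2 - k_w(\xi-\xi_w)$, which is compared with the exact derivative $A'(\xi)=\kappa\xi$:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                              # k_B * T
kappa = 1.0                           # "true" free energy A(xi) = 0.5*kappa*xi^2
kw = 10.0                             # harmonic bias strength k_w
windows = np.linspace(-2.0, 2.0, 9)   # window centers xi_w
nsamp = 200_000                       # samples per window

max_err = 0.0
for xi_w in windows:
    # The biased ensemble exp(-(0.5*kappa*xi^2 + 0.5*kw*(xi - xi_w)^2)/kT)
    # is Gaussian, so we can sample it in closed form (a stand-in for MD data).
    mean_b = kw * xi_w / (kappa + kw)
    var_b = kT / (kappa + kw)
    xi = rng.normal(mean_b, np.sqrt(var_b), nsamp)

    m, s2 = xi.mean(), xi.var()
    # umbrella-integration derivative under the Gaussian assumption
    grid = m + np.array([-0.1, 0.0, 0.1])
    dA = kT * (grid - m) / s2 - kw * (grid - xi_w)
    # exact derivative of the unbiased free energy is kappa * xi
    max_err = max(max_err, np.abs(dA - kappa * grid).max())
```

With 2·10⁵ samples per window the estimated derivatives agree with κξ to a few times 10⁻² here; the remaining error is pure sampling noise in the window means and variances.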
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525157809257507, "perplexity": 509.7503742892813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406785.66/warc/CC-MAIN-20200529214634-20200530004634-00397.warc.gz"}
http://math.stackexchange.com/questions/100118/tensor-product-and-evaluation-map-in-k-linear-triangulated-categories
# Tensor product and evaluation map in $k$-linear triangulated categories

In $k$-linear triangulated categories, there is an evaluation map $$\oplus_i \text{Hom}(E,A[i])\otimes_k E[-i]\to A .$$ I've learned that in the derived category of coherent sheaves on a scheme $X$, the tensor product $V\otimes F[-i]$ is just the direct sum of $\text{dim }V$ copies of $F[-i]$. However, I do not understand the above construction in general $k$-linear triangulated categories. Could anyone help me to understand how the tensor product and the evaluation map are defined? Thank you. - Looking at Bondal, "Representation of associative algebras and coherent sheaves", suggests that the $\operatorname{coh} X$ case is the general definition, not a consequence. –  Carsten Jan 18 '12 at 11:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614527225494385, "perplexity": 283.3169656526645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064019.39/warc/CC-MAIN-20150827025424-00212-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/108318-inductive-questions-final-algebra-part.html
# Thread: Inductive Questions, final algebra part

1. ## Inductive Questions, final algebra part

I need to show these are equal, but for the life of me I can't show it. I know they involve a little trick, but I just can't see it.

1. Prove that $\displaystyle 1/2! + 2 / 3!+ ... + n/(n+1)! = 1 - 1/(n+1)!$ $\displaystyle (k+1)/(k+2)!$ = $\displaystyle 1 - 1/(k+2)!$

2. Originally Posted by p00ndawg I need to show these are equal, but for the life of me I can't show it. I know they involve a little trick, but I just can't see it. 1. Prove that $\displaystyle 1/2! + 2 / 3!+ ... + n/(n+1)! = 1 - 1/(n+1)!$ $\displaystyle (k+1)/(k+2)!$ = $\displaystyle 1 - 1/(k+2)!$

Suppose n=k is true: $\displaystyle 1/2! + 2 / 3!+ ... + k/(k+1)! = 1 - 1/(k+1)!$ Prove it for n=k+1: $\displaystyle 1/2! + 2 / 3!+ ... + k/(k+1)! + (k+1)/(k+2)! = 1 - 1/(k+2)!$ The left side equals $\displaystyle 1 - \frac{1}{(k+1)!} + \frac{(k+1)}{(k+2)!}$. How does this compare to $\displaystyle 1-\frac{1}{(k+2)!}$? If they are equal, which is what we want to prove, then we need to show that: $\displaystyle - \frac{1}{(k+1)!} + \frac{(k+1)}{(k+2)!} = -\frac{1}{(k+2)!}$ Equivalent to: $\displaystyle \frac{1}{(k+1)!} - \frac{(k+1)}{(k+2)!} = \frac{1}{(k+2)!}$ Equivalent to: $\displaystyle \frac{(k+2)-(k+1)}{(k+2)!} = \frac{1}{(k+2)!}$ which is clearly true. I hope this helps!!

3. Oh man, thanks! I didn't even THINK of subtracting the 1 at the beginning, and I thought I had tried everything!
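A quick numerical spot-check of the identity proved in the thread (not part of the thread itself) can be run in a few lines:

```python
from math import factorial, isclose

def lhs(n):
    # 1/2! + 2/3! + ... + n/(n+1)!
    return sum(k / factorial(k + 1) for k in range(1, n + 1))

def rhs(n):
    # 1 - 1/(n+1)!
    return 1 - 1 / factorial(n + 1)

# the identity holds for every n checked
checks = all(isclose(lhs(n), rhs(n)) for n in range(1, 15))
```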
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.901282012462616, "perplexity": 685.2963190027737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870771.86/warc/CC-MAIN-20180528024807-20180528044807-00266.warc.gz"}
http://www.lqp2.org/node/1339
# The holographic Hadamard condition on asymptotically Anti-de Sitter spacetimes

Michał Wrochna December 04, 2016

In the setting of asymptotically Anti-de Sitter spacetimes, we consider Klein-Gordon fields subject to Dirichlet boundary conditions, with mass satisfying the Breitenlohner-Freedman bound. We introduce a condition on the b-wave front set of two-point functions of quantum fields, which locally in the bulk amounts to the usual Hadamard condition, and which moreover allows one to estimate wave front sets for the holographically induced theory on the boundary. We prove the existence of two-point functions satisfying this condition, and show their uniqueness modulo terms that have smooth Schwartz kernel in the bulk and smooth restriction to the boundary. Finally, using Vasy's propagation of singularities theorem, we prove an analogue of the Duistermaat-Hörmander theorem on distinguished parametrices. Keywords: anti de Sitter, Hadamard states, holography
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512585401535034, "perplexity": 840.1485884281188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00451-ip-10-171-10-108.ec2.internal.warc.gz"}
http://advancedintegrals.com/2017/01/integral-representation-of-generalized-euler-sums/
# Integral representation of generalized Euler sums

$$\sum_{k=1}^\infty\frac{H^{(p)}_k}{k^q} = \zeta(p)\zeta(q) +(-1)^{p}\frac{1}{(p-1)!}\int^1_0\frac{\mathrm{Li}_q(x)\log(x)^{p-1}}{1-x}\,dx$$

$$\textit{proof}$$

Note that $$\psi_0(a+1)= -\gamma+\int^1_0\frac{1-x^a}{1-x}\,dx$$ (the constant $-\gamma$ drops out upon differentiation). Differentiating $p$ times with respect to $a$, $$\psi_p(a+1) = \frac{\partial^p}{\partial a^p}\int^1_0\frac{1-x^a}{1-x}\,dx = -\int^1_0\frac{x^a\log(x)^{p}}{1-x}\,dx$$ Replacing $p$ by $p-1$ and letting $a =k$, $$\psi_{p-1}(k+1) = -\int^1_0\frac{x^k\log(x)^{p-1}}{1-x}\,dx$$ Using the relation between generalized harmonic numbers and the polygamma function, $$H^{(p)}_k = \zeta(p) +(-1)^{p-1}\frac{\psi_{p-1}(k+1)}{(p-1)!}$$ we hence have $$H^{(p)}_k = \zeta(p) +(-1)^{p}\frac{1}{(p-1)!}\int^1_0\frac{x^k\log(x)^{p-1}}{1-x}\,dx$$ Now divide by $k^q$ and sum with respect to $k$; since $\sum_{k\ge 1} x^k/k^q = \mathrm{Li}_q(x)$, $$\sum_{k=1}^\infty\frac{H^{(p)}_k}{k^q} = \zeta(p)\zeta(q) +(-1)^{p}\frac{1}{(p-1)!}\int^1_0\frac{\mathrm{Li}_q(x)\log(x)^{p-1}}{1-x}\,dx$$

This entry was posted in Euler sum, PolyGamma, Polylogarithm. Bookmark the permalink.
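The identity can be spot-checked numerically, say for $p=q=2$. The sketch below is not from the post: the dilogarithm helper (series plus the standard reflection formula) and the Simpson rule are generic tools, and the left side is rewritten as $\zeta(2)^2+\sum_k (H^{(2)}_k-\zeta(2))/k^2$ so that the partial sum converges quickly:

```python
import math

ZETA2 = math.pi ** 2 / 6

def li2(x):
    """Dilogarithm Li_2(x) on [0, 1]: series for x <= 0.5, reflection above."""
    if x == 0.0:
        return 0.0
    if x > 0.5:
        if x == 1.0:
            return ZETA2
        # Li_2(x) = pi^2/6 - ln(x) ln(1-x) - Li_2(1-x)
        return ZETA2 - math.log(x) * math.log(1.0 - x) - li2(1.0 - x)
    s, power = 0.0, 1.0
    for n in range(1, 200):
        power *= x
        s += power / (n * n)
    return s

def integrand(x):
    # Li_2(x) log(x) / (1 - x), with its finite limits at the endpoints
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return -ZETA2      # Li_2 -> zeta(2) and log(x)/(1-x) -> -1
    return li2(x) * math.log(x) / (1.0 - x)

# composite Simpson rule on [0, 1]
M = 2000
h = 1.0 / M
integral = integrand(0.0) + integrand(1.0)
for i in range(1, M):
    integral += (4 if i % 2 else 2) * integrand(i * h)
integral *= h / 3.0

# left side for p = q = 2, accelerated via sum_k (H_k^(2) - zeta(2))/k^2
tail, H2 = 0.0, 0.0
for k in range(1, 5000):
    H2 += 1.0 / (k * k)
    tail += (H2 - ZETA2) / (k * k)
lhs = ZETA2 ** 2 + tail
rhs = ZETA2 ** 2 + integral   # (-1)^p / (p-1)! = 1 for p = 2
```

Both sides come out around 1.894, consistent with the boxed formula.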
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986466765403748, "perplexity": 1751.504954075681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511365.50/warc/CC-MAIN-20181018001031-20181018022531-00226.warc.gz"}
https://swmath.org/?term=natural%20coarse%20grid
• BoomerAMG • Referenced in 195 articles [sw00086] • process of coarse-grid selection, in particular, is fundamentally sequential in nature. We have previously ... Henson and J. E. Jones, Coarse grid selection for parallel algebraic multigrid, in: A. Ferriera... • MUSCOP • Referenced in 7 articles [sw06143] • iteration with globalization on the basis of natural level functions. We investigate the LISA-Newton ... grid Newton-Picard preconditioner for LISA and prove grid independent convergence of the classical variant ... grid approximation for the Lagrangian Hessian which fits well in the two-grid Newton-Picard ... fine grid controls the accuracy of the solution while the quality of the coarse grid... • GAMPACK • Referenced in 2 articles [sw13441] • constructing a multilevel hierarchy of matrices that naturally adapts to the permeability channels ... paradigm. The construction of the coarse matrix hierarchy and grid transfer operators poses a particular...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101040720939636, "perplexity": 3948.6970280371356}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00433.warc.gz"}
http://math.stackexchange.com/questions/275306/existence-and-uniqueness-of-pde-with-solutions-in-wk-p-with-p-neq-2
# Existence and uniqueness of PDE with solutions in $W^{k,p}$ with $p \neq 2$?

I just realised that I have never seen the space $W^{k,p}$, $p\neq 2$, used in showing existence/uniqueness for some PDE. Usually books/lectures build up theory about $W^{k,p}$ spaces (like certain compact embeddings) and then immediately show well-posedness of some PDE in the space $H^k$, typically for Poisson's equation. I am very curious about how to show existence for PDEs whose solutions live in $W^{k,p}$, $p \neq 2$. Obviously these spaces are not Hilbert spaces, so Lax-Milgram goes out of the window. Can someone give me a cool example or cite some book that goes through an existence proof? Thanks - Let $\Omega\subset\mathbb{R}^N$ be a bounded domain, $p\in (1,\infty)$, $q\in (1,\infty)$, $\frac{1}{p}+\frac{1}{q}=1$, $f\in L^q(\Omega)$. Consider the problem $$\tag{1} \left\{ \begin{array}{rl} -\operatorname{div}(|\nabla u|^{p-2}\nabla u)=f &\mbox{in } \Omega \\ u\in W_0^{1,p}(\Omega) &\mbox{} \end{array} \right.$$ The operator $\Delta_pu=\operatorname{div}(|\nabla u|^{p-2}\nabla u)$ is called the $p$-Laplacian. We say that $u\in W_0^{1,p}(\Omega)$ is a solution (weak solution) of (1) if for all $\phi\in C_0^\infty(\Omega)$ $$\tag{2}\int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla \phi=\int_\Omega f\phi$$ One way to solve equation (2) is to consider the energy functional $F_p:W_0^{1,p}(\Omega)\rightarrow\mathbb{R}$ associated to it: $$F_p(u)=\frac{1}{p}\int_\Omega|\nabla u|^p -\int_\Omega fu$$ It is a pleasurable exercise to show that $F_p$ is a strictly convex functional and hence has a unique minimizer. Moreover, you can show that if $u\in W_0^{1,p}$ is the minimizer, then $$\langle F_p'(u),\phi\rangle=\int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla \phi-\int_\Omega f\phi=0,\ \forall\ \phi\in W_0^{1,p}$$ To get a problem whose solution lies in $W^{k,p}$ with $k>1$ you need to ask for more differentiability; for the $p$-Laplacian problem we need only $k=1$.
Note 1: Observe that the standard Dirichlet problem is included here, and the same technique applies to solve it. Note 2: If you ask for more regularity of $f$, for example $f\in L^{r}$ with $r>\frac{N}{p}$, you can show that in fact $u\in C_{loc}^{1,\alpha}(\Omega)$ for some $\alpha\in (0,1)$. - Thanks, very informative answer. Is there a way to do it without using calculus of variations (a topic which I am not very fond of)? –  pde_lover Jan 10 '13 at 22:18 I know that there is a Banach-space version of Lax-Milgram, but I don't know if it is possible to solve this problem by using it. There are the notions of $S^+$ and pseudo-monotone operators and, with them, the Brezis theorem, but I think that in some way they are related to the Calculus of Variations. –  Tomás Jan 10 '13 at 22:28
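The minimization argument in the answer can be tried numerically in one dimension. The sketch below is an illustration, not from the answer: the grid size, step size, and the choices $p=3$, $f=1$ on $\Omega=(0,1)$ are arbitrary. It minimizes a finite-difference version of $F_p$ by plain gradient descent and compares with the closed-form solution $u(x)=\frac{p-1}{p}\big[(1/2)^{p/(p-1)}-|x-1/2|^{p/(p-1)}\big]$ of $-(|u'|^{p-2}u')'=1$ with zero boundary values:

```python
import numpy as np

p = 3.0              # any p in (1, inf); p = 2 recovers the Poisson problem
N = 40               # grid intervals on Omega = (0, 1)
h = 1.0 / N
f = 1.0              # constant right-hand side

def g(t):
    # flux of the p-Laplacian: g(t) = |t|^(p-2) t
    return np.abs(t) ** (p - 2.0) * t

def energy(u):
    # discrete F_p(u) = (1/p) * int |u'|^p - int f*u
    d = np.diff(u) / h
    return h * np.sum(np.abs(d) ** p) / p - h * f * np.sum(u[1:-1])

x = np.linspace(0.0, 1.0, N + 1)
u = x * (1.0 - x)    # initial guess satisfying u(0) = u(1) = 0
E0 = energy(u)

lr = 3e-3            # small enough for stability of plain gradient descent
for _ in range(40000):
    d = np.diff(u) / h
    grad = np.zeros_like(u)
    # dF/du_j at interior nodes; boundary nodes keep grad = 0
    grad[1:-1] = g(d[:-1]) - g(d[1:]) - h * f
    u -= lr * grad

# exact maximum for p = 3: (2/3) * (1/2)^(3/2)
u_max_exact = (2.0 / 3.0) * 0.5 ** 1.5
```

Strict convexity of the discrete $F_p$ is what guarantees the descent converges to the unique minimizer; a few tens of thousands of plain gradient steps already give qualitative agreement here.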
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.97258061170578, "perplexity": 188.8231209539973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447556807.60/warc/CC-MAIN-20141224185916-00011-ip-10-231-17-201.ec2.internal.warc.gz"}
https://en.m.wikipedia.org/wiki/Spontaneous_symmetry_breaking
# Spontaneous symmetry breaking

Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state ends up in an asymmetric state.[1][2][3] In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum, even though the entire Lagrangian retains that symmetry.

## Overview

In explicit symmetry breaking, two outcomes under consideration may occur with different probabilities. By definition, spontaneous symmetry breaking requires the existence of a symmetric probability distribution—any pair of outcomes has the same probability. In other words, the underlying laws are invariant under a symmetry transformation, but the system as a whole changes under such transformations. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect.

## Examples

### Mexican hat potential

Graph of Goldstone's "Mexican hat" potential function ${\displaystyle V(\phi )}$.

Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the center axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterward, the ball has come to rest at some fixed point on the perimeter.
The dome and the ball retain their individual symmetry, but the system does not.[4] In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian of a scalar field ${\displaystyle \phi }$, which essentially dictates how a system behaves, can be split up into kinetic and potential terms, ${\displaystyle {\mathcal {L}}=\partial ^{\mu }\phi \partial _{\mu }\phi -V(\phi ).}$ (1) It is in this potential term ${\displaystyle V(\phi )}$ that the symmetry breaking is triggered. An example of a potential, due to Jeffrey Goldstone,[5] is illustrated in the graph at the right: ${\displaystyle V(\phi )=-10|\phi |^{2}+|\phi |^{4}\,}$. (2) This potential has an infinite number of possible minima (vacuum states), given by ${\displaystyle \phi ={\sqrt {5}}e^{i\theta }}$ (3) for any real θ between 0 and 2π. The system also has an unstable vacuum state corresponding to Φ = 0. This state has a U(1) symmetry. However, once the system falls into a specific stable vacuum state (amounting to a choice of θ), this symmetry will appear to be lost, or "spontaneously broken". In fact, any other choice of θ would have exactly the same energy, implying the existence of a massless Nambu–Goldstone boson, the mode running around the circle at the minimum of this potential, and indicating there is some memory of the original symmetry in the Lagrangian.

### Other examples

• For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant, and there is no symmetry breaking.
Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value, which points in a certain direction (in the idealized situation where we have full equilibrium; otherwise, translational symmetry gets broken as well). The residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations which do not and are thus spontaneously broken. • The laws describing a solid are invariant under the full Euclidean group, but the solid itself spontaneously breaks this group down to a space group. The displacement and the orientation are the order parameters. • General relativity has a Lorentz symmetry, but in FRW cosmological models, the mean 4-velocity field defined by averaging over the velocities of the galaxies (the galaxies act like gas particles at cosmological scales) acts as an order parameter breaking this symmetry. Similar comments can be made about the cosmic microwave background. • For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification. • In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetry. • Take a thin cylindrical plastic rod and push both ends together. Before buckling, the system is symmetric under rotation, and so visibly cylindrically symmetric. But after buckling, it looks different, and asymmetric. 
Nevertheless, features of the cylindrical symmetry are still there: ignoring friction, it would take no force to freely spin the rod around, displacing the ground state in time, and amounting to an oscillation of vanishing frequency, unlike the radial oscillations in the direction of the buckle. This spinning mode is effectively the requisite Nambu–Goldstone boson. • Consider a uniform layer of fluid over an infinite horizontal plane. This system has all the symmetries of the Euclidean plane. But now heat the bottom surface uniformly so that it becomes much hotter than the upper surface. When the temperature gradient becomes large enough, convection cells will form, breaking the Euclidean symmetry. • Consider a bead on a circular hoop that is rotated about a vertical diameter. As the rotational velocity is increased gradually from rest, the bead will initially stay at its initial equilibrium point at the bottom of the hoop (intuitively stable, lowest gravitational potential). At a certain critical rotational velocity, this point will become unstable and the bead will jump to one of two other newly created equilibria, equidistant from the center. Initially, the system is symmetric with respect to the diameter, yet after passing the critical velocity, the bead ends up in one of the two new equilibrium points, thus breaking the symmetry. ## Spontaneous symmetry breaking in physics Spontaneous symmetry breaking illustrated: At high energy levels (left) the ball settles in the center, and the result is symmetric. At lower energy levels (right), the overall "rules" remain symmetric, but the symmetric "Mexican hat" enforces an asymmetric outcome, since eventually the ball must rest at some random spot on the bottom, "spontaneously", and not all others. 
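As a quick numerical aside (a sketch, not part of the article), the vacuum manifold $|\phi|=\sqrt{5}$ quoted in equation (3) can be recovered by scanning the radial part of the Goldstone potential (2), since $V$ depends only on $r=|\phi|$:

```python
import numpy as np

# radial part of V(phi) = -10|phi|^2 + |phi|^4, scanned on a fine grid
r = np.linspace(0.0, 4.0, 400001)
V = -10.0 * r**2 + r**4
r_min = r[np.argmin(V)]          # numerical location of the minimum
# analytic check: dV/dr = -20 r + 4 r^3 = 0  =>  r = sqrt(5), V(sqrt(5)) = -25
```

The phase θ never enters, which is exactly the degeneracy of vacua the article describes.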
### Particle physics

In particle physics the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. "Hidden" is a better term than "broken", because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking (SSB) because nothing (that we know of) breaks the symmetry in the equations.[6]:194–195

#### Chiral symmetry

Chiral symmetry breaking is an example of spontaneous symmetry breaking affecting the chiral symmetry of the strong interactions in particle physics. It is a property of quantum chromodynamics, the quantum field theory describing these interactions, and is responsible for the bulk of the mass (over 99%) of the nucleons, and thus of all common matter, as it converts very light bound quarks into 100 times heavier constituents of baryons. The approximate Nambu–Goldstone bosons in this spontaneous symmetry breaking process are the pions, whose mass is an order of magnitude lighter than the mass of the nucleons. It served as the prototype and significant ingredient of the Higgs mechanism underlying the electroweak symmetry breaking.

#### Higgs mechanism

The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries.
The Higgs mechanism, the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics. One important consequence of the distinction between true symmetries and gauge symmetries is that the spontaneous breaking of a gauge symmetry does not give rise to characteristic massless Nambu–Goldstone physical modes, but only massive modes, like the plasma mode in a superconductor, or the Higgs mode observed in particle physics.

In the standard model of particle physics, spontaneous symmetry breaking of the SU(2) × U(1) gauge symmetry associated with the electroweak force generates masses for several particles, and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge.[7] In addition, fermions develop mass consistently.

Without spontaneous symmetry breaking, the Standard Model of elementary particle interactions requires the existence of a number of particles. However, some particles (the W and Z bosons) would then be predicted to be massless, when, in reality, they are observed to have mass. To overcome this, spontaneous symmetry breaking is augmented by the Higgs mechanism to give these particles mass. It also suggests the presence of a new particle, the Higgs boson, detected in 2012.

Superconductivity of metals is a condensed-matter analog of the Higgs phenomenon, in which a condensate of Cooper pairs of electrons spontaneously breaks the U(1) gauge symmetry associated with light and electromagnetism.
### Condensed matter physics

Most phases of matter can be understood through the lens of spontaneous symmetry breaking. For example, crystals are periodic arrays of atoms that are not invariant under all translations (only under a small subset of translations by a lattice vector). Magnets have north and south poles that are oriented in a specific direction, breaking rotational symmetry. In addition to these examples, there are a whole host of other symmetry-breaking phases of matter, including nematic phases of liquid crystals, charge- and spin-density waves, superfluids, and many others.

There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including topologically ordered phases of matter, like fractional quantum Hall liquids, and spin-liquids. These states do not break any symmetry, but are distinct phases of matter. Unlike the case of spontaneous symmetry breaking, there is not a general framework for describing such states.[8]

#### Continuous symmetry

The ferromagnet is the canonical system which spontaneously breaks the continuous symmetry of the spins below the Curie temperature and at h = 0, where h is the external magnetic field. Below the Curie temperature, the energy of the system is invariant under inversion of the magnetization, m(x) → −m(x). The symmetry is spontaneously broken as h → 0 when the Hamiltonian becomes invariant under the inversion transformation, but the expectation value is not invariant.

Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization. Spontaneous breaking of a continuous symmetry is inevitably accompanied by gapless (meaning that these modes do not cost any energy to excite) Nambu–Goldstone modes associated with slow long-wavelength fluctuations of the order parameter.
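A standard toy model behind the ferromagnet discussed above is the Curie–Weiss mean-field self-consistency condition m = tanh(Tc·m/T). The sketch below iterates it numerically to show the spontaneous onset of a nonzero magnetization below the Curie temperature; the temperatures and Tc are illustrative choices, not tied to any material.

```python
import math

# Mean-field (Curie-Weiss) magnetization: solve m = tanh(Tc/T * m) by
# fixed-point iteration. Tc is the Curie temperature (illustrative units).
def magnetization(T, Tc=1.0, iters=10_000):
    m = 1.0                      # start from a fully ordered guess
    for _ in range(iters):
        m = math.tanh(Tc / T * m)
    return m

# Above Tc only m = 0 survives; below Tc a nonzero m appears spontaneously.
# The Hamiltonian treats +m and -m identically, so the sign the system ends
# up with is the spontaneously broken choice.
m_hot = magnetization(T=1.5)     # T > Tc: iteration collapses to m ~ 0
m_cold = magnetization(T=0.5)    # T < Tc: iteration settles at |m| > 0
assert m_hot < 1e-6 and m_cold > 0.9
```

Starting the iteration from −1.0 instead of +1.0 lands on −m_cold, the equally good, oppositely magnetized ground state.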
For example, vibrational modes in a crystal, known as phonons, are associated with slow density fluctuations of the crystal's atoms. The associated Goldstone modes for magnets are oscillating waves of spin known as spin waves. For symmetry-breaking states whose order parameter is not a conserved quantity, Nambu–Goldstone modes are typically massless and propagate at a constant velocity.

An important theorem, due to Mermin and Wagner, states that, at finite temperature, thermally activated fluctuations of Nambu–Goldstone modes destroy the long-range order and prevent spontaneous symmetry breaking in one- and two-dimensional systems. Similarly, quantum fluctuations of the order parameter prevent most types of continuous symmetry breaking in one-dimensional systems even at zero temperature (an important exception is ferromagnets, whose order parameter, magnetization, is an exactly conserved quantity and does not have any quantum fluctuations).

Other long-range interacting systems, such as cylindrical curved surfaces interacting via the Coulomb potential or Yukawa potential, have been shown to break translational and rotational symmetries.[9] It was shown that, in the presence of a symmetric Hamiltonian, and in the limit of infinite volume, the system spontaneously adopts a chiral configuration, i.e. breaks mirror-plane symmetry.

### Dynamical symmetry breaking

Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking where the ground state of the system has reduced symmetry properties compared to its theoretical description (Lagrangian). Dynamical breaking of a global symmetry is a spontaneous symmetry breaking that happens not at the (classical) tree level (i.e. at the level of the bare action), but due to quantum corrections (i.e. at the level of the effective action). Dynamical breaking of a gauge symmetry[1] is subtler.
In the conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (see e.g. Electroweak interaction). In dynamical gauge symmetry breaking, however, no unstable Higgs particle operates in the theory, but the bound states of the system itself provide the unstable fields that render the phase transition. For example, Bardeen, Hill, and Lindner published a paper which attempts to replace the conventional Higgs mechanism in the standard model by a DSB that is driven by a bound state of top-antitop quarks (such models, where a composite particle plays the role of the Higgs boson, are often referred to as "Composite Higgs models").[10] Dynamical breaking of gauge symmetries is often due to creation of a fermionic condensate; for example the quark condensate, which is connected to the dynamical breaking of chiral symmetry in quantum chromodynamics. Conventional superconductivity is the paradigmatic example from the condensed matter side, where phonon-mediated attractions lead electrons to become bound in pairs and then condense, thereby breaking the electromagnetic gauge symmetry.

## Generalisation and technical usage

For spontaneous symmetry breaking to occur, there must be a system in which there are several equally likely outcomes. The system as a whole is therefore symmetric with respect to these outcomes. However, if the system is sampled (i.e. if the system is actually used or interacted with in any way), a specific outcome must occur. Though the system as a whole is symmetric, it is never encountered with this symmetry, but only in one specific asymmetric state. Hence, the symmetry is said to be spontaneously broken in that theory. Nevertheless, the fact that each outcome is equally likely is a reflection of the underlying symmetry, which is thus often dubbed "hidden symmetry", and has crucial formal consequences. (See the article on the Goldstone boson.)
When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. The theory must not dictate which member is distinct, only that one is. From this point on, the theory can be treated as if this element actually is distinct, with the proviso that any results found in this way must be resymmetrized, by taking the average of each of the elements of the group being the distinct one.

The crucial concept in physics theories is the order parameter. If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value) which is not invariant under the symmetry in question, we say that the system is in the ordered phase, and the symmetry is spontaneously broken. This is because other subsystems interact with the order parameter, which specifies a "frame of reference" to be measured against. In that case, the vacuum state does not obey the initial symmetry (which would keep it invariant, in the linearly realized Wigner mode in which it would be a singlet), and, instead, changes under the (hidden) symmetry, now implemented in the (nonlinear) Nambu–Goldstone mode. Normally, in the absence of the Higgs mechanism, massless Goldstone bosons arise.

The symmetry group can be discrete, such as the space group of a crystal, or continuous (e.g., a Lie group), such as the rotational symmetry of space. However, if the system contains only a single spatial dimension, then only discrete symmetries may be broken in a vacuum state of the full quantum theory, although a classical solution may break a continuous symmetry.

## Nobel Prize

On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking.
Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions.[11] This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a "just so" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.

## Notes

- ^ Note that (as in fundamental Higgs-driven spontaneous gauge symmetry breaking) the term "symmetry breaking" is a misnomer when applied to gauge symmetries.

## References

1. ^ Vladimir A. Miransky, Dynamical Symmetry Breaking in Quantum Field Theories, p. 15. ISBN 9810215584.
2. ^ Patterns of Symmetry Breaking, edited by Henryk Arodz, Jacek Dziarmaga, and Wojciech Hubert Zurek, p. 141.
3. ^ Bubbles, Voids and Bumps in Time: The New Cosmology, edited by James Cornell, p. 125.
4. ^ Gerald M. Edelman, Bright Air, Brilliant Fire: On the Matter of the Mind (New York: BasicBooks, 1992), p. 203.
5. ^ Goldstone, J. (1961). "Field theories with 'Superconductor' solutions". Il Nuovo Cimento 19: 154–164. Bibcode:1961NCim...19..154G. doi:10.1007/BF02812722.
6. ^ Steven Weinberg (20 April 2011). Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature. Knopf Doubleday Publishing Group. ISBN 978-0-307-78786-6.
7. ^ Stephen Hawking, A Brief History of Time. Bantam; 10th anniversary edition (September 1, 1998), pp. 73–74.
8. ^ Chen, Xie; Gu, Zheng-Cheng; Wen, Xiao-Gang (2010). "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order". Phys. Rev. B 82: 155138. arXiv:1004.3835. Bibcode:2010PhRvB..82o5138C. doi:10.1103/physrevb.82.155138.
9. ^ Kohlstedt, K.L.; Vernizzi, G.; Solis, F.J.; Olvera de la Cruz, M.
(2007). "Spontaneous Chirality via Long-range Electrostatic Forces". Physical Review Letters 99: 030602. arXiv:0704.3435. Bibcode:2007PhRvL..99c0602K. doi:10.1103/PhysRevLett.99.030602. PMID 17678276.
10. ^ William A. Bardeen; Christopher T. Hill; Manfred Lindner (1990). "Minimal dynamical symmetry breaking of the standard model". Physical Review D 41 (5): 1647–1660. Bibcode:1990PhRvD..41.1647B. doi:10.1103/PhysRevD.41.1647.
11. ^ The Nobel Foundation. "The Nobel Prize in Physics 2008". nobelprize.org. Retrieved January 15, 2008.
https://tex.stackexchange.com/questions/125613/toc-showing-page-number-and-then-the-chapter-title
# ToC showing page number and then the chapter title

I would like to have the Table of Contents of the book I am editing show first the pages of the chapters and then the chapter titles. Something like this book:

I am working with XeTeX (through LyX) and my document class is Memoir.

• Do you want this behaviour just for chapter entries or for all sectional units in the ToC? Which document class are you using? Jul 26 '13 at 20:59
• Hello. My ToC is made up of Parts and Chapters; there are no other sections in the book. The document class is Memoir. Jul 26 '13 at 21:01
• So, just to be sure, this is to apply to parts and chapter entries, or just to chapter entries? Jul 26 '13 at 21:08
• Parts and chapters, you are right. Jul 26 '13 at 21:18

Here's one possibility (chapter numbers for unnumbered chapters have been included before the corresponding titles, as was requested in a comment):

\documentclass{memoir}
\usepackage{graphicx}
\renewcommand\partnumberline[1]{}
\renewcommand\chapternumberline[1]{#1~\raisebox{.2ex}{\scalebox{0.75}{\textbullet}~}}
\renewcommand\cftpartpagefont{\huge\bfseries}
\renewcommand\cftpartfont{\LARGE\bfseries}
\renewcommand\cftchapterpagefont{\normalfont\large\itshape}
\renewcommand\cftchapterfont{\normalfont}
\newlength\ToCindent
\setlength\ToCindent{4em}
\makeatletter
\renewcommand*{\l@part}[2]{%
  \ifnum \c@tocdepth >-2\relax
    \cftpartbreak
    \begingroup
      {\leftskip0pt\noindent
      \interlinepenalty\@M
      \leavevmode
      \parbox[t]{\ToCindent}{\makebox[2em][r]{\cftpartpagefont #2}\hfill}%
      \parbox[t]{\dimexpr\textwidth-\ToCindent\relax}{\cftpartfont #1}%
      }
      \par\nobreak
      \global\@nobreaktrue
      \everypar{\global\@nobreakfalse\everypar{}}%
    \endgroup
  \fi}
\newcommand*{\l@mychap}[3]{%
  \def\@chapapp{#3}
  \vskip2ex%
  \par%
  \noindent\parbox[t]{\ToCindent}{\makebox[2em][r]{\cftchapterpagefont#2}\hfill}%
  \parbox[t]{\dimexpr\textwidth-\ToCindent\relax}{\cftchapterfont#1}\par%
}
\renewcommand*{\l@chapter}[2]{%
  \l@mychap{#1}{#2}{\chaptername}%
}
\renewcommand*{\l@appendix}[2]{%
  \l@mychap{#1}{#2}{\appendixname}%
}
\makeatother
\begin{document}
\tableofcontents*
\part{A test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\part{Another test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\part{Yet another test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\end{document}

I add here another option showing the page number and chapter number separated by a bullet, as was originally required in the comment; I prefer the previous solution since this one might result in confusion:

\documentclass{memoir}
\usepackage{graphicx}
\renewcommand\partnumberline[1]{}
\renewcommand\chapternumberline[1]{\makebox[0pt][l]{%
  \llap{\raisebox{.2ex}{\scalebox{0.75}{\textbullet}}\,#1\hspace{1.2em}}}}
\renewcommand\cftpartpagefont{\huge\bfseries}
\renewcommand\cftpartfont{\LARGE\bfseries}
\renewcommand\cftchapterpagefont{\normalfont}
\renewcommand\cftchapterfont{\normalfont}
\newlength\ToCindent
\setlength\ToCindent{4.5em}
\makeatletter
\renewcommand*{\l@part}[2]{%
  \ifnum \c@tocdepth >-2\relax
    \cftpartbreak
    \begingroup
      {\leftskip0pt\noindent
      \interlinepenalty\@M
      \leavevmode
      \parbox[t]{\ToCindent}{\makebox[2em][r]{\cftpartpagefont #2}\hfill}%
      \parbox[t]{\dimexpr\textwidth-\ToCindent\relax}{\cftpartfont #1}%
      }
      \par\nobreak
      \global\@nobreaktrue
      \everypar{\global\@nobreakfalse\everypar{}}%
    \endgroup
  \fi}
\newcommand*{\l@mychap}[3]{%
  \def\@chapapp{#3}
  \vskip2ex%
  \par%
  \noindent\parbox[t]{\ToCindent}{\makebox[2em][r]{\cftchapterpagefont#2}\hfill}%
  \parbox[t]{\dimexpr\textwidth-\ToCindent\relax}{\cftchapterfont#1}\par%
}
\renewcommand*{\l@chapter}[2]{%
  \l@mychap{#1}{#2}{\chaptername}%
}
\renewcommand*{\l@appendix}[2]{%
  \l@mychap{#1}{#2}{\appendixname}%
}
\makeatother
\begin{document}
\tableofcontents*
\part{A test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\part{Another test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\part{Yet another test part}
\chapter{Test chapter with a really long title spanning several lines just for the example}
\chapter{Another test chapter}
\chapter{Another test chapter with an interesting title}
\end{document}

The \ToCindent length controls the separation between page numbers and titles.

• Thank you for the solution! Would it be possible to add the chapter number before the chapter title for numbered chapters? Maybe something like a middot (·) might be useful to separate pages from chapter number. Jul 26 '13 at 23:45
• @CatoPhilosophus please see my updated answer. I included the chapter numbers as part of the entry (and changed the font for chapter page numbers), as I think this is better to avoid confusion with the page numbers. Jul 26 '13 at 23:58
• @CatoPhilosophus I added another option to my answer showing the layout you suggested in your comment; I personally find the first option less confusing. Jul 27 '13 at 1:28
• Nice solution! I prefer the first solution as well. I wonder if this can be made to work with other classes (like Koma-Script). – user9424 Jul 27 '13 at 12:08
https://slideplayer.com/slide/7767097/
# Objectives

Define and use imaginary and complex numbers. Solve quadratic equations with complex roots.

## Presentation on theme: "Objectives Define and use imaginary and complex numbers."— Presentation transcript:

You can see in the graph of f(x) = x^2 + 1 below that f has no real zeros. If you solve the corresponding equation 0 = x^2 + 1, you find that x = ±√(–1), which has no real solutions. However, you can find solutions if you define the square root of negative numbers, which is why imaginary numbers were invented. The imaginary unit i is defined as i = √(–1). You can use the imaginary unit to write the square root of any negative number.

To express such a number in terms of i: factor out –1, apply the Product Property, simplify, and multiply.

Solve the equation x^2 = –144. Taking square roots and expressing the result in terms of i gives x = ±12i. Check: (12i)^2 = 144i^2 = 144(–1) = –144 ✓, and (–12i)^2 = 144i^2 = 144(–1) = –144 ✓.

Solve the equation 5x^2 + 90 = 0. Then x^2 = –18, so x = ±i√18. Check: 5(i√18)^2 + 90 = 5(18)i^2 + 90 = 90(–1) + 90 = 0 ✓.

Practice: solve and check x^2 = –36 and the two similar equations given on the slide.

A complex number is a number that can be written in the form a + bi, where a and b are real numbers and i = √(–1). The set of real numbers is a subset of the set of complex numbers C. Every complex number has a real part a and an imaginary part b. Real numbers are complex numbers where b = 0. Imaginary numbers are complex numbers where a = 0 and b ≠ 0; these are sometimes called pure imaginary numbers. Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.

Find the values of x and y that make the equation 4x + 10i = 2 – (4y)i true. Equate the real parts: 4x = 2, so x = 1/2. Equate the imaginary parts: 10 = –4y, so y = –5/2.

Practice: find the values of x and y that make each equation true: 2x – 6i = –8 + (20y)i and –8 + (6y)i = 5x – i.

Find the zeros of the function f(x) = x^2 + 10x + 26. Think "complete the square":
x^2 + 10x + 26 = 0 (set equal to 0)
x^2 + 10x = –26 (rewrite)
x^2 + 10x + 25 = –26 + 25 (add 25 to both sides)
(x + 5)^2 = –1 (factor)
x + 5 = ±i (take square roots)
x = –5 ± i (simplify)

Practice: find the zeros of g(x) = x^2 + 4x + 12 and h(x) = x^2 – 8x + 18 by completing the square.

The solutions –5 + i and –5 – i are related: they form a complex conjugate pair. Their real parts are equal and their imaginary parts are opposites. The complex conjugate of any complex number a + bi is the complex number a – bi.

Find each complex conjugate.
A. 8 + 5i: write as a + bi; then a – bi is 8 – 5i.
B. 6i: write as 0 + 6i; then a – bi is 0 – 6i, which simplifies to –6i.

Check It Out! Example 5: Find each complex conjugate.
A. 9 – i = 9 + (–i), so the conjugate is 9 – (–i) = 9 + i.
C. –8i = 0 + (–8)i, so the conjugate is 0 – (–8)i = 8i.

Lesson Quiz
1. Express in terms of i.
Solve each equation.
2. 3x = 0
3. x^2 + 8x + 20 = 0
4. Find the values of x and y that make the equation 3x + 8i = 12 – (12y)i true.
5. Find the complex conjugate of
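The worked examples in this lesson can be double-checked with Python's built-in complex arithmetic; this is a quick sketch, not part of the original lesson (in Python the imaginary unit i is written 1j).

```python
import cmath

# Express sqrt(-144) in terms of i: the lesson's answer is 12i.
root = cmath.sqrt(-144)
assert abs(root - 12j) < 1e-12

# Solve x^2 + 8x + 20 = 0 with the quadratic formula; the roots form a
# complex conjugate pair, -4 + 2i and -4 - 2i.
a, b, c = 1, 8, 20
disc = cmath.sqrt(b * b - 4 * a * c)
x1, x2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
assert abs(x1 - (-4 + 2j)) < 1e-12
assert abs(x2 - x1.conjugate()) < 1e-12

# Both roots satisfy the original equation:
for x in (x1, x2):
    assert abs(x * x + 8 * x + 20) < 1e-12
```

The `.conjugate()` method implements exactly the a + bi → a − bi rule from the slides.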
https://physics.stackexchange.com/questions/500447/commutation-of-position-four-vector-with-spacetime-derivatives
# Commutation of position four-vector with spacetime derivatives

I am trying to understand a simple demonstration in Ashok Das' Lectures in QFT. He does the following on p. 134 $$[P_\mu,M_{\nu\lambda}]=[\partial_\mu ,x_\nu\partial_\lambda-x_\lambda\partial_\nu]=\partial_\mu(x_\nu\partial_\lambda-x_\lambda\partial_\nu)-(x_\nu\partial_\lambda-x_\lambda\partial_\nu)\partial_\mu\tag{4.37}$$ So far everything is fine; I have just substituted the expressions for $$P_\mu$$ and $$M_{\nu\lambda}$$. However, the next step is where I have trouble. I calculate $$=\eta_{\mu\nu}\partial_\lambda-\eta_{\mu\lambda}\partial_\nu-x_\nu\partial_\lambda\partial_\mu+x_\lambda\partial_\nu\partial_\mu.$$ I understand where the $$\eta$$ come from in the first few terms, but according to the explanation, the 3rd and 4th terms must vanish. However, for them to vanish, it would have to mean that $$x$$ and $$\partial$$ commute, and I am not sure why that would be the case. If they commute, wouldn't that change the definition of $$M_{\nu\lambda}$$? After all, its terms would commute and maybe even cancel out! I know that there is something here that I am understanding wrong, but I'm not sure what it is. • I'm afraid that of all the books that I have looked into, Das' was the only one that went into detail in the calculations; all others so far just give you the result. – Nick Heumann Sep 5 '19 at 23:15 • This calculation is just really sloppy, he’s misapplying the product rule. Just act with both sides on a test function and you’ll see that your two unwanted terms actually each appear twice, and cancel each other.
– knzhou Sep 6 '19 at 4:42 After expanding out \begin{align} [P_a,M_{bc}]&=[\partial_a ,x_b\partial_c-x_c\partial_b]\\ &=\partial_a (x_b \partial_c-x_c\partial_b)-(x_b\partial_c-x_c\partial_b)\partial_a \end{align} it is not formally correct to apply that first $$\partial_a$$ to only the $$x_{b,c}$$ terms (which should generate those $$\eta_{a\{b,c\}}$$ terms); if there were a test function in the mix this would miss several terms that apply $$\partial_a$$ directly to the test function, due to the product rule. So assuming that the product rule still applies here you instead would get \begin{align} [P_a,M_{bc}]&= (\partial_a x_b) \partial_c- (\partial_a x_c)\partial_b + x_b \partial_a \partial_c -x_c \partial_a \partial_b - x_b\partial_c\partial_a+x_c\partial_b\partial_a\\ &=\eta_{ab}\partial_c - \eta_{ac}\partial_b - x_c [\partial_a, \partial_b] - x_b [\partial_c, \partial_a]\\ &=\eta_{ab}\partial_c - \eta_{ac}\partial_b \end{align} and those two terms vanish because we get to deal with only those nice functions for which partials commute. Especially if these are plain transcriptions of lectures it is possible that he simply got a bit hurried while writing and he knew what the answer was and he didn’t see his mistake so he simply rattled off a quick term to the effect of “whoops those two terms are not supposed to still be there, they vanish” and simply wrote down the right result. I’ve done that before in recitation sections. 1. The core of OP's problem seems to be the notational ambiguity in what parts/objects that a differential operator acts on, cf. e.g. my Phys.SE answer here. In the operator eq. (4.37), the main point is that the differential operators act to the right on all explicitly & implicitly written objects according to Leibniz product rule. Recall in particular that a differential operator is ultimately supposed to act on a function (in order to produce a result of the operation). 2. Alternatively. 
one may use the operator rule $$[A,BC]~=~ [A,B]C+ B[A,C],$$ and the fundamental commutators $$[\partial_{\mu},x_{\nu}]~=~\eta_{\mu\nu}, \qquad [\partial_{\mu},\partial_{\nu}]~=~0,$$ to correctly reduce the left-hand side of eq. (4.37). It seems that that was what Das had in mind. (It should be emphasized that the presentation in Das' textbook is correct in its present form.)
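As a sanity check of the algebra above, one can let the operators act on a concrete test function and compare numerically. The sketch below (not from the book) uses Euclidean indices, so $$\eta_{ab}$$ becomes $$\delta_{ab}$$, and approximates derivatives with central finite differences; the test function and evaluation point are arbitrary choices.

```python
import math

h = 1e-5  # finite-difference step

def d(g, a):
    """Central-difference partial derivative of g along axis a."""
    def dg(p):
        pp, pm = list(p), list(p)
        pp[a] += h
        pm[a] -= h
        return (g(tuple(pp)) - g(tuple(pm))) / (2 * h)
    return dg

def M(g, b, c):
    """Action of M_bc = x_b d_c - x_c d_b on g."""
    return lambda p: p[b] * d(g, c)(p) - p[c] * d(g, b)(p)

def commutator(g, a, b, c):
    """[P_a, M_bc] applied to g, with P_a = d_a."""
    return lambda p: d(M(g, b, c), a)(p) - M(d(g, a), b, c)(p)

# An arbitrary smooth test function and evaluation point
f = lambda p: p[0] ** 2 * p[1] + math.sin(p[2]) * p[3]
pt = (0.3, -1.2, 0.7, 2.0)

# Expected: [P_a, M_bc] = delta_ab d_c - delta_ac d_b (Euclidean indices)
assert abs(commutator(f, 1, 1, 2)(pt) - d(f, 2)(pt)) < 1e-4  # a = b case
assert abs(commutator(f, 0, 1, 2)(pt)) < 1e-4                # all indices distinct
```

The two unwanted double-derivative terms cancel exactly in the subtraction, which is the point of the answers above.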
https://dannycaballero.info/phy481msu_f2018/notes/23-slides.html
Consider a solid sphere of charge that has a charge density that varies with $\cos \theta$. What can we say about the terms in the general solution to Laplace's equation outside the sphere? $$V(r,\theta) = \sum_l\left(A_l\,r^l + \dfrac{B_l}{r^{(l+1)}}\right)P_l(\cos \theta)$$ 1. All the $A_l$'s are zero 2. All the $B_l$'s are zero 3. Only $A_0$ should remain 4. Only $B_0$ should remain 5. Something else

Note: Correct answer E because $B_0$ and $B_1$ remain

<img src="./images/dipole_moment.png" align="left" style="width: 300px";/> Two charges are positioned as shown to the left. The relative position vector between them is $\mathbf{d}$. What is the value of the dipole moment? $\sum_i q_i \mathbf{r}_i$ 1. $+q\mathbf{d}$ 2. $-q\mathbf{d}$ 3. Zero 4. None of these

Note: * CORRECT ANSWER: A

## Multipole Expansion

<img src="./images/universe_multipole.jpg" align="center" style="width: 300px";/> Multipole Expansion of the Power Spectrum of CMBR

Note: The radiation from the cosmic microwave background can be described in terms of contributions using a basis of functions with increasingly smaller contributions.

<img src="./images/dipole_setup.png" align="left" style="width: 300px";/> Two charges are positioned as shown to the left. The relative position vector between them is $\mathbf{d}$. What is the dipole moment of this configuration? $$\sum_i q_i \mathbf{r}_i$$ 1. $+q\mathbf{d}$ 2. $-q\mathbf{d}$ 3. Zero 4. None of these; it's more complicated than before!

Note: * CORRECT ANSWER: A

For a dipole at the origin pointing in the z-direction, we have derived: $$\mathbf{E}_{dip}(\mathbf{r}) = \dfrac{p}{4 \pi \varepsilon_0 r^3}\left(2 \cos \theta\;\hat{\mathbf{r}} + \sin \theta\;\hat{\mathbf{\theta}}\right)$$ <img src="./images/small_dipole.png" align="right" style="width: 200px";/> For the dipole $\mathbf{p} = q\mathbf{d}$ shown, what does the formula predict for the direction of $\mathbf{E}(\mathbf{r}=0)$? 1. Down 2. Up 3. Some other direction 4.
The formula doesn't apply Note: * CORRECT ANSWER: D * The formula works far from the dipole only. ### Ideal vs. Real dipole <img src="./images/dipole_animation.gif" align="center" style="width: 450px";/>
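The dipole-moment clicker answers above follow directly from the definition $\sum_i q_i \mathbf{r}_i$. A small Python check (the charge value and positions are arbitrary choices for illustration):

```python
# Dipole moment p = sum_i q_i r_i for a pair of charges +q and -q whose
# relative position vector (from -q to +q) is d.  Values are arbitrary.
q = 2.5
r_minus = (1.0, -0.5, 3.0)               # position of the -q charge
d = (0.0, 0.0, 1.5)                      # relative position vector
r_plus = tuple(rm + di for rm, di in zip(r_minus, d))

p = tuple(q * rp + (-q) * rm for rp, rm in zip(r_plus, r_minus))
assert p == tuple(q * di for di in d)    # p = +q d  (answer A)

# For a neutral pair, p does not depend on where the pair sits:
shift = (10.0, 10.0, 10.0)
p2 = tuple(q * (rp + s) + (-q) * (rm + s)
           for rp, rm, s in zip(r_plus, r_minus, shift))
assert p2 == p
```

The second assertion shows why both clicker questions have the same answer: for a neutral configuration the dipole moment is independent of the choice of origin.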
http://study.com/academy/lesson/simplifying-expressions-with-rational-exponents.html
# Simplifying Expressions with Rational Exponents

Instructor: Kathryn Maloney. Kathryn teaches college math. She holds a master's degree in Learning and Technology.

Simplifying expressions with rational exponents is so easy. In fact, you already know how to do it! We simply use the exponent properties but with fractions as the exponent!

## Expressions with Rational Exponents

Rational exponents follow the exponent properties, just with fractions. Review of exponent properties - you need to memorize these. Just can't seem to memorize them? Have you tried flashcards? They work fantastic, and you can even use them anywhere!

1. Product of Powers: x^a * x^b = x^(a + b)
2. Power to a Power: (x^a)^b = x^(a * b)
3. Quotient of Powers: x^a / x^b = x^(a - b)
4. Power of a Product: (xy)^a = x^a y^a
5. Power of a Quotient: (x/y)^a = x^a / y^a
6. Negative Exponent: x^(-a) = 1 / x^a
7. Zero Exponent: x^0 = 1

Putting the exponent rules to work with exponent properties...

## Example #1

Simplify: y^(1/2) * y^(1/3)

For this one, we're going to follow the product of powers. Remember, when we multiply, we add their exponents. 1/2 + 1/3 = 5/6, so the answer is going to be y^(5/6).

## Example #2

Simplify: x^(3/5) / x^(2/3)

For this one, we're going to use the quotient of powers. Remember, when we divide, we subtract their exponents. So, we're going to have x^(3/5 - 2/3), and 3/5 - 2/3 = -1/15, so the answer is x^(-1/15), or 1 / x^(1/15).

## Example #3

Simplify: x^(-2/7)

For this one, we're going to use the negative exponents property.
Remember, when we have a negative exponent, we flip it. If it's in the numerator, we flip it to the denominator, which it is in this case. So our answer is going to be 1 / x^(2/7).

## Example #4

Simplify: (x^(4/5))^(3/4)

In this one, we have a power to a power. We're going to multiply 4/5 * 3/4, which is 12/20. We need to reduce our fractions before we write the final answer, and 12/20 reduces to 3/5. So our answer is going to be x^(3/5).

## Example #5

Putting multiple exponent rules to work with exponent properties... Simplify using positive exponents. Always reduce the fractions to lowest terms.

First we simplify the power to a power. Then we write like terms over each other, if necessary; here we already have the p's over the p's and the q's over the q's. There's no need to simplify fractions now, so we go right to simplifying the quotient of powers. Remember, when we divide, we subtract their exponents. So we have p^(2/6 - 1/2) * q^(6/3 - 1/2), which gives us p^(-1/6) * q^(9/6). Next we reduce the fractions because we're almost to our answer: p^(-1/6) * q^(3/2). We want to rewrite these using positive exponents. Remember, if it's negative in the numerator, it flips to the denominator. So our final answer is going to be q^(3/2) / p^(1/6).

## Example #6

Simplify using positive exponents. Always reduce the fractions to lowest terms.

First we simplify the power to a power; remember, power to a power means to multiply the exponents. Next, let's write like terms over each other. We already have 2^3 over 8^2 and m^(6/3) over m^(2/6), so let's move to the next step. There's no need to simplify fractions just yet, so we simplify the quotient of powers. Remember, when we divide, we subtract. So now we have 8/64 * m^(6/3 - 2/6). Well, 8/64 is 1/8, and m^(6/3 - 2/6) is m^(10/6).
So it turns out that our final answer is m^(5/3) / 8. We won't touch the improper fraction in this video; we're just simplifying rational exponents.

## Example #7

Simplify using positive exponents. Always reduce fractions to lowest terms. First we're going to simplify the power to a power. Remember, power to a power means to multiply the exponents.
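The fraction arithmetic in the examples above is easy to verify mechanically. A short Python check using the standard library's `fractions` module; the numbers are exactly the lesson's own:

```python
from fractions import Fraction as F

# Example 1 - product of powers: x^(1/2) * x^(1/3) = x^(5/6)
assert F(1, 2) + F(1, 3) == F(5, 6)

# Example 2 - quotient of powers: x^(3/5) / x^(2/3) = x^(-1/15)
assert F(3, 5) - F(2, 3) == F(-1, 15)

# Example 4 - power to a power: (x^(4/5))^(3/4) = x^(3/5)
# (Fraction reduces 12/20 to 3/5 automatically)
assert F(4, 5) * F(3, 4) == F(3, 5)

# Example 5 exponents: p^(2/6 - 1/2) * q^(6/3 - 1/2)
assert F(2, 6) - F(1, 2) == F(-1, 6)
assert F(6, 3) - F(1, 2) == F(3, 2)

# Example 6: coefficient 2**3 / 8**2 and exponent 6/3 - 2/6
assert F(2**3, 8**2) == F(1, 8)
assert F(6, 3) - F(2, 6) == F(5, 3)
```

`Fraction` always stores results in lowest terms, which mirrors the "always reduce the fractions" rule the lesson keeps repeating.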
https://mathoverflow.net/questions/315318/relative-equivariant-cohomology
Relative equivariant cohomology Let us assume that $$X=\mathbb{R}\times S^1$$ is given with a $$G=\mathbb{Z}_2$$ action that corresponds to the symmetry $$(x,e^{i\theta})\mapsto(-x,e^{-i\theta})$$. I want to compute the equivariant cohomology of $$X$$ relative to the fixed point set $$X^G=\{(0,-1),(0,1)\}$$. Since the subspace $$X^G$$ is $$G$$-invariant, there is a natural generalization of Borel's construction to the relative equivariant cohomology $$H^*_G(X, X^G) = H^*(X\times_G EG,X^G \times_G EG).$$ And we know that this has an associated long exact sequence $$\cdots \rightarrow H^n_G(X, X^G) \rightarrow H^n_G(X) \rightarrow H^n_G(X^G) \rightarrow H^{n+1}_G(X, X^G)\rightarrow \cdots.$$ How do we really calculate $$H^n_G(X)$$? When we take a field like $$F_2$$ with two elements, one can at least take advantage of Smith theory to conclude that since the cochain $$C^*(X, X^G)$$ is $$G$$-free, the relative equivariant cohomology can be identified with the cohomology of the subcomplex of invariants, which vanishes above the dimension of $$X$$ [1]. Can anybody help me with the lower relative cohomology groups? [1] R.B. Sher, R.J. Daverman, Handbook of Geometric Topology, North Holland, 1st ed., p. 13 Best, AB • Your pair is equivariantly homotopy equivalent to $S^1$ with the reflection map and the same fixed point set, so you will get the same relative cohomology. Now by hand you can see that the relative Borel construction gives you in this case the suspension of $EG$. All you have is something in degree 0. – Mike Miller Nov 14 '18 at 18:50 • @MikeMiller So you're saying that all I have is the equivariant cohomology of the suspension of $S^\infty$ that due to being contractible is again that of $S^\infty$? – Alireza Behtash Nov 14 '18 at 19:56 The relative Borel homology of a pair $$(X,A)$$ of $$G$$-spaces is well-defined up to equivariant homotopy equivalence (actually, up to equivariant maps which are nonequivariant homotopy equivalences of pairs). 
So we may reduce your example to $$(S^1, \pm 1)$$ with the reflection action. Now $$(S^1, \pm 1)$$ has two fixed points and two free arcs that are sent to one another by the involution. The Borel construction $$S^1 \times_{\Bbb Z/2} E(\Bbb Z/2)$$ looks like a copy of $$B(\Bbb Z/2)$$ above the two fixed points and an arc's worth of copies of $$E(\Bbb Z/2)$$; collapsing that arc, this Borel construction is homotopy equivalent to $$B(\Bbb Z/2) \vee B(\Bbb Z/2)$$. The relative Borel construction just says "collapse those two fibers that look like $$B(\Bbb Z/2)$$". So what you're left with is the suspension of $$E(\Bbb Z/2)$$. So $$H^*_G(S^1, \pm 1;R)$$ is a copy of the coefficient ring $$R$$ concentrated in degree zero. This is consistent with the relative long exact sequence you mention: $$(\pm 1) \times_{\Bbb Z/2} E(\Bbb Z/2)$$ gives you $$B(\Bbb Z/2) \sqcup B(\Bbb Z/2)$$, and the map $$(\pm 1) \times_{\Bbb Z/2} E(\Bbb Z/2) \to S^1 \times_{\Bbb Z/2} E(\Bbb Z/2)$$ is just wedging the two copies of $$B(\Bbb Z/2)$$ together. In particular, the map $$H^*_G(\pm 1) \to H^*_G(S^1)$$ is an isomorphism in all degrees greater than 2, and is the map $$R \oplus R \to R$$ given by addition in degree zero; its cokernel is $$H^0_G(S^1, \pm 1;R) = R$$.
http://mathhelpforum.com/trigonometry/60224-trigonometry-help.html
# Math Help - Trigonometry Help! 1. ## Trigonometry Help! Question: Find the solution of $\sin{x}-1=0$ if $0 \leq x < \frac{1}{2}\pi$. Attempt: $\sin{x}-1=0$ $\sin{x} = 1$ $x = \sin^{-1}{(1)}$ $x = 90^o$ $x = \frac{1}{2}\pi$ Either my answer is wrong or the question is wrong because it was mentioned in the question that it is not greater than $\frac{1}{2}\pi$. Am I correct? 2. You're correct, so the answer is that no real number solves this equation on the given interval.
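The conclusion can be checked numerically in a couple of lines; a small Python sketch:

```python
import math

# sin(x) = 1 has the principal solution x = pi/2, but the half-open
# interval 0 <= x < pi/2 excludes that endpoint.
x = math.asin(1.0)
assert math.isclose(x, math.pi / 2)
assert not (0 <= x < math.pi / 2)   # so there is no solution in the given interval
```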
https://repository.uantwerpen.be/link/irua/107718
Publication Title: Observation of Z decays to four leptons with the CMS detector at the LHC

Author / Institution: CMS Collaboration

Abstract: The first observation of the Z boson decaying to four leptons in proton-proton collisions is presented. The analyzed data set corresponds to an integrated luminosity of 5.02 fb^-1 at sqrt(s) = 7 TeV collected by the CMS detector at the Large Hadron Collider. A pronounced resonance peak, with a statistical significance of 9.7 sigma, is observed in the distribution of the invariant mass of four leptons (electrons and/or muons) with mass and width consistent with expectations for Z boson decays. The branching fraction and cross section reported here are defined by phase space restrictions on the leptons, namely, 80 < m(4l) < 100 GeV, where m(4l) is the invariant mass of the four leptons, and m(ll) > 4 GeV for all pairs of leptons, where m(ll) is the two-lepton invariant mass. The measured branching fraction is B(Z -> 4l) = (4.2 +0.9/-0.8 (stat.) +/- 0.2 (syst.)) x 10^-6 and agrees with the standard model prediction of 4.45 x 10^-6. The measured cross section times branching fraction is sigma(pp -> Z) B(Z -> 4l) = 112 +23/-20 (stat.) +7/-5 (syst.) +3/-2 (lumi.) fb, also consistent with the standard model prediction of 120 fb. The four-lepton mass peak arising from Z -> 4l decays provides a calibration channel for the Higgs boson search in the H -> ZZ -> 4l decay mode.

Language: English
Source (journal): Journal of High Energy Physics. - Bristol
Publication: Bristol : 2012
ISSN: 1126-6708; 1029-8479 [online]
Volume/pages: 12 (2012), p. 1-28
Article Reference: 034
ISI: 000313123800034
Medium: E-only publication
http://www.mathworks.com/help/pde/heat-transfer-and-diffusion-equations.html?requestedDomain=www.mathworks.com&nocookie=true
# Heat Transfer and Diffusion

Solve PDEs that model heat transfer or other diffusion processes in 2-D space.

The heat transfer and diffusion equations are parabolic partial differential equations, $\frac{\partial u}{\partial t}-\alpha {\nabla }^{2}u=0$. The heat equation describes the heat transfer for plane and axisymmetric cases in a given region over given time: $\rho C\frac{\partial T}{\partial t}-\nabla \cdot \left(k\nabla T\right)=Q+h\left({T}_{\text{ext}}-T\right)$ where ρ is density, C is heat capacity, k is the coefficient of heat conduction, Q is the heat source, h is the convective heat transfer coefficient, and T_ext is the external temperature. The term h(T_ext - T) is a model of transversal heat transfer from the surroundings. You can use it to model heat transfer in thin cooling plates. You can use Dirichlet boundary conditions specifying the temperature on the boundary, or Neumann boundary conditions specifying the heat flux $n\cdot \left(k\nabla T\right)$. You also can use generalized Neumann boundary conditions $n\cdot \left(k\nabla T\right)+qT=g$, where q is the heat transfer coefficient.

The generic diffusion equation has the same structure as the heat equation: $\frac{\partial c}{\partial t}-\nabla \cdot \left(D\nabla c\right)=Q$ where c is the concentration of particles, D is the diffusion coefficient, and Q is the source density (source per area). You can use Dirichlet boundary conditions specifying the concentration on the boundary, or Neumann boundary conditions specifying the flux $n\cdot \left(D\nabla c\right)$. You also can specify a generalized Neumann condition $n\cdot \left(D\nabla c\right)+qc=g$, where q is the transfer coefficient and g is the flux.
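To illustrate the parabolic structure of these equations (this is not the PDE Toolbox API), here is a minimal 1-D explicit finite-difference sketch in Python for $u_t = \alpha u_{xx}$ with zero boundary values; the grid sizes and initial condition are arbitrary choices:

```python
import math

# Forward-time, centered-space (FTCS) scheme for u_t = alpha * u_xx on [0, 1],
# with u = 0 at both ends; stable when alpha*dt/dx**2 <= 1/2.
alpha = 1.0
nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha            # safely inside the stability bound

u = [math.sin(math.pi * i * dx) for i in range(nx)]   # initial temperature

for _ in range(nt):
    un = u[:]
    for i in range(1, nx - 1):
        u[i] = un[i] + alpha * dt / dx**2 * (un[i+1] - 2*un[i] + un[i-1])

# This sine mode of the exact solution decays as exp(-pi**2 * alpha * t)
t = nt * dt
mid_exact = math.exp(-math.pi**2 * alpha * t) * math.sin(math.pi * 0.5)
assert abs(u[nx // 2] - mid_exact) < 1e-3
```

The decay of the initial profile toward the boundary values is the characteristic smoothing behavior of parabolic equations that the toolbox solvers exploit in 2-D.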
## Apps

PDE: Solve partial differential equations in 2-D regions

## Topics

### Programmatic Workflow

Nonlinear Heat Transfer in a Thin Plate: Perform a heat transfer analysis of a thin plate.

Heat Equation for Metal Block with Cavity: Use command-line functions to solve a heat equation that describes the diffusion of heat in a metal block with a rectangular cavity.

Heat Distribution in a Circular Cylindrical Rod: Analyze a 3-D axisymmetric model by using a 2-D model.

### PDE App Workflow

Heat Transfer Between Two Squares Made of Different Materials: PDE App. Solve a heat transfer problem with different material parameters.

Heat Equation for Metal Block with Cavity: PDE App. Use the PDE app to solve a heat equation that describes the diffusion of heat in a metal block with a rectangular cavity.

Heat Distribution in a Circular Cylindrical Rod: PDE App. Solve a 3-D parabolic PDE problem by reducing it to 2-D using coordinate transformation.

### Concepts

Parabolic Equations: Mathematical definition and discussion of the parabolic equation
http://mathoverflow.net/questions/21888/when-is-every-solid-perfect-complex-faithful
# When is every “solid” perfect complex faithful? Let $R$ be a noetherian commutative ring. Consider $D^{perf}(R)=K^b(R-proj)$ the category of bounded complexes of finitely generated projective $R$-modules, with maps of complexes up to homotopy. QUESTION: For which $R$ is it true that every complex $E \in D^{perf}(R)$ whose support is the whole spectrum, $Supph(E)=Spec(R)$, is also $\otimes$-faithful? Explanations: (1) For simplicity, let us call an object $E$ "solid" if it is supported everywhere: $Supph(E)=Spec(R)$. This means that for every $P\in Spec(R)$, the object $E_P\in K^b(R_P-proj)$ is non-zero (i.e. not null-homotopic). Equivalently, the image of $E$ in $D(\kappa(P))$ is non-zero for every residue field $\kappa(P)$. Equivalently again, the ordinary support of the total homology of $E$ is the whole $Spec(R)$, as a finitely generated $R$-module. ($Supph$ is for support of the $h$omology and is called the "homological support".) (2) An object $E$ in a tensor-category can be called "$\otimes$-faithful" (simply "faithful") if $-\otimes E$ is a faithful functor, i.e., for every map $f:E'\to E''$, the map $f\otimes E: E'\otimes E\to E''\otimes E$ is zero only if $f$ is zero. (3) It is an exercise on rigid tensor-triangulated categories to show that $E\in D^{perf}(R)$ is faithful if and only if the unit $\eta:R\to E\otimes E^\vee$ is split injective, where $E^\vee$ is the dual of $E$ and of course $R=R[0]=\cdots 0\to 0\to R\to 0\to 0\cdots$ is the unit of the tensor on $D^{perf}(R)$. (Use that $\eta$ is split injective after applying $-\otimes E$ by the unit-counit relation.) Split injectivity of $\eta:R\to E\otimes E^\vee$ forces $E$ to be solid. In short, we always have faithful$\implies$ solid and the question is: For which $R$ is the converse true? (4) It is well-known (and non-trivial) that a map $f:E'\to E''$ in $D^{perf}(R)$ which goes to zero in every residue field must be $\otimes$-nilpotent: $\exists\ n>0$ such that $f^{\otimes n}=0$. 
So, if $f\otimes E=0$ with $E$ solid, one can conclude $f^{\otimes n}=0$ for some $n>0$ but not necessarily $f=0$ in general. (5) Note that it is necessary for "solid$\implies$ faithful" that $R$ be reduced. Indeed, let $a\in R$ with $a^2=0$ and let $E=cone(a)$ be the complex $E=\cdots0\to 0\to R\overset{a}\to R\to 0\to 0\cdots$. Then $f=a:R\to R$ satisfies $f\otimes E=0$ and $E$ is solid. (6) Reduced is not enough though. I've an example of a non-faithful solid complex over $R=k[T^2,T^3]\subset k[T]$ ($k$ a field). (7) When $R$ is Dedekind, every solid complex has a non-zero projective summand, hence is faithful. So, that is a class of examples where solid does imply faithful. (8) In particular, I would be very interested in a regular ring $R$ with a non-faithful solid object. (Call that a conjecture in disguise, if you want.) References: M. Hopkins, "Global methods in homotopy theory", Homotopy theory (Durham, 1985), 73--96. A. Neeman, "The chromatic tower for $D(R)$", Topology 31 (1992), no. 3, 519--532. Article (requires access). R. Thomason, "The classification of triangulated subcategories", Compositio Math. 105 (1997), no. 1, 1--27. Article (requires access). - Very nice question! –  Hailong Dao Apr 20 '10 at 17:35 Doesn't (4) give an equivalent condition? ie $E$ is solid if and only if for every $f$, $f\otimes E=0$ implies that for some $n$, $f^{\otimes n}=0$. (The converse direction follows since $E\otimes k(p)\not=0$.) –  Don Stanley Apr 22 '10 at 7:05 Correct: (4) is equivalent to solid. The question is: For which rings R does "E solid" imply "E faithful". It is a question about R –  Paul Balmer Apr 22 '10 at 15:06
http://math.stackexchange.com/questions/254024/number-of-distinguishable-ways-a-ten-digit-number-can-be-arranged
# Number of distinguishable ways a ten digit number can be arranged? If there is a ten digit number (like a US 10 digit telephone number) such as 7177425231, how many different ways can the digits be rearranged? I understand how you would go about finding this if you could only use the digits 0-9 once in the ten digit number, but since this example has some digits out of 0-9 which are missing, and some digits which are repeated (like 7) I'm not sure how you would go about doing this? - In addition to the answer you received, you might want to review this entire paper and do all the problems, especially problem 1.18. newagepublishers.com/samplechapter/001488.pdf – Amzoti Dec 8 '12 at 21:57 The locations for the $7$'s can be chosen in $\dbinom{10}{3}$ ways. For each such choice, the locations of the $2$'s can be chosen in $\dbinom{7}{2}$ ways. Once this is done, the locations of the $1$'s can be chosen in $\dbinom{5}{2}$ ways. The singletons can then be arranged in $3!$ ways. Multiply. There is nothing special about the order in which this was done. We could say that the location of the $3$ can be any of $10$ places, and then the location of the $4$ can be any of $9$, and then the location of the $5$ any of $8$. Then the locations of the $1$'s can be chosen in $\dbinom{7}{2}$ ways, and the locations of the $2$'s in $\dbinom{5}{2}$ ways. The $7$'s now must go into the empty places. Another way: Paint the repeated numbers so as to make them distinguishable. There are $10!$ ways to arrange the new version of the digits. Now unpaint the $7$'s. Always $3!$ of the old painted arrangements collapse into one. So we need to divide by $3!$. Unpaint the $1$'s, and the $2$'s. That divides the number by another $2!2!$.
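All three counting arguments in the answer give the same number, which can be confirmed (including by brute force) in a few lines of Python; the digit string is the one from the question:

```python
from math import factorial, comb
from itertools import permutations

digits = "7177425231"   # three 7s, two 1s, two 2s, and 4, 5, 3 once each

# "Paint and unpaint": divide 10! by the factorials of the repeat counts
counts = {d: digits.count(d) for d in set(digits)}
n = factorial(len(digits))
for c in counts.values():
    n //= factorial(c)
assert n == 151200

# Same answer by choosing positions: C(10,3) * C(7,2) * C(5,2) * 3!
assert comb(10, 3) * comb(7, 2) * comb(5, 2) * factorial(3) == 151200

# Brute-force check: count the distinct permutations directly
assert len(set(permutations(digits))) == 151200
```

The brute force iterates over all 10! = 3,628,800 orderings, so it takes a moment, but it confirms that exactly 151,200 of them are distinguishable.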
https://www.physicsforums.com/threads/speed-of-refracted-waves-in-different-mediums.316128/
Speed of refracted waves in different mediums

1. May 24, 2009 taylorboss_k

1. The problem statement, all variables and given/known data

Two vibrating sources emit waves in the same elastic medium. The first source has a frequency of 25 Hz, while the frequency of the second source is 75 Hz. Waves from the first source have a wavelength of 6.9 m. They reflect from a barrier back into the original medium, with an angle of reflection of 25 degrees. Waves from the second source refract into a different medium with an angle of incidence of 35 degrees. The speed of the refracted waves is observed to be 96 m/s.

2. Relevant equations

What is the speed of the reflected waves from the first source in the original medium?

3. The attempt at a solution

You cannot tell, since it has most likely changed speed after refraction? I'm doing a correspondence course, and learning out of a book is rough. Can someone explain this to me please?

2. May 25, 2009 Redbelly98 Staff Emeritus Re: refraction? Welcome to Physics Forums. There seems to be a lot of unnecessary information given here. But they do tell us what the first wave's frequency and wavelength are. What equation can you use to get the speed? Last edited: May 26, 2009

3. May 25, 2009 taylorboss_k Re: refraction? Darn it. I posted the right problem but the wrong question. The right question is: what is the speed of the waves from the second source in the original medium?

4. May 25, 2009 Redbelly98 Staff Emeritus Re: refraction? It appears there is not enough information to solve this problem, unless we assume the same speed for the two frequencies. I fail to see the relevance of the 25 degree angle of reflection for the first wave. Was there a figure accompanying this problem? Perhaps it contains other information, or a better description of the situation.

5. May 25, 2009 taylorboss_k Re: refraction? That's all the information given. No figures. It's got me all confused.

6.
May 26, 2009 Redbelly98 Staff Emeritus Re: refraction? It seems we should assume the two wave speeds are the same, which is generally true anyway. In that case, my comment in post #2 applies. Similar Discussions: Speed of refracted waves in different mediums
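Following Redbelly98's hint, the speed in the original medium comes straight from $v = f\lambda$; and if, as suggested in post #6, both frequencies travel at the same speed in that medium, the second source's wavelength follows too. A quick check:

```python
# Wave speed from v = f * lambda for the first source (25 Hz, 6.9 m).
f1, wavelength1 = 25.0, 6.9

v_original = f1 * wavelength1
print(f"{v_original:.1f} m/s")        # 172.5 m/s in the original medium

# Assuming (post #6) the second source shares this speed in the original
# medium, its wavelength there is shorter:
f2 = 75.0
wavelength2 = v_original / f2
print(f"{wavelength2:.1f} m")         # 2.3 m
```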
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-14-acids-and-bases-active-learning-questions-page-701/2
## Chemistry 9th Edition

Strength refers to the percent ionization of an acid or base, while concentration refers to the number of moles of acid or base per liter of solution. $HCl$ is a strong acid and can never be described as a weak acid. $HCl$ is concentrated when there is a large number of moles of $HCl$ per liter of solution and dilute when there is a small number of moles of $HCl$ per liter of solution. Ammonia is a weak base and never a strong base. Ammonia is concentrated when there is a large number of moles of ammonia per liter of solution and dilute when there is a small number of moles of ammonia per liter of solution. The conjugate base of a weak acid is a relatively strong base: the weaker the acid, the stronger its conjugate base.
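The strength/concentration distinction can be made concrete by computing percent ionization: a strong acid like $HCl$ ionizes essentially 100% at any concentration, while a weak acid's percent ionization is set by its $K_a$ and rises on dilution. A sketch with an illustrative acetic-acid-like $K_a$ (the numbers are mine, not from the textbook):

```python
from math import sqrt

def percent_ionization_weak(Ka, c0):
    """Percent ionization of a weak monoprotic acid HA at concentration c0 (M),
    solving Ka = x^2 / (c0 - x) exactly via the quadratic formula."""
    x = (-Ka + sqrt(Ka * Ka + 4 * Ka * c0)) / 2   # equilibrium [H+]
    return 100 * x / c0

# Same weak acid, two concentrations: ionization rises as the solution
# is diluted, but the acid never becomes "strong".
print(round(percent_ionization_weak(1.8e-5, 1.0), 2))    # ~0.42 % at 1.0 M
print(round(percent_ionization_weak(1.8e-5, 0.01), 2))   # ~4.15 % at 0.01 M
```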
http://repub.eur.nl/pub/16054/
In this paper we propose a solution to the non-robustness that plagues the estimation of inverted U-shaped relationships using panel data, such as the relation between pollution and income. When the dependent and independent variables (like pollution and income) are both time-related, separating the effect of the independent variable from time effects brings about a fundamental identification dilemma: the restrictions imposed on the controls might drive the shape of the estimated relationship between the dependent (pollution) and independent (income) variables. Our solution consists of imposing the very weak constraint that arbitrary cross-sectional units share the same relationship. We apply our methodology to two widely studied cases, namely SO2 and CO2 emissions. Interestingly, our estimates are insensitive to the required subjective choices, yet differ strongly from the literature so far. We find consistent positive income effects in both cases, and time-effect estimates with a clear U-shaped trend for SO2 emissions but only a slight one for CO2 emissions.
http://math.stackexchange.com/users/47779/user47779?tab=summary
user47779 (Mathematics Stack Exchange profile summary: 16 reputation, ~2k people reached)

Questions (4):
- Which one of the following is true for all vectors $u$, $v$ and $w$ where $u\cdot v = 0$ and $u\cdot w = 0$?
- Describing the intersection of two planes
- Which one of the following vectors in $\Bbb R^3$ is a unit vector that is parallel to the plane with general equation $2x+2y+z=1$?
- Matrix addition, multiplication and inversion problem

Tags: vector-spaces × 3, matrices
https://www.physicsforums.com/threads/neural-network-based-controller.698538/
Neural Network based controller

1. Jun 24, 2013

date.chinmay

I'm working on a project that deals with temperature control of a room; the idea is to keep the temperature within a limit. I have prior data, which I used to train a neural network that is 99.5% accurate. There are 5 inputs (x, y, z, A, B) and one output (T). Now I want to use this network in a control architecture of some kind. Any suggestions? Which should I use?

P.S. The system is highly complex, nonlinear and dynamic in nature. I have already tried standard inversion of the system, but it doesn't work since there are fewer outputs than inputs. I'm doing this in MATLAB.

2. Jun 24, 2013

DScheuf

3. Jun 24, 2013

date.chinmay

The 3 coordinates of a point, the fan speed at that point, and the dissipated heat at that point: X, Y, Z, S, H.
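Since the trained network maps inputs forward to temperature but is not invertible, one common workaround is model-based search: evaluate the forward model over candidate values of the controllable input and pick the best. A minimal one-step-ahead sketch; the surrogate model and all numbers below are invented stand-ins, not the poster's actual network:

```python
# One-step-ahead controller: search over candidate fan speeds S and pick
# the one whose predicted temperature is closest to the setpoint.

def predicted_T(x, y, z, S, H):
    # Hypothetical surrogate for the trained network: temperature rises
    # with dissipated heat H and falls with fan speed S.
    return 20.0 + 0.5 * H - 2.0 * S

def choose_fan_speed(x, y, z, H, T_set, candidates):
    """Pick the candidate fan speed minimizing |T_pred - T_set|."""
    return min(candidates, key=lambda S: abs(predicted_T(x, y, z, S, H) - T_set))

speeds = [s / 10 for s in range(0, 51)]   # candidate speeds 0.0 .. 5.0
best = choose_fan_speed(1, 2, 0.5, H=10.0, T_set=22.0, candidates=speeds)
print(best)   # 1.5 -- the speed whose prediction hits 22 degrees exactly
```

In practice one would replace the grid search with an optimizer and add feedback (e.g., re-planning each time step, as in model predictive control), but the structure stays the same.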
http://math.stackexchange.com/questions/286252/is-this-set-a-regular-surface
# Is this set a regular surface?

I'm reading "Differential Geometry of Curves and Surfaces" by Manfredo do Carmo, and I'm doing the exercises of chapter 2. [The definition of regular surface being followed, and the exercise itself, were given as images that did not survive extraction.] My problem is with the first part: I think that it is not a regular surface (the second one is), but I don't know how to prove that something is not a regular surface.

- What happens if you take the point $p$ on the boundary of the unit disc? How would you construct the homeomorphism? – Sigur Jan 24 '13 at 23:50

You're correct that the first one is not a surface. Here's how to check that condition $2$ fails. Take a point $(x,y,0)$ on the boundary, that is, where $x^2 + y^2 = 1$. Take any open neighborhood $V$ of this point and form the intersection $V \cap S$. This set $V \cap S$ needs to be homeomorphic to an open set $U$. This would imply that $V \cap S$ was itself open. Can you see why this cannot be the case?

- I think that your idea is good, but not your conclusion. You are saying that just because $V\cap S$ is homeomorphic to $U$, then $V\cap S$ is open in $\Bbb R^3$, but that is not true: the homeomorphism is between the sets $U$ and $V\cap S$, and its range is not $\Bbb R^3$, so the map is indeed an open map, but only between those two sets. (So $\phi(U)= V\cap S$ is open in $V\cap S$, but that is obvious in this case, where $\phi$ is the homeomorphism.) – Daniel Jan 25 '13 at 21:08

- For example, take the sphere $S^2$. We know that this set is a regular surface, and also that it has empty interior. Take $p\in S^2$, a neighborhood $V$ of $p$, and consider $V\cap S^2$: clearly this set is not open in $\Bbb R^3$ (in fact it has empty interior), so under that reasoning the sphere would not be a regular surface. At least that is how I understood your answer. – Daniel Jan 25 '13 at 21:36

- $V \cap S$ is open in its own topology (the subspace topology). We can forget about $\mathbb{R}^3$ here. – Isaac Solomon Jan 26 '13 at 0:29

- If you check, you'll see that $V \cap S$ cannot actually be open in its own topology. – Isaac Solomon Jan 27 '13 at 2:16
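To finish the argument sketched in the answer and comments, here is one way (my own sketch, not from the thread) to see that no neighborhood of a boundary point in the closed disc can be homeomorphic to an open subset of $\mathbb{R}^2$:

```latex
% Sketch (not from the thread): condition 2 fails at boundary points.
Let $S = \{(x,y,0) : x^2 + y^2 \le 1\}$ and let $p = (x,y,0)$ with
$x^2 + y^2 = 1$. Suppose $\varphi \colon U \to V \cap S$ were a
homeomorphism with $U \subseteq \mathbb{R}^2$ open. Pick a small disc
$D \subseteq U$ around $q = \varphi^{-1}(p)$; then $D \setminus \{q\}$
is an annulus, hence not simply connected. But for $V$ small,
$V \cap S$ is a half-disc neighborhood of $p$, and a half-disc minus
the boundary point $p$ is contractible, hence simply connected.
Restricting $\varphi$ gives a homeomorphism between deleted
neighborhoods of $q$ and of $p$, which is impossible because simple
connectedness is a topological invariant.
```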
http://cognet.mit.edu/journal/10.1162/NECO_a_00578
## Neural Computation May 2014, Vol. 26, No. 5, Pages 907-919 (doi: 10.1162/NECO_a_00578) © 2014 Massachusetts Institute of Technology Prewhitening High-Dimensional fMRI Data Sets Without Eigendecomposition Article PDF (297.45 KB) Abstract This letter proposes an algorithm for linear whitening that minimizes the mean squared error between the original and whitened data without using the truncated eigendecomposition (ED) of the covariance matrix of the original data. This algorithm uses Lanczos vectors to accurately approximate the major eigenvectors and eigenvalues of the covariance matrix of the original data. The major advantage of the proposed whitening approach is its low computational cost when compared with that of the truncated ED. This gain comes without sacrificing accuracy, as illustrated with an experiment of whitening a high-dimensional fMRI data set.
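The general idea (approximate the leading eigenpairs with Lanczos iterations instead of a full eigendecomposition, then whiten in that subspace) can be sketched with SciPy, whose `eigsh` wraps ARPACK, a Lanczos-type iterative solver. This is an illustration of truncated whitening in general, not the letter's specific algorithm, and the dimensions are toy-sized:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))       # 2000 "voxels" x 50 "time points"
X = X - X.mean(axis=1, keepdims=True)     # center each dimension

C = (X @ X.T) / (X.shape[1] - 1)          # sample covariance (rank-deficient)

k = 10                                    # leading eigenpairs to keep
vals, vecs = eigsh(C, k=k, which="LM")    # Lanczos-type solver, no full ED

# Whitening operator restricted to the top-k eigenspace:
W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
Y = W @ X

# Within the retained subspace, the whitened covariance is the identity:
Cy = vecs.T @ ((Y @ Y.T) / (X.shape[1] - 1)) @ vecs
print(np.allclose(Cy, np.eye(k), atol=1e-6))   # True
```

For genuinely high-dimensional data one would avoid forming `C` explicitly and instead pass `eigsh` a `LinearOperator` that applies `X @ (X.T @ v)`.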
https://rooms-4you.com/find-the-exact-value-csc225/
# Find the Exact Value csc(225)

Apply the reference angle by finding the angle with equivalent trig values in the first quadrant: the reference angle for $225^\circ$ is $45^\circ$. Make the expression negative because cosecant is negative in the third quadrant:

$$\csc(225^\circ) = -\csc(45^\circ)$$

The exact value of $\csc(45^\circ)$ is $\sqrt{2}$. The result can be shown in multiple forms.

Exact Form: $-\sqrt{2}$

Decimal Form: $-1.41421356\ldots$
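The result is easy to confirm numerically, since $\csc\theta = 1/\sin\theta$:

```python
import math

theta = math.radians(225)
csc = 1 / math.sin(theta)
print(csc)                                 # ~ -1.41421356..., i.e. -sqrt(2)
print(math.isclose(csc, -math.sqrt(2)))    # True
```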
https://www.physicsforums.com/threads/linear-algebra.210901/
# Linear Algebra

1. Jan 24, 2008

### fk378

1. The problem statement, all variables and given/known data
Choose h and k such that the system has a) no solution, b) a unique solution, and c) many solutions. Give separate answers for each part.

x1 + h·x2 = 2
4x1 + 8x2 = k

2. Relevant equations

3. The attempt at a solution
I set up the matrix

[ 1 h 2 ]
[ 4 8 k ]

and I multiplied the top row by -4 and added it to the second row to get

[ 1    h      2   ]
[ 0 (-4h+8) (k-8) ]

To get part A) (no solution), I figured that the second pivot entry must be 0 while the right-hand side isn't; therefore if h=2 and k does not equal 8, there is no solution.

Here is where I get lost... I didn't get the answers that the book gives.

For part B), unique solution, the book says h does not equal 2... but what does that mean? If h doesn't equal 2, then it can take on any value besides 2, so k can be almost any value as well.

For part C), many solutions, the book says h=2 and k=8. That means the second row reads 0=0... so wouldn't we just disregard it? Then we are left with the first row saying 1(x1)+2(x2)=2.

I don't understand parts B and C. Another question: am I right to assume that we're just looking at -4h+8 here? Or should I also be considering k-8? Thanks!

2. Jan 24, 2008

### Rainbow Child

Do you know about determinants and Cramer's rule?

3. Jan 24, 2008

### fk378

Nope, I do not.

4. Jan 24, 2008

### Rainbow Child

Ok! Solve the first equation for $$x_1$$ and substitute the result into the second one. In order to solve the reduced equation for $x_2$, its coefficient must be non-zero.

5. Jan 24, 2008

### Defennder

If you have learned about the reduced row-echelon form of a matrix, then you would observe that for the unknowns to be solvable with exactly one solution, the reduced row-echelon form of the coefficient matrix must be the identity matrix. You do know how to use elementary row operations, don't you? For part C, note that you must obtain a row of 0s in the reduced row-echelon form.

6. Jan 25, 2008

### HallsofIvy (Staff Emeritus)

That means exactly the opposite of what you said in part A: "If h=2 and k does not equal 8, then there is no solution." If h is any number other than 2, then you can solve for x2 and then for x1; you will have exactly one solution.

In part A, if h=2 and k is not 8, then the last row is [0 0 (k-8)], which is equivalent to the equation 0x1 + 0x2 = k-8. If k-8 is not 0, there are NO values of x1 and x2 that satisfy it. If k-8 = 0, you have [0 0 0], equivalent to 0x1 + 0x2 = 0, which is true for ALL x1, x2. You are then left with the first row, as you say. You could pick any number for, say, x1 and solve for x2: there are an infinite number of solutions.

As for your last question: as long as -4h+8 is not 0, there exists a unique solution no matter what k-8 is. But if -4h+8 = 0, then k-8 becomes important. If, in that case, k-8 = 0, there are an infinite number of solutions; if k-8 is not 0, there exists NO solution.
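The three cases can also be checked mechanically by comparing the rank of the coefficient matrix with the rank of the augmented matrix (a quick sketch, not from the thread):

```python
import numpy as np

def solution_type(h, k):
    """Classify the system x1 + h*x2 = 2, 4*x1 + 8*x2 = k."""
    A = np.array([[1.0, h], [4.0, 8.0]])            # coefficient matrix
    Ab = np.array([[1.0, h, 2.0], [4.0, 8.0, k]])   # augmented matrix
    rA, rAb = np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab)
    if rA < rAb:
        return "no solution"                        # inconsistent row 0 0 | c
    return "unique solution" if rA == 2 else "infinitely many solutions"

print(solution_type(2, 5))   # h=2, k!=8  -> no solution
print(solution_type(3, 5))   # h!=2       -> unique solution
print(solution_type(2, 8))   # h=2, k=8   -> infinitely many solutions
```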
http://math.stackexchange.com/questions/251217/using-the-canonical-markov-property-to-prove-an-obvious-fact-about-markov-chains
# Using the canonical Markov property to prove an obvious fact about Markov chains

Given a Markov chain $\{X_n: n \geq 1 \}$ such that $$\mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n) = \mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n, \ldots, X_1= x_1),$$ how can I formally prove that $$\mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n) = \mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n, X_k = x_k )$$ for $k = 1, \ldots, n-1$?

It has been a while since I've done anything with Markov chains, so I apologize for any poor notation in advance. Notice that the event $\{X_n=x_n, X_k=x_k\}$ lies in $\sigma\{X_n,\ \ldots,\ X_1 \}$ for $k< n$. Really, the Markov property for finite state spaces is equivalent to $P(X_{n+1}=x_{n+1}\,|\, \sigma\{X_n,\, \ldots,\, X_1 \} )=P(X_{n+1}=x_{n+1}\,|\, X_n=x_n).$

The simplest (minimally measure-theoretic) approach that I found is the following. First, let $H$ denote the set of all histories of the states in all periods except for periods $k$, $n$ and $n+1$, $$H = \Big\{ (x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_{n-1}) \Big\},$$ and write $h$ for a particular element of $H$, identified with the event $\{X_1 = x_1, \ldots, X_{k-1} = x_{k-1}, X_{k+1} = x_{k+1}, \ldots, X_{n-1} = x_{n-1}\}$. Using this, we can see that: \begin{align} \mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n, X_k = x_k ) &= \sum_{h \in H} \mathbb{P}(X_{n+1} = x_{n+1}, \ h \ | \ X_n = x_n, X_k = x_k) \\ &= \sum_{h \in H} \mathbb{P}(X_{n+1} = x_{n+1} \ | \ h, X_n = x_n, X_k = x_k)\, \mathbb{P}(h \ | \ X_n = x_n, X_k = x_k) \end{align} Now we note that: \begin{align} \mathbb{P}(X_{n+1} = x_{n+1} \ | \ h, X_n = x_n, X_k = x_k) &= \mathbb{P}( X_{n+1} = x_{n+1} \ | \ X_n = x_n, X_{n-1} = x_{n-1},\ldots, X_1 = x_1 ) \\ &= \mathbb{P}( X_{n+1} = x_{n+1} \ | \ X_n = x_n) \end{align} by the definition of $h$ and the Markov property.

Since the conditional probabilities of the histories sum to one, $$\sum_{h \in H} \mathbb{P}(h \ | \ X_n = x_n, X_k = x_k) = 1,$$ substituting these expressions in, we get: \begin{align} \mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n, X_k = x_k ) &= \sum_{h \in H} \mathbb{P}(X_{n+1} = x_{n+1} \ | \ h, X_n = x_n, X_k = x_k)\, \mathbb{P}(h \ | \ X_n = x_n, X_k = x_k) \\ &= \mathbb{P}(X_{n+1} = x_{n+1} \ | X_n = x_n) \sum_{h \in H} \mathbb{P}(h \ | \ X_n = x_n, X_k = x_k) \\ &= \mathbb{P}(X_{n+1} = x_{n+1} \ | X_n = x_n) \end{align} which proves the desired result.
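The identity being proved is also easy to check empirically on a small chain: condition $P(X_{n+1} \mid X_n)$ once on $\{X_n = b\}$ alone and once on $\{X_n = b, X_k = c\}$, and compare. A simulation sketch (not a proof; the chain and numbers are mine):

```python
import random

random.seed(1)
P = {0: (0.7, 0.3), 1: (0.4, 0.6)}   # row-stochastic transition probabilities

def step(x):
    return 0 if random.random() < P[x][0] else 1

paths = []
for _ in range(200_000):
    x = [random.randint(0, 1)]        # X_1 uniform on {0, 1}
    for _ in range(4):                # generate X_2 .. X_5
        x.append(step(x[-1]))
    paths.append(x)

# Compare P(X_5 = 0 | X_4 = 0) with P(X_5 = 0 | X_4 = 0, X_1 = 1):
a = [p for p in paths if p[3] == 0]
b = [p for p in paths if p[3] == 0 and p[0] == 1]
pa = sum(p[4] == 0 for p in a) / len(a)
pb = sum(p[4] == 0 for p in b) / len(b)
print(round(pa, 2), round(pb, 2))     # both ~ 0.70 = P[0][0]
```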
http://mathoverflow.net/questions/27697/what-is-the-gromov-witten-potential-associated-to-string-topology?answertab=votes
# What is the Gromov-Witten potential associated to String Topology?

Kevin Costello's article on the Gromov-Witten potential associated to a TCFT constructs, for each TCFT (i.e., a functor from chains on Riemann surfaces with boundary to chain complexes satisfying certain conditions), a canonical formal power series $D$ with coefficients in a certain Fock space $\mathcal{F}$, which is constructed from the chain complex $V$ that the TCFT associates to the circle.

When one can construct the Gromov-Witten invariants for a manifold, we get a TCFT from Gromov-Witten theory. In that case a certain choice of polarization allows us to identify this potential $D$ with the Gromov-Witten potential. This potential encodes the intersection numbers of $\Psi$-classes and the fundamental class.

According to Costello's earlier article on TCFTs and $A_\infty$ Calabi-Yau categories, the constructions of string topology allow us to define a TCFT for each oriented closed manifold $M$ (basically the chain-level version of Godin's operations in homology). One also gets a Gromov-Witten potential in this case. Is there a simpler expression known for this potential? What geometric information does it encode?

- I think that these are all open questions. In principle, following Costello, we can write down a GW potential for any given TCFT or Calabi-Yau A-infinity category, but I don't think this has been done yet for any concrete examples whatsoever, or at least nothing has been written up. I would be very happy, though, if I were wrong. – Kevin H. Lin Jun 10 '10 at 19:07

- Does Costello's construction of the B model agree with Kontsevich-Barannikov's in genus 0? (asking by chance) – HYYY Jun 11 '10 at 17:32

Here's how I understand the situation (I'm not very deep into this and might well be wrong). Costello's paper explains how to construct the Gromov-Witten potential from a compact $A_\infty$ Calabi-Yau category given two additional conditions: 1) the Hodge-to-de Rham spectral sequence degenerates, and 2) the induced pairing on $HH_*$, as defined in his first paper, is non-degenerate. This is explained, for example, at the bottom of page 9. Neither of these conditions is satisfied for the string topology of a manifold (probably never, somehow). The smoothness, for example, breaks pretty clearly: think about $S^n$, where $C^*(X)$ is $\mathbb{Q}[x]/x^2$ with no higher operations. The calculation of the HH-to-cyclic spectral sequence is a tad easier for odd spheres: it is the case of a free commutative algebra on an odd variable, which you can find in, say, Loday's book. $HH_*$ is differential forms on the superspace $\mathbb{R}^{0|1}$, and normally the Connes operator acts by the de Rham $d$ extended to superdifferential forms (I haven't checked this little part, but it's the only thing that makes any sense...).
https://scicomp.stackexchange.com/questions/30845/derivation-of-backward-differentiation-formulasbdf/30855
# Derivation of backward differentiation formulas (BDF)

I have been reading up on numerical techniques that are used to solve stiff ordinary differential equations. From the description given here, I could follow the steps up to equation (5). I am finding it difficult to understand how equation (6) is obtained and how the coefficients of BDF are computed. Could someone recommend a reference in which these steps are detailed?

• This is probably more suited to math.se – Nox Jan 6 '19 at 17:22
• I think there won't be a reference because it's just calculus: the lhs is a function of t, the rhs is a polynomial in t, so you just differentiate both to get a linear equation in $y_n$, solve for it and read off the coefficients of the other y's. – Kirill Jan 6 '19 at 17:50
• Actually, there is no solve involved. One directly reads off the coefficients for all the $y_{n-j}$ so that $f(t_n)=\dot y(t_n)$ is exactly approximated by the polynomial (up to order $k$). – Jan Jan 8 '19 at 14:03

In BDF schemes for $\dot y = f$, one uses $f(t_n)=\dot y(t_n)$ and tries to approximate $$\dot y(t_n)\approx \sum_{j=0}^k\alpha_j y_{n-j}$$ by the current value $y_n$ (that is to be computed) and the $k$ previously computed approximations. In the presented approach, in (5), $y$ is approximated by a polynomial $p$ in $t$ fitted to the $y_{n-j}$, so that the time derivative of the polynomial at $t_n$ approximates $\dot y(t_n)$ as desired.

With that, for a given approximation order $k$, one can read off the coefficients $\alpha_j$ by evaluating $\dot p$ at $t_n$.

For $k=1$: $\quad \dot y(t_n) \approx \dot p(t_n) = \frac{1}{h}(y_n - y_{n-1})$, which gives that $h\dot y(t_n)$ is approximated by $1\cdot y_n + (-1)\cdot y_{n-1}$.

For $k=2$ the terms read: $$\dot y(t_n) \approx \dot p(t_n) = \frac{1}{h}(y_n - y_{n-1})+\frac{1}{2h^2}[(t_n-t_n)\nabla^2y_n + (t_n-t_{n-1})\nabla^2y_n]$$ which, with $t_n-t_{n-1}=h$ and $\nabla^2y_n = y_n - 2y_{n-1} + y_{n-2}$, gives that $h\dot y(t_n)$ is approximated by $\frac{3}{2}\cdot y_n + (-2)\cdot y_{n-1} + \frac{1}{2}y_{n-2}$.

And so on...

• If my understanding is right, the interpolating polynomial is in Newton's form, $p(t)=y[t_1] + y[t_1,t_2](t-t_1)+\cdots+y[t_1,t_2,\ldots,t_k](t-t_1)(t-t_2)\cdots(t-t_{k-1})$ – Natasha Jan 9 '19 at 4:19
• Yes, if you say so. I wasn't sure and I did not check the definition. – Jan Jan 9 '19 at 7:22
• How does one choose the initial data in the best way? – Emil Jan 9 '19 at 19:19
• Maybe my comment was off topic; found this anyway: math.stackexchange.com/a/421107/68036 (predictor-corrector) – Emil Jan 9 '19 at 19:53
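The same read-off can be mechanized for any order $k$: with equidistant nodes $t_{n-j} = t_n - jh$ (taking $t_n = 0$, $h = 1$), the coefficient of $y_{n-j}$ is the derivative at $0$ of the $j$-th Lagrange basis polynomial. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

def bdf_coefficients(k):
    """Coefficients a_j with h*y'(t_n) ~ sum_j a_j * y_{n-j}, nodes t_{n-j} = -j."""
    nodes = [Fraction(-j) for j in range(k + 1)]
    coeffs = []
    for j, tj in enumerate(nodes):
        # Derivative at 0 of the Lagrange basis polynomial l_j:
        # l_j'(0) = sum_{m != j} 1/(t_j - t_m) * prod_{i != j,m} (0 - t_i)/(t_j - t_i)
        d = Fraction(0)
        for m, tm in enumerate(nodes):
            if m == j:
                continue
            term = Fraction(1) / (tj - tm)
            for i, ti in enumerate(nodes):
                if i in (j, m):
                    continue
                term *= (0 - ti) / (tj - ti)
            d += term
        coeffs.append(d)
    return coeffs

print([str(c) for c in bdf_coefficients(1)])   # ['1', '-1']
print([str(c) for c in bdf_coefficients(2)])   # ['3/2', '-2', '1/2']
```

The $k=1$ and $k=2$ outputs reproduce the coefficients derived by hand above.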
http://support.sas.com/documentation/cdl/en/statug/66859/HTML/default/statug_seqdesign_details58.htm
### Applicable Regression Parameter Tests and Sample Size Computation

The SEQDESIGN procedure provides sample size computation for tests of a regression parameter in three regression models: normal regression, logistic regression, and proportional hazards regression.

To test a parameter in a regression model, the variance of the parameter estimate is needed for the sample size computation. In a simple regression model with one covariate X1, the variance of the estimate β̂₁ is inversely related to the variance of X1, σ²_{x1}. That is, Var(β̂₁) ∝ 1/(N σ²_{x1}) for the normal regression and logistic regression models, where N is the sample size, and Var(β̂₁) ∝ 1/(D σ²_{x1}) for the proportional hazards regression model, where D is the number of events.

For a regression model with more than one covariate, the variance of β̂₁ for the normal regression and logistic regression models is inversely related to the variance of X1 after adjusting for other covariates. That is, Var(β̂₁) ∝ 1/(N σ²_{x1}(1 − r²_{x1})), where β̂₁ is the estimate of the parameter β₁ in the model and r²_{x1} is the R square from the regression of X1 on other covariates, that is, the proportion of the variance of X1 explained by these covariates. Similarly, for a proportional hazards regression model, Var(β̂₁) ∝ 1/(D σ²_{x1}(1 − r²_{x1})).

Thus, with the derived maximum information, the required sample size or number of events can also be computed for the testing of a parameter in a regression model with covariates.

#### Test for a Parameter in the Regression Model

The MODEL=REG option in the SAMPLESIZE statement derives the sample size required for a Z test of a normal regression. For a normal linear regression model, the response variable is normally distributed with the mean equal to a linear function of the explanatory variables and the constant variance σ². The normal linear model is

Y = Xβ + ε,  ε ~ N(0, σ² I_N)

where Y is the vector of the N observed responses, X is the design matrix for these N observations, β is the parameter vector, and I_N is the identity matrix.
The least squares estimate is β̂ = (X′X)⁻¹X′Y and is normally distributed with mean β and variance σ²(X′X)⁻¹. For a model with only one covariate X1,

Var(β̂₁) = σ² / (N σ²_{x1})

where σ²_{x1} is the variance of X1. Thus, with the derived maximum information I_max, the required sample size is given by

N = I_max σ² / σ²_{x1}.

For a normal linear model with more than one covariate, the variance of a single parameter β₁ is

Var(β̂₁) = σ² [(X′X)⁻¹]₁₁ = σ² / (N σ²_{x1}(1 − r²_{x1}))

where [(X′X)⁻¹]₁₁ is the diagonal element of the (X′X)⁻¹ matrix corresponding to the parameter β₁, σ²_{x1} is the variance of the variable X1, and r²_{x1} is the proportion of variance of X1 explained by other covariates. The value σ²_{x1}(1 − r²_{x1}) represents the variance of X1 after adjusting for all other covariates. Thus, with the derived maximum information I_max, the required sample size is

N = I_max σ² / (σ²_{x1}(1 − r²_{x1})).

In the SEQDESIGN procedure, you can specify the MODEL=REG( VARIANCE= XVARIANCE= XRSQUARE=) option in the SAMPLESIZE statement to compute the required total sample size and individual sample size at each stage. A SAS procedure such as PROC REG can be used to compute the parameter estimate and its standard error at each stage.

#### Test for a Parameter in the Logistic Regression Model

The MODEL=LOGISTIC option in the SAMPLESIZE statement derives the sample size required for a Z test of a logistic regression parameter. The linear logistic model has the form

logit(p) = log(p/(1 − p)) = β′x

where p is the response probability to be modeled and β is a vector of parameters. Following the derivation in the section Test for a Parameter in the Regression Model, and using the variance of the logit response, 1/(p(1 − p)), the required sample size for testing a parameter β₁ in β is given by

N = I_max / (p(1 − p) σ²_{x1}(1 − r²_{x1}))

where σ²_{x1} is the variance of X1 and r²_{x1} is the proportion of variance explained by other covariates. In the SEQDESIGN procedure, you can specify the MODEL=LOGISTIC( PROP=p XVARIANCE= XRSQUARE=) option in the SAMPLESIZE statement to compute the required total sample size and individual sample size at each stage. A SAS procedure such as PROC LOGISTIC can be used to compute the parameter estimate and its standard error at each stage.
#### Test for a Parameter in the Proportional Hazards Regression Model

The MODEL=PHREG option in the SAMPLESIZE statement derives the number of events required for a Z test of a proportional hazards regression parameter. For analyses of survival data, Cox’s semiparametric model is often used to examine the effect of explanatory variables on hazard rates. The survival time of each observation in the population is assumed to follow its own hazard function, hᵢ(t), expressed as

hᵢ(t) = h₀(t) exp(zᵢ′β)

where h₀(t) is an arbitrary and unspecified baseline hazard function, zᵢ is the vector of explanatory variables for the ith individual, and β is the vector of regression parameters associated with the explanatory variables. Hsieh and Lavori (2000, p. 553) show that the required number of events for testing a parameter β₁ in β, associated with the variable X1, is given by

D = I_max / (σ²_{x1}(1 − r²_{x1}))

where σ²_{x1} is the variance of X1 and r²_{x1} is the proportion of variance of X1 explained by other covariates. In the SEQDESIGN procedure, you can specify the MODEL=PHREG( XVARIANCE= XRSQUARE=) option in the SAMPLESIZE statement to compute the required number of events and individual number of events at each stage. A SAS procedure such as PROC PHREG can be used to compute the parameter estimate and its standard error at each stage. Note that for a two-sample test, X1 is an indicator variable and is the only covariate in the model. Thus, if the two sample sizes are equal, then the variance σ²_{x1} = 1/4 and the required number of events for testing the parameter β₁ is given by D = 4 I_max. See the section Input Number of Events for Fixed-Sample Design for a detailed description of the sample size computation that uses hazard rates, accrual rate, and accrual time.
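The variance-inflation pattern shared by the three models can be sketched in a few lines. This is an illustration only: the function names are mine, and the formulas follow the I_max / (σ²_{x1}(1 − r²_{x1})) pattern described above with the fixed-sample maximum information of a two-sided Z test.

```python
import math
from statistics import NormalDist

def fixed_sample_information(alpha, power, beta1):
    """I_max = ((z_{1-alpha/2} + z_{power}) / beta1)^2 for a two-sided Z test."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / beta1) ** 2

def n_normal(i_max, sigma2, x_var, x_rsq=0.0):
    """Normal regression: N = I_max * sigma^2 / (sigma_x1^2 (1 - R^2))."""
    return i_max * sigma2 / (x_var * (1.0 - x_rsq))

def n_logistic(i_max, p, x_var, x_rsq=0.0):
    """Logistic regression: N = I_max / (p(1-p) sigma_x1^2 (1 - R^2))."""
    return i_max / (p * (1.0 - p) * x_var * (1.0 - x_rsq))

def d_phreg(i_max, x_var, x_rsq=0.0):
    """Proportional hazards: D = I_max / (sigma_x1^2 (1 - R^2)) events."""
    return i_max / (x_var * (1.0 - x_rsq))

# Two-sample survival comparison: X1 is a balanced 0/1 indicator, so
# sigma_x1^2 = 1/4 and D = 4 * I_max.  Hazard ratio 1.5, alpha = 0.05,
# power = 0.90 recovers the familiar ~256-event requirement.
i_max = fixed_sample_information(0.05, 0.90, math.log(1.5))
print(round(d_phreg(i_max, 0.25)))  # → 256
```

Note how a covariate R-square of 0.5 doubles the requirement in every model: the 1/(1 − r²_{x1}) factor is exactly the variance inflation from adjusting X1 for the other covariates.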
https://www.physicsforums.com/threads/time-hollow-cylinder-spends-on-ramp.278385/
# Homework Help: Time hollow cylinder spends on ramp 1. Dec 9, 2008 ### cashmoney805 1. The problem statement, all variables and given/known data A hollow cylinder (hoop) is rolling on a horizontal surface at speed v= 3.3 m/s when it reaches a 15 degree incline. (a) How far up the incline will it go? (b) How long will it be on the incline before it arrives back at the bottom? 2. Relevant equations Energy conservation: 1/2 Iw^2 + 1/2mv^2 = mgh v^2 = vo^2 +2ax x=vot + .5at^2 v=vo+at 3. The attempt at a solution I for this problem is (I think) 1/2 m (r1^2 + r2^2), but the r's cancel because w^2 = v^2/(r1^2 + r2^2) So I believe the energy equation simplifies to .25v^2 + .5v^2 = gh I solve and get h = .833 and so the length = .833/sin15 = 3.22 However, the book says the length is 4.29. Also, after that I don't know where to go. For time, I tried v=vo+at, for a I used 9.8sin 15 and t (up) I got 1.3s However, the total t= 5.2 s Thank you so much! 2. Dec 9, 2008 ### Staff: Mentor Careful. Treat this as a thin hoop--with a single radius. (You're using a formula for a thick hollow cylinder. You'd have to set r1 = r2 to use that one.) You'll need to redo this, after fixing the above. Careful--gravity is not the only force acting. A simpler way would be to figure out the average speed as it goes up the ramp. 3. Dec 9, 2008 ### cashmoney805 What other force is acting on it? Ok, I treated the hoop as a thin one and now my equation is: v^2 = gh h = 1.1 m, so length is 1.1/sin15 = 4.29m woo! Now for t... I don't really know what to use anymore :/ 4. Dec 9, 2008 ### Staff: Mentor Friction. Good. You know the distance. What's the average speed up the incline? 5. Dec 9, 2008 ### cashmoney805 vo/2 Ah then do x/v = t. How do you know when to use average velocity though? I never seem to use that.
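Putting the mentor's hints together (the numbers are the ones from the thread; this sketch is just a numerical check, not part of the original discussion):

```python
import math

g = 9.8          # m/s^2
v0 = 3.3         # initial speed at the bottom of the incline, m/s
theta = math.radians(15)

# Thin hoop rolling without slipping: (1/2)mv^2 + (1/2)I*w^2 with I = m r^2
# and w = v/r gives total KE = m v^2, so energy conservation reads v0^2 = g h.
h = v0 ** 2 / g                 # height gained, m
d = h / math.sin(theta)         # distance travelled along the incline, m

# The deceleration along the incline is uniform, so the average speed is v0/2
# on the way up (and, by symmetry, on the way down); each leg takes d/(v0/2).
t_up = d / (v0 / 2)
t_total = 2 * t_up

print(f"d = {d:.2f} m, t_up = {t_up:.2f} s, t_total = {t_total:.2f} s")
```

This reproduces the book's answers: d ≈ 4.29 m and a total time on the incline of ≈ 5.2 s. The average-speed shortcut works precisely because the acceleration is constant, so the velocity varies linearly in time.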
https://www.arxiv-vanity.com/papers/1007.0051/
# Vacuum Polarization on the Schwarzschild Metric with a Cosmic String

Adrian C. Ottewill    Peter Taylor School of Mathematical Sciences and Complex & Adaptive Systems Laboratory, University College Dublin, Belfield, Dublin 4, Ireland

###### Abstract

We consider the problem of the renormalization of the vacuum polarization in a space-time with axial but not spherical symmetry, Schwarzschild space-time threaded by an infinite straight cosmic string. Unlike previous calculations, our framework to compute the renormalized vacuum polarization does not rely on special properties of Legendre functions, but rather has been developed in a way that we expect to be applicable to Kerr space-time.

## I Introduction

In this paper we calculate the vacuum polarization $\langle\hat\varphi^2\rangle$ of a massless, minimally coupled scalar field in the region exterior to the horizon of a Schwarzschild black hole threaded by an infinite thin cosmic string which reduces the spherical symmetry to axial symmetry. We consider the scalar field in the Hartle-Hawking vacuum state, corresponding to a black hole of mass $M$ in (unstable) thermal equilibrium with a bath of blackbody radiation. There is an extensive body of work on renormalization of the vacuum polarization on black hole spacetimes (Candelas (1980); Candelas and Howard (1984); Anderson (1989, 1990); Winstanley and Young (2008)). In all of these cases, the authors have considered spherically symmetric black holes. The most astrophysically significant case, however, is the Kerr-Newman black hole. The calculation of $\langle\hat\varphi^2\rangle$ in this case has proved elusive (with the exception of its calculation on the pole of the horizon, where the effects of rotation are minimized Frolov (1982)). The principal reason for this is that the rotation of the black hole means that it no longer possesses spherical symmetry, but only axial symmetry.
The existing calculations, referenced above, rely heavily on the Legendre Addition Theorem and other well-known properties of the special functions, as well as the Watson-Sommerfeld formula. In the axially symmetric case, the Legendre Addition Theorem does not apply and the Watson-Sommerfeld formula is no longer useful, as it is no longer possible to perform the sum over the azimuthal quantum number. In the case being considered in this paper, we are dealing with a black hole where the symmetry is reduced from spherical to axial but without the added complication of rotation. This fact makes this case an ideal precursor to the Kerr-Newman case. It presents the first calculation of the renormalized vacuum polarization on the exterior region of an axially symmetric black hole. It should be noted that DeBenedictis DeBenedictis (1999) has calculated the vacuum polarization on an axially symmetric metric, but in the case of a black string, not a black hole. Most importantly, the method presented in that paper is not applicable to the current space-time or to the Kerr-Newman case. The method we present here does not rely on specific properties of the angular functions (such as Addition Theorems), and this permits a complete mode-by-mode subtraction (as opposed to the partial mode-subtractions in the literature). Furthermore, we elucidate some important points about the Christensen-DeWitt point-splitting approach Christensen (1976) to renormalization. In particular, we show that the choice of point-separation direction is intimately connected to the order of summation of the mode-sum.

## II The Mode-Sum Expression for the Green’s Function

The cosmic string is modeled by introducing an azimuthal deficit parameter, $\alpha$, into the standard Schwarzschild metric. We may describe this in coordinates $(t,r,\theta,\tilde\phi)$, where $\tilde\phi$ is periodic with period $2\pi\alpha$, so we may take $0\le\tilde\phi<2\pi\alpha$, in which the line element is given by

$$ds^2=-(1-2M/r)\,dt^2+(1-2M/r)^{-1}dr^2+r^2d\theta^2+r^2\sin^2\theta\,d\tilde\phi^2.$$
(1)

For GUT-scale cosmic strings $\alpha=1-4\mu$, where $\mu\ll1$ is the mass per unit length of the string, so we shall assume $\alpha\le1$. In Sec. V, we further limit ourselves to the case $1/2\le\alpha\le1$, which is physically justifiable since $\mu\ll1$. We can alternatively define a new azimuthal coordinate by

$$\tilde\phi=\alpha\phi$$ (2)

so that $\phi$ is periodic with period $2\pi$ and we may take $0\le\phi<2\pi$. In coordinates $(t,r,\theta,\phi)$, the line element is given by

$$ds^2=-(1-2M/r)\,dt^2+(1-2M/r)^{-1}dr^2+r^2d\theta^2+\alpha^2r^2\sin^2\theta\,d\phi^2.$$ (3)

We shall consider a massless, minimally coupled scalar field, $\varphi$, in the Hartle-Hawking vacuum state. Since this is a thermal state, it is convenient to work with the Euclidean Green’s function, performing a Wick rotation of the temporal coordinate, $\tau=it$, and eliminating the conical singularity at $r=2M$ by making $\tau$ periodic with period $2\pi/\kappa$, where $\kappa=1/4M$ is the surface gravity of the black hole. The massless, minimally coupled scalar field, $\varphi$, satisfies the homogeneous wave equation

$$\Box\varphi(\tau,r,\theta,\phi)=0,$$ (4)

which can be solved by a separation of variables by writing

$$\varphi(\tau,r,\theta,\phi)\sim e^{in\kappa\tau+im\phi}P(\theta)R(r)$$ (5)

where $P(\theta)$ is regular and satisfies

$$\left\{\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d}{d\theta}\right)-\frac{m^2}{\alpha^2\sin^2\theta}+\lambda(\lambda+1)\right\}P(\theta)=0$$ (6)

while $R(r)$ satisfies

$$\left\{\frac{d}{dr}\left((r^2-2Mr)\frac{d}{dr}\right)-\lambda(\lambda+1)-\frac{n^2\kappa^2r^4}{r^2-2Mr}\right\}R(r)=0.$$ (7)

The term $\lambda(\lambda+1)$ arises as the separation constant. The choice of $\lambda$ is arbitrary for $\varphi$ to satisfy the wave equation, but a specific choice is required in order for the mode-function to satisfy the boundary conditions of regularity on the poles. In the Schwarzschild case (no cosmic string, $\alpha=1$), regularity on the poles means that $\lambda=l$, i.e. the separation constant is $l(l+1)$. In the cosmic string case, the appropriate choice of $\lambda$ that guarantees regularity of the angular functions on the poles is

$$\lambda=l-|m|+|m|/\alpha.$$ (8)

With this choice of $\lambda$, the angular function is the Legendre function of both non-integer order and non-integer degree, viz.,

$$P(\theta)=P^{-|m|/\alpha}_{\lambda}(\cos\theta).$$ (9)

It can be shown that these angular functions satisfy the following normalization condition,

$$\int_{-1}^{1}P^{-|m|/\alpha}_{l-|m|+|m|/\alpha}(\cos\theta)\,P^{-|m|/\alpha}_{l'-|m|+|m|/\alpha}(\cos\theta)\,d(\cos\theta)=\frac{2}{(2\lambda+1)}\frac{\Gamma(\lambda-|m|/\alpha+1)}{\Gamma(\lambda+|m|/\alpha+1)}\,\delta_{ll'}.$$
(10)

The periodicity of the Green’s function with respect to $\Delta\tau$ and $\Delta\phi$, with periodicity $2\pi/\kappa$ and $2\pi$, respectively, combined with Eq. (10) allow us to write the mode-sum expression for the Green’s function as

$$G(x,x')=\frac{T}{4\pi}\sum_{n=-\infty}^{\infty}e^{in\kappa(\tau-\tau')}\sum_{m=-\infty}^{\infty}e^{im(\phi-\phi')}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)P^{-|m|/\alpha}_{\lambda}(\cos\theta')\,\chi_{n\lambda}(r,r')$$ (11)

where $\chi_{n\lambda}(r,r')$ satisfies the inhomogeneous equation,

$$\left\{\frac{d}{dr}\left((r^2-2Mr)\frac{d}{dr}\right)-\lambda(\lambda+1)-\frac{n^2\kappa^2r^4}{r^2-2Mr}\right\}\chi_{n\lambda}(r,r')=-\frac{1}{\alpha}\delta(r-r').$$ (12)

It is convenient to write the radial equation in terms of a new radial variable $\eta=r/M-1$; the radial equation then reads

$$\left\{\frac{d}{d\eta}\left((\eta^2-1)\frac{d}{d\eta}\right)-\lambda(\lambda+1)-\frac{n^2(\eta+1)^4}{16(\eta^2-1)}\right\}\chi_{n\lambda}(\eta,\eta')=-\frac{1}{\alpha M}\delta(\eta-\eta'),$$ (13)

where we have used the fact that $\kappa=1/4M$. For $n=0$, the two solutions of the homogeneous equation are the Legendre functions of the first and second kind. For $n\ne0$, the homogeneous equation cannot be solved in terms of known functions and must be solved numerically. We denote the two solutions that are regular on the horizon and infinity (or some outer boundary) by $p_{n\lambda}(\eta)$ and $q_{n\lambda}(\eta)$, respectively. A near-horizon Frobenius analysis for $n\ne0$ shows that the indicial exponent is $|n|/2$, and so we have the following asymptotic forms:

$$p_{n\lambda}(\eta)\sim(\eta-1)^{|n|/2},\qquad \eta\to1$$ (14)
$$q_{n\lambda}(\eta)\sim(\eta-1)^{-|n|/2},\qquad \eta\to1.$$

Defining the normalizations by these asymptotic forms and using the Wronskian conditions one can obtain the appropriate normalization of the Green’s function:

$$\chi_{n\lambda}(\eta,\eta')=\begin{cases}\dfrac{1}{\alpha M}P_{\lambda}(\eta_<)Q_{\lambda}(\eta_>)&n=0\\[1.5ex]\dfrac{1}{2|n|\alpha M}p_{n\lambda}(\eta_<)q_{n\lambda}(\eta_>)&n\ne0.\end{cases}$$ (15)

## III Choosing a Separation Direction

In order to renormalize $\langle\hat\varphi^2\rangle$, we subtract, in a meaningful way, the geometrical Christensen-DeWitt renormalization terms from the Green’s function. This approach rests on the fact that all Green’s functions will possess the same short-distance behaviour, encapsulated in the Hadamard parametrix. On the other hand, the full Green’s function must reflect the relevant boundary conditions of the global problem, for example periodicity with particular period in $\tau$ and $\phi$, and this is most easily expressed by a mode decomposition.
In order to perform the renormalization it is first necessary to perform a regularization of the Green’s function, and the natural regularization in the Christensen-DeWitt approach is to consider the Green’s function with the two points separated. The geometrical singularity may then be removed prior to bringing the two points together to give $\langle\hat\varphi^2\rangle$. The geometrical nature of the subtraction means that we obtain the same result whichever direction we separate in, although there is a surprising twist in the tale described below. As we are free to choose the direction of separation, we may do so in a way which makes the calculation as straightforward as possible. In almost all black hole calculations in the literature, e.g. Candelas and Howard (1984); B.P.Jensen and A.C.Ottewill (1989); Anderson (1989, 1990); Winstanley and Young (2008), the authors have preferred to separate in the temporal direction, except for on-horizon calculations. The principal reason for this is that the metric components do not depend on $t$, making the renormalization terms somewhat easier. (The same could be said for separating in $\phi$, however, making it an equally suitable candidate.) For on-horizon calculations, it is typically most convenient to separate in the radial direction (see Candelas (1980) for example). In fact, we have calculated elsewhere A.C.Ottewill and P.Taylor (arXiv:1006.4520 [math-ph]) an analytic expression for $\langle\hat\varphi^2\rangle$ on the horizon of the Schwarzschild black hole threaded by an infinite cosmic string by separating radially and using a summation formula we have derived in that paper. Since we have calculated the vacuum polarization on the horizon elsewhere, we shall concentrate here on the calculation of off-horizon values. The question we must address is what is the most convenient separation when we no longer have spherical symmetry. To answer this we start by analysing the approach taken in the literature in the spherically symmetric case.
In this case, one can sum over $m$ to obtain an expression for the Green’s function that involves an inner sum over $l$-modes and an outer sum over $n$-modes. One then converts the $l$-sum into an integral using the Watson-Sommerfeld formula and sums the $n$-modes directly. (This is, of course, only a sketch of the method; several other tricks and techniques are used to do the calculation, none of which are important to the choice of separation.) In the cosmic string case, the axial symmetry means that converting an $l$-sum to an integral results in numerical integrals over the square of Legendre functions; this is neither convenient nor useful. The most practical way to proceed with the calculation is to convert the $n$-sum to an integral and sum the $l$, $m$-modes directly, since the angular part of the Green’s function does not depend on the $n$-modes. In fact, this proves to be a very fruitful approach to these calculations, both in the axially symmetric and the spherically symmetric case. There is a caveat, however, which relates to the order in which we perform the mode sums, and which is intimately related to the distributional nature of the expressions we are dealing with. At a 4-dimensional level we have a sum over a complete set of mode functions and the expression

$$G(x,x')=\sum_i\frac{u_i(x)u_i(x')}{\lambda_i}$$

is understood in the sense of smearing with smooth functions of compact support in the 4-dimensional space. In moving to a point-separated expression with just one coordinate different we must consider the convergent limits in three of our coordinate directions. In particular, as $\partial/\partial\tau$ and $\partial/\partial\phi$ are commuting Killing vectors we may associate with them independent quantum numbers $n$ and $m$. Correspondingly, when we derive our Green’s function, it matters not whether we separate out the $\tau$ dependence or the $\phi$ dependence first. However, when we limit our test functions to 3-dimensional delta functions the corresponding sum must remain until last, i.e., the outer sum must be that which corresponds to the direction in which we have separated.
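That the order of a convergent but not absolutely convergent double sum matters can be seen in a toy example (mine, unrelated to the actual mode functions): take an array with +1 on the diagonal and -1 just above it. Every row sums to zero, yet the first column sums to one, so summing rows-first and columns-first disagree.

```python
def a(m, n):
    """Toy double-indexed array: +1 on the diagonal, -1 just above it."""
    if n == m:
        return 1
    if n == m + 1:
        return -1
    return 0

N = 200  # truncation; every individual row and column is eventually zero

# Sum over n first (rows), then m: every row m contains both the +1 and the -1.
rows_first = sum(sum(a(m, n) for n in range(N + 2)) for m in range(N))

# Sum over m first (columns), then n: column n = 0 contains only the +1.
cols_first = sum(sum(a(m, n) for m in range(N + 2)) for n in range(N))

print(rows_first, cols_first)  # → 0 1
```

The double sum here is trivially convergent in either iterated order, but not absolutely convergent, which is exactly the loophole the paper's outer-sum rule closes.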
More specifically, separating in the temporal direction corresponds to an inner $l$, $m$-sum and an outer $n$-sum, while separating in the azimuthal direction corresponds to an inner $n$-sum and an outer $l$, $m$-sum. We have two strong arguments in support of the above claim. Firstly, one can show analytically, in the Schwarzschild case, that the finite difference between summing in the different orders is precisely the analytic difference between the regular parts of the Christensen subtraction terms for temporal and azimuthal separation. In other words, separating in the temporal direction and doing the $n$-sum first gives a finite but wrong answer! An alternative way of understanding this is that the renormalization procedure adopted by Candelas-Howard and the alternative procedure of this paper result in a double mode-sum that is convergent, but not absolutely convergent. As a result, the order of summation matters. The second supporting argument is that we have an unambiguous answer on the horizon both for the Schwarzschild black hole with a cosmic string A.C.Ottewill and P.Taylor (arXiv:1006.4520 [math-ph]) and without a string Candelas (1980). Clearly the correct off-horizon answer must match up with the horizon value as the horizon is approached, and this is only true when we sum according to the rules laid out above. It is natural for us to separate in the azimuthal direction in the cosmic string case, and so in light of the previous arguments, the appropriate mode-sum form for our unrenormalized Green’s function is

$$G(r,\theta,\Delta\phi)=\frac{T}{4\pi\alpha}\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\sum_{n=-\infty}^{\infty}\chi_{n\lambda}(\eta,\eta),$$ (16)

taking $\Delta\tau=0$, $r'=r$ and $\theta'=\theta$.

## IV Renormalization

For a massless scalar field in a Ricci-flat space-time, the only Christensen-DeWitt subtraction term required for the calculation of $\langle\hat\varphi^2\rangle$ is

$$G_{\mathrm{div}}(x,x')=\frac{1}{8\pi^2\sigma(x,x')},$$ (17)

where $2\sigma(x,x')$ is the square of the geodesic distance between $x$ and $x'$ Christensen (1976).
For an azimuthal splitting this becomes

$$G_{\mathrm{div}}(r,\theta,\Delta\phi)=\frac{1}{4\pi^2}\left[\frac{1}{g_{\phi\phi}\Delta\phi^2}+\frac{g_{ab}\Gamma^a_{\phi\phi}\Gamma^b_{\phi\phi}}{12\,g^2_{\phi\phi}}+O(\Delta\phi)\right]=\frac{1}{4\pi^2M^2}\left[\frac{1}{(\eta+1)^2\sin^2\theta\,\alpha^2\Delta\phi^2}+\frac{1}{12}\frac{1}{(\eta+1)^2\sin^2\theta}-\frac{1}{6}\frac{1}{(\eta+1)^3}+O(\Delta\phi)\right].$$ (18)

One is now faced with the challenge of subtracting this geometrical expression from the mode sum Eq. (16) in such a way that the limit $\Delta\phi\to0$ may be performed. The approach in the literature, e.g. Candelas and Howard (1984); Anderson (1989, 1990); Winstanley and Young (2008), is to bring the divergent term inside the outer sum (in this case the $m$-sum, in the temporal splitting case the $n$-sum) using an identity from distribution theory,

$$\frac{1}{\Delta\phi^2}=-\sum_{m=1}^{\infty}m\,e^{im\Delta\phi}-\frac{1}{12}+O(\Delta\phi)^2$$ (19)

or its equivalent expression for temporal separation. The success of this approach in the spherically symmetric case relies heavily on the applicability of the Legendre Addition Theorem and the Watson-Sommerfeld formula. This is not possible in the general case. Instead we would like to be able to write the divergent term in $G_{\mathrm{div}}$ as a triple mode-sum over $n$, $l$, $m$ so that a full mode-by-mode subtraction may be performed. It turns out that there is a natural and general way to approach this problem. There is a whole gamut of summation formulae that can be derived simply by equating different but equivalent expressions for the same Green’s function. This requires little knowledge of the special functions being summed, only that they are solutions of the homogeneous wave equation being considered. Indeed, this approach is one of the standard ways of proving the Legendre Addition Theorem. The universality of the geometrical singularity structure provided by the Hadamard form then ensures that, with appropriate parameterisation, we can match the required coordinate divergence to an appropriate mode sum. Ideally, we equate a mode-sum expression for a Green’s function to a closed-form or quasi-closed-form expression.
The most effective way of deriving such summation formulae is by considering appropriate Green’s functions on Minkowski spacetime, where the Green’s function is usually known in closed form or quasi-closed form. In the next section, we will derive the appropriate summation formula for the current problem by considering the thermal Green’s function on Minkowski spacetime threaded by an infinite cosmic string.

## V Thermal Green’s Function on Minkowski Spacetime with a Cosmic String

We consider a massless scalar field at non-zero temperature propagating in Minkowski spacetime with an infinite cosmic string running along the polar axis. The Euclideanized metric is given by

$$ds^2=d\tau^2+d\xi^2+\xi^2d\theta^2+\alpha^2\xi^2\sin^2\theta\,d\phi^2,$$ (20)

where $\theta$ and $\phi$ are the usual polar coordinates on the 2-sphere and $\xi$ is the radial variable. The periodic (thermal) Euclidean Green’s function can be written as an image sum over the zero-temperature Green’s function, as

$$G_\beta(\Delta\tau,\Delta\mathbf{x})=\sum_{k=-\infty}^{\infty}G(\Delta\tau+k\beta,\Delta\mathbf{x})$$ (21)

where $\beta$ is the inverse temperature. A mode-sum expression for the scalar field Green’s function at zero temperature is easily found to be

$$G(\Delta\tau,\Delta\mathbf{x})=\frac{1}{8\pi^2\alpha}\int_0^\infty e^{i\omega\Delta\tau}\,d\omega\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)P^{-|m|/\alpha}_{\lambda}(\cos\theta')\frac{1}{(\xi\xi')^{1/2}}I_{\lambda+1/2}(\omega\xi_<)K_{\lambda+1/2}(\omega\xi_>)$$ (22)

where $I$ and $K$ are the modified Bessel functions Gradshteyn and Ryzhik (2000) of the first and second kind, respectively. Now, using the Fourier transform of a Comb function,

$$\sum_{k=-\infty}^{\infty}e^{i\omega k\beta}=\frac{2\pi}{\beta}\sum_{n=-\infty}^{\infty}\delta(\omega-2\pi n/\beta),$$ (23)

we arrive at an appropriate expression for the thermal Green’s function:

$$G_\beta(x,x')=\frac{T}{4\pi\alpha}\sum_{n=-\infty}^{\infty}e^{in\kappa\Delta\tau}\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)P^{-|m|/\alpha}_{\lambda}(\cos\theta')\frac{1}{(\xi\xi')^{1/2}}I_{\lambda+1/2}(n\kappa\xi_<)K_{\lambda+1/2}(n\kappa\xi_>)$$ (24)

where $\kappa=2\pi/\beta=2\pi T$, and the $n=0$ term is understood in the sense

$$\lim_{n\to0}(\xi\xi')^{-1/2}I_{\lambda+1/2}(n\kappa\xi_<)K_{\lambda+1/2}(n\kappa\xi_>)=\left(\frac{\xi_<}{\xi_>}\right)^{\lambda}\frac{1}{\xi_>(2\lambda+1)}.$$
(25)

In particular, for azimuthal separation, we have (taking into account that we must choose the appropriate order of summation)

$$G_\beta(\xi,\theta,\Delta\phi)=\frac{T}{4\pi\alpha}\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\sum_{n=-\infty}^{\infty}\frac{1}{\xi}I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi).$$ (26)

Note that, with the exception of the radial functions, this is completely equivalent to the Green’s function Eq. (16). We have an equivalent expression for this Green’s function given by Linet Linet (1992), valid for $1/2\le\alpha\le1$, in which the author writes the Green’s function as the singular part plus a regular integral part, $G_\beta=G_{\mathrm{sing}}+G_{\mathrm{int}}$. The integral part is given by

$$G_{\mathrm{int}}(x,x')=-f(\Delta\phi+\pi/\alpha)+f(\Delta\phi-\pi/\alpha)$$ (27)

where

$$f(\Psi)=\frac{T\sin(\Psi)}{8\pi^2\alpha}\int_0^\infty\frac{\sinh(\kappa R(u))}{R(u)\left[\cosh(\kappa R(u))-\cos\kappa\Delta\tau\right]}\frac{1}{\cosh(u/\alpha)-\cos(\Psi)}\,du$$ (28)

and

$$R(u)=\left[\xi^2+\xi'^2-2\xi\xi'\cos\theta\cos\theta'+2\xi\xi'\sin\theta\sin\theta'\cosh u\right]^{1/2}.$$ (29)

The singular part is

$$G_{\mathrm{sing}}(x,x')=\frac{T}{4\pi}\frac{\sinh\kappa\rho}{\rho\left[\cosh\kappa\rho-\cos\kappa\Delta\tau\right]}$$ (30)

where $\rho$ denotes the spatial separation of the two points. In particular, for azimuthal point separation, we have

$$G_\beta(\xi,\theta,\Delta\phi)=\frac{1}{4\pi^2\xi^2\alpha^2\sin^2\theta\,\Delta\phi^2}+\frac{1}{48\pi^2\xi^2\sin^2\theta}+\frac{T^2}{12}+G_{\mathrm{int}}(\xi,\theta,\Delta\phi)+O(\Delta\phi)^2.$$ (31)

Finally, equating Eqs. (26) and (31), we arrive at the very useful identity

$$\frac{1}{4\pi^2}\frac{1}{\alpha^2\sin^2\theta\,\Delta\phi^2}=\frac{T}{4\pi\alpha}\left\{\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\sum_{n=-\infty}^{\infty}\xi\,I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)\right\}-\frac{1}{48\pi^2\sin^2\theta}-\frac{T^2\xi^2}{12}-\xi^2G_{\mathrm{int}}(\xi,\theta,\Delta\phi)+O(\Delta\phi)^2.$$ (32)

It is important to emphasize that this equation is true for any $\xi$ and for any temperature $T$. For our purposes, the obvious choice for $T$ is the temperature of the Schwarzschild black hole so that $\kappa$ is now the Schwarzschild surface gravity. In the next section we show that there is a prescription to assign $\xi$ in terms of the Schwarzschild radial variable in such a way as to guarantee the convergence of the mode-sum.

## VI Mode By Mode Subtraction

We now return to the Schwarzschild cosmic string case. From Eq. (16) and Eq.
(18), we have the following expression for the renormalized vacuum polarization:

$$\langle\hat\varphi^2\rangle_{\mathrm{ren}}=\lim_{\Delta\phi\to0}\left[G(r,\theta,\Delta\phi)-G_{\mathrm{div}}(r,\theta,\Delta\phi)\right]=\lim_{\Delta\phi\to0}\left[\frac{T}{4\pi\alpha}\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\sum_{n=-\infty}^{\infty}\chi_{n\lambda}(\eta,\eta)-\frac{1}{4\pi^2M^2}\frac{1}{(\eta+1)^2\alpha^2\Delta\phi^2\sin^2\theta}-\frac{1}{48\pi^2M^2}\frac{1}{(\eta+1)^2\sin^2\theta}+\frac{1}{48\pi^2M^2}\frac{2}{(\eta+1)^3}+O(\Delta\phi^2)\right]$$ (33)

Now our identity, Eq. (32), is of precisely the correct form to allow us to convert the $1/\Delta\phi^2$ term into an appropriate triple mode-sum. On dividing Eq. (32) by $M^2(\eta+1)^2$ and substituting into Eq. (33), we obtain

$$\langle\hat\varphi^2\rangle_{\mathrm{ren}}=\lim_{\Delta\phi\to0}\left[\frac{T}{4\pi\alpha}\sum_{m=-\infty}^{\infty}e^{im\Delta\phi}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\sum_{n=-\infty}^{\infty}\left\{\chi_{n\lambda}(\eta,\eta)-\frac{\xi}{M^2(\eta+1)^2}I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)\right\}+\frac{T^2}{12}\frac{\xi^2}{M^2(\eta+1)^2}+\frac{1}{48\pi^2M^2}\frac{2}{(\eta+1)^3}+\frac{\xi^2}{M^2(\eta+1)^2}G_{\mathrm{int}}(\xi,\theta,\Delta\phi)+O(\Delta\phi^2)\right].$$ (34)

We can now take the limit inside the sum to get

$$\langle\hat\varphi^2\rangle_{\mathrm{ren}}=\langle\hat\varphi^2\rangle_{\mathrm{sum}}+\langle\hat\varphi^2\rangle_{\mathrm{int}}+\langle\hat\varphi^2\rangle_{\mathrm{analytic}},$$ (35)

where

$$\langle\hat\varphi^2\rangle_{\mathrm{sum}}=\frac{T}{2\pi\alpha}\sum_{m=-\infty}^{\infty}\sum_{l=|m|}^{\infty}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}P^{-|m|/\alpha}_{\lambda}(\cos\theta)^2\left\{\sum_{n=1}^{\infty}\left[\chi_{n\lambda}(\eta,\eta)-\frac{\xi}{M^2(\eta+1)^2}I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)\right]+\frac{1}{2}\left[\chi_{0\lambda}(\eta,\eta)-\frac{\xi}{M^2(\eta+1)^2}\lim_{n\to0}I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)\right]\right\}$$ (36)

$$\langle\hat\varphi^2\rangle_{\mathrm{int}}=-\frac{T\sin(\pi/\alpha)\,\xi^2}{4\pi^2M^2\alpha(\eta+1)^2}\int_0^\infty\frac{\sinh(\kappa R(u))}{R(u)\left(\cosh(\kappa R(u))-1\right)\left(\cosh(u/\alpha)-\cos(\pi/\alpha)\right)}\,du$$ (37)

$$\langle\hat\varphi^2\rangle_{\mathrm{analytic}}=\frac{T^2}{12}\frac{\xi^2}{M^2(\eta+1)^2}+\frac{1}{48\pi^2M^2}\frac{2}{(\eta+1)^3}$$ (38)

where $R(u)=2\xi\sin\theta\cosh(u/2)$. We have used Eqs. (27) and (28) to obtain the expression for $\langle\hat\varphi^2\rangle_{\mathrm{int}}$. We have also explicitly separated out the $n=0$ term. A key feature of our approach is that since we have a triple mode-by-mode subtraction, with the correct choice of $\xi$, we can ensure the convergence of the triple sum, thus making redundant the removal of the ’superficial’ divergence discussed by some authors (Winstanley and Young (2008); Anderson (1989)). Specifically, we choose $\xi=\xi(\eta)$, such that, for a given $n$, the $l$, $m$ mode-sum is regular. We can do this by associating the $l$, $m$ mode-sum with a particular 3D Green’s function by a technique known as dimensional reduction Winstanley and Young (2008), and then we examine the Hadamard singularity structure of this Green’s function. In the Appendix, we show that the condition of regularity of the $l$, $m$ sum can be achieved by taking $\xi$ to be

$$\xi=\frac{M(\eta+1)^2}{(\eta^2-1)^{1/2}}.$$ (39)

The mode-sum now converges for this choice of $\xi$, as we discuss in detail in the following section.
## VII WKB Approximations

In order to analyze the convergence of the mode-sum of Eq. (36), we consider the WKB approximations of both the radial part of the Green’s function and the subtraction terms. We have adapted the WKB prescription given by Howard Candelas and Howard (1984); Winstanley and Young (2008), valid for large $n$ and $\lambda$. Employing a second-order WKB approximation, we have

$$\frac{1}{2|n|M}p_{n\lambda}(\eta)q_{n\lambda}(\eta)\sim\beta^{(0)}_{n\lambda}+\beta^{(1)}_{n\lambda}+\beta^{(2)}_{n\lambda}$$ (40)

where $\beta^{(0)}_{n\lambda}$, $\beta^{(1)}_{n\lambda}$ and $\beta^{(2)}_{n\lambda}$ are the zeroth, first and second order approximants respectively. Each term is written in successive powers of $1/\Psi_{n\lambda}$, where

$$\Psi_{n\lambda}=\left((\lambda+1/2)^2(\eta^2-1)+\omega_n^2\right)^{1/2}$$ (41)

and $\omega_n=n\kappa M(\eta+1)^2$. In terms of $\Psi_{n\lambda}$, the approximants are

$$\beta^{(0)}_{n\lambda}=\frac{1}{2M\Psi_{n\lambda}},\qquad \beta^{(1)}_{n\lambda}=\frac{1}{16M\Psi^3_{n\lambda}}-\frac{\omega_n^2}{8M\Psi^5_{n\lambda}}(2\eta^2-6\eta+7)+\frac{5\omega_n^4}{16M\Psi^7_{n\lambda}}(\eta-2)^2,$$

$$\beta^{(2)}_{n\lambda}=\frac{11+16\eta^2}{256M\Psi^5_{n\lambda}}+\frac{\omega_n^2}{64M\Psi^7_{n\lambda}}(-171+70\eta-88\eta^2+60\eta^3-16\eta^4)+\frac{7\omega_n^4}{128M\Psi^9_{n\lambda}}(666-1020\eta+773\eta^2-320\eta^3+56\eta^4)-\frac{231\omega_n^6}{64M\Psi^{11}_{n\lambda}}(\eta-2)^2(7-6\eta+2\eta^2)+\frac{1155\omega_n^8}{256M\Psi^{13}_{n\lambda}}(\eta-2)^4.$$ (42)

We also require the WKB approximation to the subtraction term,

$$\frac{\xi}{M^2(\eta+1)^2}I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)\sim\gamma^{(0)}_{n\lambda}+\gamma^{(1)}_{n\lambda}+\gamma^{(2)}_{n\lambda}$$ (43)

where

$$\gamma^{(0)}_{n\lambda}=\frac{1}{2M\Psi_{n\lambda}},\qquad \gamma^{(1)}_{n\lambda}=-\frac{\omega_n^2}{4M\Psi^5_{n\lambda}}(\eta^2-1)+\frac{5\omega_n^4}{16M\Psi^7_{n\lambda}}(\eta^2-1),$$

$$\gamma^{(2)}_{n\lambda}=-\frac{\omega_n^2}{4M\Psi^7_{n\lambda}}(\eta^2-1)^2+\frac{49\omega_n^4}{16M\Psi^9_{n\lambda}}(\eta^2-1)^2-\frac{231\omega_n^6}{32M\Psi^{11}_{n\lambda}}(\eta^2-1)^2+\frac{1155\omega_n^8}{256M\Psi^{13}_{n\lambda}}(\eta^2-1)^2.$$ (44)

Immediately, we see that the zeroth order terms are equal and therefore the slowest-decaying term in the mode-sum is proportional to $1/\Psi^3_{n\lambda}$. From Eq. (41), this implies that for large $n$ and $\lambda$, the summand is $O(\lambda/\Psi^3_{n\lambda})$, since the angular functions scale linearly with $\lambda$. Thus, it is now clear from the cancellation of the zeroth order approximations that the mode-sum converges. This proof of convergence is considerably simpler than analogous proofs in the standard approach (e.g., Candelas and Howard (1984); Winstanley and Young (2008)). However, though we have shown that the mode-sum converges, the convergence is extremely slow (as is usually the case with such calculations) and is not absolute, as discussed in Sec. III.
A standard trick for speeding up the convergence, in order to make the mode-sum calculation amenable, is to subtract and add the second order WKB approximations Candelas and Howard (1984). For the terms, the subtraction term is exactly the zeroth order approximant to the radial Green's function, and so we need only subtract and add and terms in this case. We now have the following expression for the mode-sum

$$\langle\hat\varphi^2\rangle_{\mathrm{sum}}=\frac{T}{2\pi\alpha}(\Sigma+\Lambda) \tag{45}$$

where is the slowly converging part of the mode-sum. We further write

$$\Sigma=\Sigma_{1}-\Sigma_{2}+\tfrac{1}{2}\Sigma_{3} \tag{46}$$

where

$$\Sigma_{1}=\sum_{n=1}^{\infty}\sum_{l=0}^{\infty}\sum_{m=-l}^{l}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}\bigl[P^{-|m|/\alpha}_{\lambda}(\cos\theta)\bigr]^{2}\Bigl\{\frac{p_{n\lambda}(\eta)q_{n\lambda}(\eta)}{2|n|M}-\beta^{(0)}_{n\lambda}-\beta^{(1)}_{n\lambda}-\beta^{(2)}_{n\lambda}\Bigr\}$$

$$\Sigma_{2}=\sum_{n=1}^{\infty}\sum_{l=0}^{\infty}\sum_{m=-l}^{l}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}\bigl[P^{-|m|/\alpha}_{\lambda}(\cos\theta)\bigr]^{2}\Bigl\{\frac{\xi I_{\lambda+1/2}(n\kappa\xi)K_{\lambda+1/2}(n\kappa\xi)}{M^{2}(\eta+1)^{2}}-\gamma^{(0)}_{n\lambda}-\gamma^{(1)}_{n\lambda}-\gamma^{(2)}_{n\lambda}\Bigr\}$$

$$\Sigma_{3}=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}\bigl[P^{-|m|/\alpha}_{\lambda}(\cos\theta)\bigr]^{2}\Bigl\{\frac{P_{\lambda}(\eta)Q_{\lambda}(\eta)}{M}-\beta^{(0)}_{0\lambda}-\beta^{(1)}_{0\lambda}-\beta^{(2)}_{0\lambda}\Bigr\}. \tag{47}$$

We have re-written the , sum and swapped the order of summation with the -sum, which presents no problem for these sums since they are all rapidly and absolutely convergent. In fact, the summands of and are for large and . The summand of is for large . The slowly convergent term, , is given by

$$\Lambda=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}(2\lambda+1)\frac{\Gamma(\lambda+|m|/\alpha+1)}{\Gamma(\lambda-|m|/\alpha+1)}\bigl[P^{-|m|/\alpha}_{\lambda}(\cos\theta)\bigr]^{2}\Bigl\{\frac{\beta^{(1)}_{0\lambda}}{2}+\frac{\beta^{(2)}_{0\lambda}}{2}+\sum_{n=1}^{\infty}\bigl(\beta^{(1)}_{n\lambda}-\gamma^{(1)}_{n\lambda}+\beta^{(2)}_{n\lambda}-\gamma^{(2)}_{n\lambda}\bigr)\Bigr\}.$$

In order to speed up the convergence of this sum, we can convert the -sum here to an integral using a modified version of the Plana-Abel sum formula.

## VIII Modified Plana-Abel Sum Formula

We begin by re-writing the difference of the WKB approximants of order () as M. Casals et al. (2009)

$$\beta^{(i)}_{n\lambda}-\gamma^{(i)}_{n\lambda}=\sum_{j=0}^{2i}\frac{4^{2i+1}C_{ij}(\eta)}{(\eta+1)^{4i+2}}\frac{n^{2j}}{[\Omega^{2}+n^{2}]^{i+j+1/2}} \tag{49}$$

where is given by

$$\Omega=\frac{4(\lambda+1/2)(\eta^{2}-1)^{1/2}}{(\eta+1)^{2}} \tag{50}$$

and the coefficients are tabulated in Table 1. We now convert the -sum above to an integral using a modification of the Plana-Abel sum formula Fialkovsky (2008); M. Casals et al. (2009).
In our particular case, we have

$$\sum_{n=1}^{\infty}\frac{n^{2j}}{[\Omega^{2}+n^{2}]^{i+j+1/2}}=-\frac{\delta_{j0}}{2\Omega^{2i+1}}+\int_{0}^{\infty}\frac{n^{2j}}{[\Omega^{2}+n^{2}]^{i+j+1/2}}\,dn+\frac{2(-1)^{i+j}}{\sqrt{\pi}\,\Gamma(i+j+1/2)}\int_{\Omega}^{\infty}\frac{h^{(i+j)}(s)}{(s-\Omega)^{1/2}}\,ds \tag{51}$$

where

$$h(s)=\frac{(se^{i\pi/2})^{2j}}{(s+\Omega)^{i+j+1/2}}\,\frac{1}{e^{2\pi s}-1}. \tag{52}$$

Furthermore, the -integration of Eq. (51) can be done explicitly using

$$\int_{0}^{\infty}\frac{n^{2j}}{[\Omega^{2}+n^{2}]^{i+j+1/2}}\,dn=\frac{1}{\Omega^{2i}}\frac{\Gamma(i)\Gamma(j+1/2)}{2\Gamma(i+j+1/2)}. \tag{53}$$

Substituting Eq. (53) into Eq. (51), we get

$$\sum_{n=1}^{\infty}\bigl(\beta^{(i)}_{n\lambda}-\gamma^{(i)}_{n\lambda}\bigr)=-\frac{4^{2i+1}C_{i0}(\eta)}{2(\eta+1)^{4i+2}\Omega^{2i+1}}+4^{2i}$$
# Wave - Class 11 Notes | EduRev

What is a wave?

Ref: https://edurev.in/question/734256/what-is-wave-Related-Waves-Transverse-and-Longitu

## Wave

A wave is a vibratory disturbance in a medium which carries energy from one point to another point without any actual movement of the medium. There are three types of waves:

1. Mechanical Waves: waves which require a material medium for their propagation, e.g., sound waves, water waves.
2. Electromagnetic Waves: waves which do not require a material medium for their propagation, e.g., light waves, radio waves.
3. Matter Waves: waves associated with electrons, protons and other fundamental particles. These waves are commonly used in modern technology but are unfamiliar to us.

## Nature of Waves

(i) Transverse waves: a wave in which the particles of the medium vibrate at right angles to the direction of propagation of the wave. These waves travel in the form of crests and troughs.

(ii) Longitudinal waves: a wave in which the particles of the medium vibrate in the same direction in which the wave is propagating. These waves travel in the form of compressions and rarefactions.

## Some Important Terms of Wave Motion

(i) Wavelength: the distance between two nearest points in a wave which are in the same phase of vibration is called the wavelength (λ).

(ii) Time Period: the time taken to complete one vibration is called the time period (T).

(iii) Frequency: the number of vibrations completed in one second is called the frequency of the wave. Its SI unit is the hertz (Hz).

(iv) Velocity of Wave or Wave Velocity: the distance travelled by a wave in one second is called the velocity of the wave (v).
The relation among velocity, frequency and wavelength of a wave is given by v = fλ.

(v) Particle Velocity: the velocity of the particles executing SHM is called particle velocity.
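The relation v = fλ can be checked with a short computation; the numbers below (a 256 Hz tone with a 1.33 m wavelength in air) are illustrative values, not taken from these notes:

```python
def wave_velocity(frequency_hz, wavelength_m):
    """Wave velocity from v = f * lambda."""
    return frequency_hz * wavelength_m

# A ~256 Hz sound wave with a 1.33 m wavelength travels at about 340 m/s,
# roughly the speed of sound in air.
v = wave_velocity(256.0, 1.33)
```

The same relation rearranges to f = v/λ or λ = v/f, so any one quantity follows from the other two.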
# Show Reference: "Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning" Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning PLoS ONE, Vol. 6, No. 7. (5 July 2011), e21575, doi:10.1371/journal.pone.0021575 by Thomas H. Weisswange, Constantin A. Rothkopf, Tobias Rodemann, Jochen Triesch @article{weisswange-et-al-2011, abstract = {Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference.}, author = {Weisswange, Thomas H. and Rothkopf, Constantin A. and Rodemann, Tobias and Triesch, Jochen}, day = {5}, doi = {10.1371/journal.pone.0021575}, journal = {PLoS ONE}, month = jul, number = {7}, pages = {e21575+}, posted-at = {2011-08-18 09:25:07}, priority = {2}, publisher = {Public Library of Science}, title = {Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning}, url = {http://dx.doi.org/10.1371/journal.pone.0021575}, volume = {6}, year = {2011} } See the CiteULike entry for more info, PDF links, BibTex etc. The theoretical accounts of multi-sensory integration due to Beck et al. 
and Ma et al. do not learn and leave little room for learning. Thus, they fail to explain an important aspect of multi-sensory integration in humans. Weisswange et al.'s model does not reproduce population coding. Reward-mediated learning has been demonstrated in adaptation of orienting behavior. Possible neurological correlates of reward-mediated learning have been found. Reward-mediated learning is said to be biologically plausible. Weisswange et al.'s model uses a softmax function to normalize the output. Weisswange et al. distinguish between two strategies for Bayesian multisensory integration: model averaging and model selection. The model averaging strategy computes the posterior probability for the position of the signal source, taking into account the possibility that the stimuli had the same source and the possibility that they had two distinct sources. The model selection strategy computes the most likely of these two possibilities. This has been called causal inference. Weisswange et al. model learning of multisensory integration using reward-mediated / reward-dependent learning in an ANN, a form of reinforcement learning. They model a situation similar to the experiments due to Neil et al. and Körding et al., in which a learner is presented with visual, auditory, or audio-visual stimuli. In each trial, the learner is given reward depending on the accuracy of its response. In an experiment where stimuli could be caused by the same or different sources, Weisswange et al. found that their model behaves similarly to both model averaging and model selection, although slightly more similarly to the former. Weisswange et al. apply the idea of Bayesian inference to multi-modal integration and action selection. They show that online reinforcement learning can effectively train a neural network to approximate a Q-function predicting the reward in a multi-modal cue integration task.
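The softmax normalization mentioned above can be sketched generically; this is a standard numerically stable implementation, not the specific network code used by Weisswange et al.:

```python
import math

def softmax(scores):
    """Normalize a list of real-valued outputs into a probability distribution."""
    m = max(scores)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three output units with raw activations; softmax turns them into
# action-selection probabilities that sum to one.
probs = softmax([2.0, 1.0, 0.1])
```

Because the exponential is monotone, the ordering of the activations is preserved in the resulting probabilities.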
# Changes in Earth's orbital and rotation speeds

What are the Earth's maximum and minimum orbital speeds around the Sun? Is there any way to calculate orbital speed at any point of Earth's orbit? Is there a way to relate this to the calendar? Does the change in speed during an orbit around the Sun affect Earth's revolution time around its own axis? (24 hours slightly more or less on day to day basis)

• Just to clarify: when you write "earth's speed of rotation", I assume you mean the speed at which Earth is orbiting the sun, rather than the speed at which it rotates around its own axis. Is this correct? – Pont Mar 9 '18 at 17:26
• As far as I know rotation speed is fixed, but is getting slowed year by year. – Santiago Mar 9 '18 at 18:04
• Now that I'm sure what this is about, I've reworded this for readability and to use standard terminology. But feel free to revert the edit if you think I've changed the meaning. (Cc @Pont) – Spencer Mar 10 '18 at 15:22
• @Spencer, I ended up changing my mind, but I am nervous that you lose parts of the question that PSA intended (like the table), and by making it sharper technically, you may lose understanding from more people at a novice level. The oddest part is how you suggest you suddenly have gained new insight, despite the fact PSA has made no comments/changes. I'm very hesitant to make edits to questions/answers, as others like you put in the time, but do very much hope not to lose the intentions of the user's question. :-) – JeopardyTempest Mar 10 '18 at 20:07

The Earth moves faster around the Sun when it is near its perihelion (the closest point of its orbit to the Sun), and it moves slower when it is further away (aphelion), just as Kepler realized quite a while ago when enunciating his Second Law of Planetary Motion (equal areas in equal times). There are many ways to write a formula to calculate Earth's speed around the Sun.
But for your question I think this one is simple and general enough:

$$v= \sqrt{G\,M_S\left(\frac{2}{r}-\frac{1}{a}\right)}$$

where $v$ is Earth's speed, $G$ is the gravitational constant, $M_S$ is the mass of the Sun (this equation assumes that the mass of Earth is negligible compared to the mass of the Sun, which is a very good approximation), $a$ is the semi-major axis of Earth's orbit (which sets the size of the ellipse), and $r$ is the distance from the Earth to the Sun. The closest and furthest distances are, respectively, about 147.1 million kilometers (around January 3) and about 152.1 million kilometers (around July 4). Plugging those numbers into the equation, you get a maximum speed of 30.3 km/s and a minimum of 29.3 km/s. In contrast, the speed due to Earth's rotation around its own axis is at most (i.e. at the equator) about 0.5 km/s.

For any given day, the calculation of the distance to the Sun is complicated, but websites like this one provide a real-time value you can use in the above formula [that site also provides the speed right away!]. These values are basically the same from one year to another. However, they do slowly change as the eccentricity of Earth's orbit gradually varies over a ~400,000-year cycle, as described by the Milankovitch cycles.

Now, for Earth's speed of rotation about its own axis, the value is pretty much the same each day; variations in day length are on the order of a fraction of a millisecond. Part of that variation is periodic and more or less predictable, and another part comes from other, less predictable factors. But for all practical purposes the length of the day is constant through the year. The following figure shows those tiny variations in day length since the early 1960's: (image from Wikipedia commons, originally here)

But as a consequence of these tiny variations, almost every year one or two leap seconds are added to our clocks to keep them aligned with the rotation of the Earth.
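These numbers are easy to reproduce; here is a small script using the vis-viva equation with approximate values for G, the solar mass, and the perihelion/aphelion distances:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # mass of the Sun, kg (approximate)

def orbital_speed(r, a):
    """Vis-viva equation: v = sqrt(G*M_S*(2/r - 1/a)), with r and a in meters."""
    return math.sqrt(G * M_SUN * (2.0 / r - 1.0 / a))

r_perihelion = 147.1e9                  # m, around January 3
r_aphelion = 152.1e9                    # m, around July 4
a = (r_perihelion + r_aphelion) / 2     # semi-major axis of the ellipse

v_max = orbital_speed(r_perihelion, a)  # fastest, at perihelion
v_min = orbital_speed(r_aphelion, a)    # slowest, at aphelion
```

With these inputs, v_max comes out near 30.3 km/s and v_min near 29.3 km/s, matching the figures quoted above.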
In the long run (millennia), Earth's days are getting longer (~0.017 milliseconds per year). Some of the factors involved were discussed in this question. Finally, to address whether changes in orbital speed affect rotational speed: I don't think so. From what I understand, none of those small changes in the length of the day is a direct consequence of changes in the orbital speed. However, although there are no changes in the speed of rotation, what we generally understand as a day (i.e. the time between two consecutive sunrises, sunsets or noons) is not controlled only by the rotation of the Earth around its axis. In fact, one out of the ~365 days of the year is a consequence of Earth's revolution around the Sun, not its rotation. That one-day contribution gives rise to the difference between a day and a sidereal day, and to the variations in the duration of the apparent solar day, variations that are directly controlled by Earth's orbital speed and can change the time between solar noons by up to about 30 seconds over a year.

• Nice answer (like all your answers). I wonder if OP's sub-question about changes in Earth's "revolution time" might be asking about variations in the length of the apparent solar day, in which case, of course, the orbital speed does have an effect. Might be worth mentioning in any case. – Pont Mar 9 '18 at 21:51
• @Pont You are right, that would be a different thing. I'll add a sentence pointing that out. – Camilo Rada Mar 9 '18 at 21:53
• @Pont Done, please let me know if that addition covers the case you meant to point out. – Camilo Rada Mar 9 '18 at 22:12
• Yes, that's exactly what I meant. – Pont Mar 9 '18 at 22:36
• Aphelion and perihelion are swapped in the answer. – Erin Anne Mar 10 '18 at 2:12
# Basic Conservation of Energy

1. Oct 7, 2013

### anthonych414

1. The problem statement, all variables and given/known data
A 2-kg block is attached to a horizontal ideal spring with a spring constant of 200 N/m. When the spring has its equilibrium length, the block is given a speed of 5 m/s. What is the maximum elongation of the spring?

2. Relevant equations
Conservation of mechanical energy, PE_spring = 0.5kx^2, KE = 0.5mv^2

3. The attempt at a solution
It seems like a fairly simple question to me. Applying conservation of mechanical energy at equilibrium and at maximum elongation, we get 0 + 0.5kx^2 = 0 + 0.5mv^2, so x = √(mv^2/k) = 0.5 meters. However, the solution says the answer should be 5 meters. Can anyone tell me what I'm missing here?

2. Oct 7, 2013

### Staff: Mentor

Your result looks good. Maybe a typo in the question? Perhaps they meant for the initial velocity to be 50 m/s.
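The algebra above is easy to check numerically, setting the initial kinetic energy equal to the spring's potential energy at maximum elongation:

```python
import math

m = 2.0    # kg
k = 200.0  # N/m
v = 5.0    # m/s

# 0.5*m*v**2 = 0.5*k*x**2  =>  x = sqrt(m*v**2 / k)
x = math.sqrt(m * v**2 / k)   # 0.5 m, confirming the poster's result
```

Conversely, an elongation of 5 m would require an initial speed of 50 m/s, consistent with the mentor's guess about a typo.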
# Embedding theorems for decomposition spaces with applications to wavelet coorbit spaces

• Einbettungssätze für Dekompositions-Räume mit Anwendungen auf Wavelet-Coorbit-Räume

Voigtlaender, Felix; Führ, Hartmut (Thesis advisor); Feichtinger, Hans G. (Thesis advisor); Rauhut, Holger (Thesis advisor)

Aachen : Publikationsserver der RWTH Aachen University (2016)

Dissertation / PhD Thesis

Aachen, Techn. Hochsch., Diss., 2015

Abstract

The main topic of this thesis is the development of criteria for the (non-)existence of embeddings between decomposition spaces. A decomposition space is defined in terms of

- a covering $\mathcal{Q}=(Q_{i})_{i\in I}$ of (a subset of) the frequency space $\mathbb{R}^{d}$,
- an integrability exponent $p$, and
- a certain discrete sequence space $Y$ on the index set $I$.

The decomposition space norm of a distribution $f$ is then computed by decomposing the frequency content of $f$ according to the covering $\mathcal{Q}$, using a suitable partition of unity. Each of the localized pieces is measured in the Lebesgue space $L^{p}$, and the contributions of the individual pieces are aggregated using the discrete sequence space norm $\Vert \cdot\Vert_{Y}$. Given two decomposition spaces, it is of interest to know whether there is an embedding between these two spaces. Since both decomposition spaces are defined only in terms of the respective coverings, weights and discrete sequence spaces, it should be possible to decide the existence of the embedding based only on these quantities. Our findings will show that this is not only possible, but that the resulting criteria involve only discrete combinatorial considerations. In particular, no knowledge of Fourier analysis is needed for the application of these criteria. Finally, our results completely characterize the existence of the desired embedding under mild assumptions on the two coverings and sequence spaces. We apply our findings to a large number of concrete examples.
Among others, we consider embeddings between

- $\alpha$-modulation spaces,
- homogeneous and inhomogeneous Besov spaces, and
- shearlet-type coorbit spaces.

In all cases, the known results for embeddings between these spaces turn out to be special cases of our criteria; in some cases, our new approach even yields stronger results than those previously known. For the discussion of shearlet-type coorbit spaces, we employ the second main result of this thesis, which shows that the Fourier transform induces a natural isomorphism between a large class of wavelet coorbit spaces and certain decomposition spaces. This further emphasizes the scope of our embedding results for decomposition spaces.
# Math Help - prove A∩B = B∩A

1. ## prove A∩B = B∩A

Let the sets R, S be defined as the following finite sets:
R = {x ∈ Z+ | x is divisible by 2 but less than 20}
S = {y ∈ Z+ | y is divisible by 3 but less than 30}
For the sets R and S prove the following set identities:
i) A∩B = B∩A
ii) A−B ≠ B−A

2. Hi

Is $R=\{ y\in\mathbb{N} \,|\, y\text{ divisible by }2\text{ and }y\leq20\}$ ?

Originally Posted by naseerhaider
i) A∩B = B∩A

This is a general result: $A\cap B$ is the set which contains all the elements which lie in both $A$ and $B$. What does it mean for $B\cap A$?

ii) A−B ≠ B−A

To show that two sets are not equal, you only need to find an element which is in only one of the two sets. Can you find an element of $R-S$ which is not in $S-R$? (To help you, you can write down the elements of $R$, of $S$, of $R-S$ and of $S-R$.)

Hope that helps

3. Originally Posted by naseerhaider
Let the sets R, S be defined as the following finite sets:
R = {x ∈ Z+ | x is divisible by 2 but less than 20}
S = {y ∈ Z+ | y is divisible by 3 but less than 30}
For the sets R and S prove the following set identities:
i) A∩B = B∩A
ii) A−B ≠ B−A

A = {2, 4, 6, 8, 10, 12, 14, 16, 18}
B = {3, 6, 9, 12, 15, 18, 21, 24, 27}
Writing LHS/RHS for the left/right hand side, in (i) we get LHS = {6, 12, 18} and RHS = {6, 12, 18}, so the two sides are equal.
A−B = {2, 4, 8, 10, 14, 16}
B−A = {3, 9, 15, 21, 24, 27}
These two differences are not equal, so the second identity is also proved.
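The element listings are easy to verify mechanically; a short Python check using the sets as defined in the problem:

```python
# R: positive integers divisible by 2 and less than 20
R = {x for x in range(1, 20) if x % 2 == 0}
# S: positive integers divisible by 3 and less than 30
S = {y for y in range(1, 30) if y % 3 == 0}

intersection_commutes = (R & S == S & R)   # holds for any two sets
differences_differ = (R - S != S - R)      # holds for these particular sets
```

Intersection is commutative by definition, so the first check can never fail; the second depends on the specific sets chosen.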
# Who started calling the matrix multiplication "multiplication"? As I searched for linear algebra, I found it odd that the linear map composition corresponds to the multiplication of matrices. Considering the intuition that the repetition of addition is multiplication, and the definition of a module, I thought it was better to call the scalar multiplication matrix multiplication. Of course, the scalar multiplication can be defined for any field, and people would have liked to call the operation between matrices "matrix multiplication". But I feel that matrix composition is a more logical term. All I'd like to ask is about history. Was the multiplication of matrices "multiplication" from the beginning? The same person who introduced it, Cayley. Sylvester first used the term "matrix" (womb in Latin) for an array of numbers in 1848, but did not do much with it. Cayley started developing matrix algebra in 1855 and summarized his theory in A Memoir on the Theory of Matrices (1858). In the opening paragraphs he writes: "It will be, seen that matrices (attending only to those of the same order) comport themselves as single quantities; they may be added, multiplied or compounded together, &c.: the law of the addition of matrices is precisely similar to that for the addition of ordinary algebraical quantities; as regards their multiplication (or composition), there is the peculiarity that matrices are not in general convertible; it is nevertheless possible to form the powers (positive or negative, integral or fractional) of a matrix, and thence to arrive at the notion of a rational and integral function, or generally of any algebraical function of a matrix". Later, he first defines addition and multiplication by scalars, and then, symbolically introduces what is now called "row-column rule": "...as the rule for the multiplication or composition of two matrices. 
It is to be observed, that the operation is not a commutative one; the component matrices may be distinguished as the first or further component matrix, and the second or nearer component matrix, and the rule of composition is as follows, viz. any line of the compound matrix is obtained by combining the corresponding line of the first or further component matrix successively with the several columns of the second or nearer compound matrix." "Composition" and "multiplication" are used interchangeably throughout the paper. He goes on to define "inverse or reciprocal", and prove what is now called the Cayley-Hamilton theorem. Cayley's use of "multiplication" should not be too surprising. Algebraic systems alternative to the usual arithmetic were actively developed at the time, and the words "addition" and "multiplication" came to be used abstractly. This is exactly Cayley's context in defining his algebra of matrices and discussing associative and other laws by analogy to the arithmetical ones. Hamilton did it with quaternions in 1843, although part of his motivation was composing rotations. Boole in his Laws of Thought (1854) did not hesitate to call logical conjunction "multiplication", explaining "it is not affirmed that the process of multiplication in Algebra, of which the fundamental law is expressed by the equation $$xy=yx$$, possesses in itself any analogy with that process of logical combination which $$xy$$ has been made to represent above; but only that if the arithmetical and the logical process are expressed in the same manner, their symbolical expressions will be subject to the same formal law. The evidence of that subjection is in the two cases quite distinct". • Calling logical conjunction “multiplication” is straightforward indeed – the booleans with $(\vee)$ form after all a commutative semigroup, just like the integers do with $(\cdot)$. 
However, matrix multiplication is merely a semigroupoid operation, that's quite a different category (pun not intended). Dec 15, 2019 at 1:09
• @leftaroundabout Cayley originally introduces it only for square matrices, and discusses algebraic laws in that context. It is extended to rectangular ones later in the paper. And conjunction is idempotent and applies to things with no connection to numbers, so in other ways more dissimilar than matrix multiplication. Dec 15, 2019 at 1:25
• No. To the body question, it is well known that the product of matrices was originally called combination of "substitutions" (Gauß 1801), then resultant of "systems" (Cauchy 1815). Apr 29, 2020 at 14:41
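Cayley's row-column rule, and the fact that the resulting product represents composition of the underlying linear substitutions, can be illustrated with a short sketch (a generic example, not taken from the sources above):

```python
def matmul(A, B):
    """Cayley's row-column rule: entry (i, j) combines row i of A with column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(M, v):
    """Apply the linear substitution M to the vector v."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
x = [5, 7]

# Multiplication is composition: (AB)x == A(Bx) ...
assert apply(matmul(A, B), x) == apply(A, apply(B, x))
# ... and, as Cayley noted, it is "not in general convertible" (not commutative):
assert matmul(A, B) != matmul(B, A)
```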
A prediction model capable of aggregation with recursive unbiased minimum variance estimation algorithms based on the Kalman filter technique has been formulated and applied for predicting monthly flows such that their summation is equal to annual flow in the same year. The model represents a discrete linear stochastic system where the states are defined as monthly flows in addition to the measurement equation with time invariant measurement matrix and annual flow measurement. Provided that observed or generated annual flows are available then the proposed model can be employed to predict monthly flows so that their aggregation yields the total annual flow. This content is only available as a PDF.
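The defining constraint — monthly predictions whose sum equals the annual flow — can be illustrated with a toy additive adjustment (only a sketch of the constraint, not the paper's Kalman-filter estimator; `aggregate_adjust` and the numbers below are invented for illustration):

```python
def aggregate_adjust(monthly_pred, annual_obs):
    # Spread the annual residual evenly over the monthly predictions
    # so that the adjusted months sum exactly to the observed annual flow.
    residual = annual_obs - sum(monthly_pred)
    return [m + residual / len(monthly_pred) for m in monthly_pred]

months = [10.0, 12.0, 8.0, 9.0, 11.0, 14.0,
          13.0, 9.0, 7.0, 8.0, 10.0, 9.0]      # raw predictions, sum = 120
adjusted = aggregate_adjust(months, annual_obs=126.0)
assert abs(sum(adjusted) - 126.0) < 1e-9       # aggregation constraint holds
```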
http://math.stackexchange.com/questions/200974/why-for-all-open-interval-in-mathbbr1-it-must-contains-both-rationals-and?answertab=active
# Why must every open interval in $\mathbb{R^1}$ contain both rational and irrational numbers?

As in the title: why must every open interval in $\mathbb{R^1}$ contain both rational and irrational numbers? Is there a proof? - This must be proved in two stages. First, given an open interval $(a,b)$, we must show that it contains some rational number $q$; then we must show that it contains some irrational number $r$. @AsinglePANCAKE's answer already has a link to the first stage, so consider this done: there is $q\in{\Bbb Q}\cap(a,b)$. Now, if $b\in{\Bbb Q}$, then let $r=q+(b-q)/\sqrt 2$, which is both irrational and inside $(a,b)$. Alternatively, if $b$ is irrational, then let $r=(q+b)/2$, which is irrational and inside $(a,b)$. So we always have a rational $q$ and an irrational $r$ inside $(a,b)$.
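The two constructions in the answer are easy to sanity-check numerically (a sketch; only interval membership is machine-checkable here — irrationality itself is a symbolic fact):

```python
import math

a, b = 0.0, 1.0
q = 0.5                            # a rational in (a, b)

# Case "b rational": r = q + (b - q)/sqrt(2) is irrational and lies in (a, b)
r = q + (b - q) / math.sqrt(2)
assert a < r < b

# Case "b irrational": with b2 = pi/4 and the rational q = 0.5 in (a, b2),
# r2 = (q + b2)/2 is irrational and lies in (a, b2)
b2 = math.pi / 4
r2 = (q + b2) / 2
assert a < r2 < b2
```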
https://googology.wikia.org/wiki/Tetration?oldid=310554
Tetration, also known as hyper4, superpower, superexponentiation, superdegree, powerlog, or power tower,[1] is a binary mathematical operator (that is to say, one with just two inputs), defined as $$^yx = x^{x^{x^{\cdot^{\cdot^{\cdot}}}}}$$ with $$y$$ copies of $$x$$. In other words, tetration is repeated exponentiation. Formally, this is $^0x=1$ $^{n + 1}x = x^{^nx}$ where $$n$$ is a nonnegative integer.

Tetration is the fourth hyper operator, and the first hyper operator not appearing in mainstream mathematics. When repeated, it is called pentation. If $$c$$ is a non-trivial constant, the function $$a(n) = {}^nc$$ grows at a similar rate to $$f_3(n)$$ in FGH. Daniel Geisler has created a website, IteratedFunctions.com (formerly tetration.org), dedicated to the operator and its properties.

## Basis

Addition is defined as repeated counting: $x + y = x + \underbrace{1 + 1 + \ldots + 1 + 1}_y$

Multiplication is defined as repeated addition: $x \times y = \underbrace{x + x + \ldots + x + x}_y$

Exponentiation is defined as repeated multiplication: $x^y = \underbrace{x \times x \times \ldots \times x \times x}_y$

Analogously, tetration is defined as repeated exponentiation: $^yx = \underbrace{x^{x^{x^{.^{.^.}}}}}_y$

But since exponentiation is not an associative operator (that is, $$a^{b^{c}}$$ is generally not equal to $$\left(a^b\right)^c = a^{bc}$$), we can also group the exponentiation from bottom to top, producing what Robert Munafo calls the hyper4 operator, written $$x_④y$$. $$x_④y$$ reduces to $$x^{x^{y - 1}}$$, which is not as mathematically interesting as the usual tetration. This is equal to $$x \downarrow\downarrow y$$ in down-arrow notation.

### Notations

Tetration was independently invented by several people, and due to lack of widespread use it has several notations:

• $$^yx$$ is pronounced "to-the-$$y$$ $$x$$" or "$$x$$ tetrated to $$y$$."
The notation is due to Rudy Rucker, and is most often used in situations where none of the higher operators are called for. • Robert Munafo uses $$x^④y$$, the hyper4 operator. • In arrow notation it is $$x \uparrow\uparrow y$$, nowadays the most common way to denote tetration. • In chained arrow notation it is $$x \rightarrow y \rightarrow 2$$. • In BEAF it is $$\{x, y, 2\}$$[2]. • Using Bowers's extended operators, it is $$x\ \{2\}\ y$$. • In Hyper-E notation it is E[x]1#y (alternatively x^1#y). • In plus notation it is $$x ++++ y$$. • In star notation (as used in the Big Psi project) it is $$x *** y$$.[3] • An exponential stack of n 2's was written as E*(n) by David Moews, the man who held Bignum Bakeoff. • Harvey Friedman uses $$x^{[y]}$$. ## Properties Tetration lacks many of the symmetrical properties of the lower hyper-operators, so it is difficult to manipulate algebraically. However, it does have a few noteworthy properties of its own. ### Power identity It is possible to show that $${^ba}^{^ca} = {^{c + 1}a}^{^{b - 1}a}$$: ${^ba}^{^ca} = (a^{^{b - 1}a})^{(^ca)} = a^{^{b - 1}a \cdot {}^ca} = a^{^ca \cdot {}^{b - 1}a} = (a^{^ca})^{^{b - 1}a} = {^{c + 1}a}^{^{b - 1}a}$ For example, $${^42}^{^22} = {^32}^{^32} = 2^{64}$$. ### Moduli of power towers Be careful that there were many unsourced (and perhaps wrong) statements on last digits of tetration in this wiki. For example, the last $$d$$ digits $$N(y)$$ of $$^yx$$ in base $$b$$ was believed to be computed by the following wrong recursive formula in this community up to December in 2019: \begin{eqnarray*} N(m) = \left\{ \begin{array}{ll} 1 & (m=0) \\ x^{N(m-1)} \mod{b^d} & (m>0) \end{array} \right. \end{eqnarray*} Here, $$s \mod{t}$$ denotes the remainder of the division of $$s$$ by $$t$$. 
For example, we have the following counterexamples: • If $$b = 10$$ and $$d = 1$$, then the formula computes the last 1 digit of $$2 \uparrow \uparrow 4$$ in base $$10$$ as $$2^{16 \mod{10}} \mod{10} = 2^6 \mod{10} = 64 \mod{10} = 4$$, while the last 1 digit of $$2 \uparrow \uparrow 4 = 2^{16} = 65536$$ in base $$10$$ is $$6$$. • If $$b = 10$$ and $$d = 2$$, then the formula computes the last 2 digits of $$100 \uparrow \uparrow 2$$ in base $$10$$ as $$100^{100 \mod{10^2}} \mod{10^2} = 100^0 \mod{10^2} = 1$$ (i.e. $$01$$), while the last 2 digits of $$100 \uparrow \uparrow 2 = 100^{100} = 1000 \cdots 000$$ in base $$10$$ is $$00$$. • If $$b = 15$$ and $$d = 1$$, then computes the last 1 digit of $$2 \uparrow \uparrow 4$$ in base $$15$$ as $$2^{16 \mod{15}} \mod{15} = 2^1 \mod{15} = 2$$, while the last 1 digit of $$2 \uparrow \uparrow 4 = 65536$$ in base $$15$$ is $$1$$. The method works when $$x$$ is coprime with $$b$$ and $$b^d$$ is divisible by the least common multiple of $$\phi(p^d) = (p-1)p^{d-1}$$ for prime factors $$p$$ of $$b$$ by Euler's theorem and Chinese remainder theorem, where $$\phi$$ denotes Euler function. For example, when $$b = 10$$ and $$d \geq 3$$, then the condition holds. It is obvious that it does not hold when $$b$$ is an odd prime number. Even if $$b$$ is not a prime number, the method does not necessarily hold as we have seen the counterexamples $$(x,b,d) = (2,10,1)$$, where $$b^d = 10$$ is not divisible by the least common multiple $$4$$ of $$\phi(2^d) = 1$$ and $$\phi(5^d) = 4$$, and $$(x,b,d) = (2,15,1)$$, where $$b^d = 15$$ is not divisible by the least common multiple $$4$$ of $$\phi(3^d) = 2$$ and $$\phi(5^d) = 4$$. In general, the condition never holds when $$b$$ is an odd number. In particular, the method works when $$x$$ is coprime with $$b$$ and one of the following holds: • $$b = 2^h$$ for some $$h \geq 1$$. • $$b = 2^h \times 3^i$$ for some $$h,i \geq 1$$. • $$b = 2^h \times 5^i$$ for some $$h,i \geq 1$$, and $$dh \geq 2$$. 
• $$b = 2^h \times 3^i \times 5^j$$ for some $$h,i,j \geq 1$$, and $$dh \geq 2$$. • $$b = 2^h \times 3^i \times 7^j$$ for some $$h,i,j \geq 1$$. If the method works, then the modular exponentiation can be computed very quickly using several algorithms such as right-to-left binary method. The condition that $$x$$ is coprime to $$b$$ is important as we have the counterexample $$(x,b,d) = (100,10,2)$$, although it is often carelessly dropped in this wiki. Further, although it is sometimes believed that the recursive formula works when $$d$$ is sufficiently large, $$x = b^d$$ is always a counterexample of the recursive formula for any $$b \geq 2$$ and $$d \geq 1$$ because $$^2 x \mod{b^d} = 0$$ and $$x^{x \mod{b^d}} \mod{b^d} = x^0 \mod {b^d} = 1 \mod{b^d} = 1$$. Even if $$x$$ is not necessarily coprime with $$b$$, the last $$1$$ digit $$D_b(x,m)$$ of $$x^m$$ in base $$b$$ for an $$m \in \mathbb{N}$$ can be quite easily computed in the case $$b$$ is squarefree. First, consider the case where $$b$$ is a prime number $$p$$. Then $$D_p(x,m)$$ is computed in the following recursive way by Fermat's little theorem: 1. Denote by $$S_p(m)$$ the sum of all digits of $$m$$ in base $$p$$. 2. Then $$D_p(x,m) = D_p(x \mod{p},S_p(m))$$. Since $$(S_p^n(m))_{n=0}^{\infty}$$ is a decreasing sequences converges to a non-negative integer $$S_p^{\infty}(m)$$ smaller than $$p$$, the computation of $$D_p(x,m)$$ is quickly reduced to the computation of $$D_p(x \mod{p},S_p^{\infty}(m)) \in \{(x \mod{p})^i \mod{p} \mid 0 \leq i < p\}$$. Now consider the general case. Then $$D_b(m)$$ is computed in the following recursive way by Chinese remainder theorem: 1. Denote by $$p_0,\ldots,p_k$$ the enumeration of prime factors of $$b$$. 2. If $$k = 0$$, then we have done the computation of $$D_b(x,m) = D_{p_0}(x,m)$$ by the algorithm above. 3. Suppose $$k > 0$$. 1. Put $$b' = p_0 \cdots p_{k-1}$$. 
(Since $$b'$$ is also squarefree, $$D_{b'}(x,m)$$ is also computed in this algorithm by the induction on $$k$$.) 2. Take an $$(s,t) \in \mathbb{Z}^2$$ such that $$sb' + tp_k = 1$$ by Euclidean algorithm. (Such an $$(s,t)$$ actually exists because $$b'$$ is coprime with $$p_k$$) 3. Then $$D_b(x,m) = (sb'D_{p_k}(x,m)+tp_kD_{b'}(x,m)) \mod{b}$$. Be careful that this method does not work when $$b$$ is not squarefree or the length $$d$$ of digits is greater than $$1$$, while such restricted methods are often "extended" to general cases just by dropping conditions without theoretic justification in this community as long as people do not know explicit counterexamples, as we have explained above. For the case where $$b$$ is not necessarily squarefree or the length $$d$$ of digits is not necessarily equal to $$1$$, we need another method to use a shifted system of representatives of $$\mathbb{Z}/b^d \mathbb{Z}$$ other than $$\{0,\ldots,b^d-1\}$$. Suppose that $$b^d$$ is divisible by the least common multiple of $$\phi(p^d)$$ for prime factors $$p$$ of $$b$$. Let $$\mu \in \mathbb{N}$$ denote the maximum of multiplicities of prime factors of $$b$$. For an $$n \in \mathbb{N}$$, we define $$\textrm{Rep}(n) \in \mathbb{N}$$ in the following way: 1. If $$n < \mu d$$, then $$\textrm{Rep}(n) = n$$. 2. If $$n \geq \mu d$$, then $$\textrm{Rep}(n)$$ is the unique $$i \in \{\mu d,\mu d + 1,\ldots,\mu d + b^d - 1\}$$ such that $$(n-i) \mod{b^d} = 0$$. Then even if $$x$$ is not necessarily coprime with $$b$$, we have $$\textrm{Rep}(x^m) = \textrm{Rep}(x^{\textrm{Rep}(m)})$$ unless $$x^{\textrm{Rep}(m)} < \mu d \leq x^m$$ by Euler's theorem and Chinese remainder theorem. Now we compute $$\textrm{Rep}(^yx)$$. If $$x = 1$$, then we have $$\textrm{Rep}(^yx) = 1$$. Suppose $$x > 1$$. Let $$y_0 \in \mathbb{N}$$ denote the least natural number such that $$\mu d \leq {}^{y_0}x$$. Then $$\textrm{Rep}(^yx)$$ can be computed in the following way: 1. 
If $$y < y_0$$, then $$\textrm{Rep}(^yx) = {}^yx$$. (Since the right hand side is smaller than $$\mu d$$ by the definition of $$y_0$$, it is possible to compute the precise value in a reasonable amount of time as long as $$b$$ and $$d$$ are not so large.) 2. If $$y \geq y_0$$, then $$\textrm{Rep}(^yx) = \textrm{Rep}(x^{\textrm{Rep}(^{y-1}x)}+b^d)$$. In particular, the last $$d$$ digits of $$^yx$$ in base $$b$$ can be computed as $$\textrm{Rep}(^yx) \mod{b^d}$$ because $$(^yx - \textrm{Rep}(^yx)) \mod{b^d} = 0$$.

There was another wrong method to compute the last digits of Graham's number, which was believed to be correct without any reason in this community. Due to the method, for any natural number $$d$$, the last $$d$$ digits $$D(d)$$ of Graham's number in base $$10$$ could be computed in the following recursive way: \begin{eqnarray*} D(d) & = & \left\{ \begin{array}{ll} 3 & (d = 0) \\ 3^{D(d-1)} \text{ mod } 10^d & (d > 0) \end{array} \right. \end{eqnarray*} Note that the recursion starts from $$d = 0$$, but the result of $$D(0)$$ is not meaningful because "the last $$0$$ digit" of Graham's number does not make sense. For the obvious reason why this method is wrong, see the main article on Graham's number. It can be quite easily justified by modifying the upper bound of the length $$d$$ of the digits.

Let $$b$$, $$x$$, and $$y$$ be positive integers such that $$b$$ is coprime to $$x$$. We define a natural number $$D_b(x,y)$$ in the following recursive way: \begin{eqnarray*} D_b(x,y) & = & \left\{ \begin{array}{ll} x^x \text{ mod } b & (y = 1) \\ x^{D_b(x,y-1)} \text{ mod } b^y & (y > 1) \end{array} \right. \end{eqnarray*} We have $$D_{10}(3,y) = D(y)$$ for any positive integer $$y$$, and hence this is an extension of the unary $$D$$ function above. It is natural to ask when $$D_b(x,d)$$ coincides with the last $$d$$ digits of $${}^y x$$ for positive integers $$d$$ and $$y$$.
The wrong method was based on the wrong assumption that $$D_{10}(3,d)$$ coincides with the last $$d$$ digits of Graham's number for any positive integer $$d$$. Assume that there exists a pair $$(d_0,y_0)$$ of positive integers such that $$b^{d_0}$$ is divisible by the least common multiple of $$\phi(p^{d_0+1})$$ for all prime factors $$p$$ of $$b$$ and $$D_b(x,d_0)$$ coincides with the last $$d_0$$ digits of $${}^{y_0} x$$. In that case, $$D_b(x,d)$$ coincides with the last $$d$$ digits of $${}^{y_0+d-d_0} x$$ for any positive integer $$d$$ greater than or equal to $$d_0$$ by the argument for the $$N$$ function for base $$b$$ above and the induction on $$d$$, because $$b^d$$ is divisible by the least common multiple of $$\phi(p^d)$$ for all prime factors $$p$$ of $$b$$. Note that the condition on $$d_0$$ can be weakened by Carmichael's theorem. Moreover, under the same assumption of the existence of such a pair $$(d_0,y_0)$$, we have \begin{eqnarray*} & & (D_b(x,d+2) - D_b(x,d+1)) \text{ mod } b^{d+1} = (x^{D_b(x,d+1)} - x^{D_b(x,d)}) \text{ mod } b^{d+1} \\ & = & (x^{D_b(x,d+1) \text{ mod } b^d} - x^{D_b(x,d)}) \text{ mod } b^{d+1} \end{eqnarray*} for any positive integer $$d$$ greater than or equal to $$d_0$$. Therefore in addition if there exists a positive integer $$d_1$$ greater than or equal to $$d_0$$ satisfying $$D_b(x,d_1+1) \text{ mod } b^{d_1} = D_b(x,d_1)$$, then we obtain $$D_b(x,d+1) \text{ mod } b^d = D_b(x,d)$$ for any positive integer $$d$$ greater than or equal to $$d_1$$ by induction on $$d$$. In particular, it implies $$D_b(x,y) \text{ mod } b^d = D_b(x,d)$$ for any positive integers $$y$$ and $$d$$ satisfying $$y_0 \leq y$$ and $$d_1 \leq d \leq y$$. Thus $$D_b(x,d)$$ coincides with the last $$d$$ digits of $${}^y x$$ for any positive integers $$y$$ and $$d$$ satisfying $$y_0 \leq y$$ and $$d_1 \leq d \leq d_0+y-y_0$$.
Be careful that this argument does not imply that this upper bound $$d_0+y-y_0$$ of $$d$$ is the best one even if we choose $$d_0$$ and $$d_1$$ as the least ones. In particular, when $$b = 10$$, the tuple $$(d_0,y_0,d_1) = (2,3,2)$$ satisfies the conditions. Therefore $$D(d) = D_{10}(3,d)$$ coincides with the last $$d$$ digits of $${}^y 3$$ for any positive integer $$d$$ greater than or equal to $$2$$ and any positive integer $$y$$ greater than or equal to $$d+1$$. For example, if we present Graham's number as $${}^{y_1} 3$$ for a unique positive integer $$y_1$$, the last $$d$$ digits of Graham's number coincide with $$D(d)$$ for any positive integer $$d$$ smaller than or equal to $$d_0+y_1-y_0 = y_1-1$$. The last condition was ignored in the wrong belief in this community. Moreover, as we clarified above, this argument does not imply that the upper bound $$y_1-1$$ of $$d$$ is the best one. If there exists another tuple $$(d_0,y_0,d_1)$$ satisfying the conditions and $$y_0 \leq \min \{d_1+1,y_1\}$$, then $$D(d)$$ coincides with the last $$d$$ digits of Graham's number for any positive integer $$d$$ smaller than or equal to $$d_0+y_1-y_0$$, which is greater than the earlier bound $$y_1-1$$. In order to argue on the non-existence of such a tuple $$(d_0,y_0,d_1)$$, we need to compute the discrete logarithm on base $$x = 3$$ or some alternative invariants, although Googology wiki user Allam948736 strongly believes that he has succeeded in proving that $$y_1-1$$ is the best upper bound of $$d$$ by arguments only on $$d$$ and $$y$$.[4][5] Since there is no written proof by him, the non-existence is still an open question.

### First digits

Computing the first digits of $$^yx$$ in a reasonable amount of time for $$y > 6$$ and $$x \geq 2$$ is probably impossible.
In base 10: $a^b = 10^{b \log_{10} a} = 10^{\text{frac}(b \log_{10} a) + \lfloor b \log_{10} a \rfloor} = 10^{\text{frac}(b \log_{10} a)} \times 10^{\lfloor b \log_{10} a \rfloor}$ Here $$\lfloor x \rfloor$$ denotes the integer part of $$x$$, i.e. the greatest integer which is not greater than $$x$$, and $$\text{frac}(x)$$ denotes the fractional part of $$x$$, i.e. $$x - \lfloor x \rfloor \in [0,1)$$. The leading digits of $$^ba$$ are then $$10^{\text{frac}(^{b - 1}a \log_{10} a)}$$, so the problem is finding the fractional part of $$^{b - 1}a \log_{10} a$$. This is equivalent to finding arbitrary base-$$a$$ digits of $$\log_{10} a$$ starting at the $$^{b - 2}a$$th place. The most efficient known way to do this is a BBP algorithm, which, unfortunately, requires linear time to operate and works only with radixes that are powers of 2. We need an algorithm at least as efficient as $$O(\log^*n)$$ (where $$\log^*n$$ is the iterated logarithm), and it is unlikely that one exists. This roadblock ripples through the rest of the hyperoperators. Even if we do find a $$O(\log^*n)$$ algorithm, it becomes unworkable at the pentational level. A constant time algorithm is needed, and finding such an algorithm would take a miracle. Nonetheless, it is still possible to calculate the first digits of $$^yx$$ for small values of $$y$$. Below are some examples: $$^52$$: 200352993040684646497907235156... $$^62$$: 212003872880821198488516469166...[6] $$^43$$: 125801429062749131786039069820...[7] $$^44$$: 236102267145973132068770274977...[8] $$^45$$: 111102880817999744528617827418...[9] $$^49$$: 214198329479680561133336437344...[10] ## Generalization ### For non-integral $$y$$ Mathematicians have not agreed on the function's behavior on $$^yx$$ where $$y$$ is not an integer. In fact, the problem breaks down into a more general issue of the meaning of $$f^t(x)$$ for non-integral $$t$$. For example, if $$f(x) := x$$$$!$$, what is $$f^{2.5}(x)$$? 
Stephen Wolfram was very interested in the problem of continuous tetration because it may reveal the general case of "continuizing" discrete systems. Daniel Geisler describes a method for defining $$f^t(x)$$ for complex $$t$$ where $$f$$ is a holomorphic function over $$\mathbb{C}$$ using Taylor series. This gives a definition of complex tetration that he calls hyperbolic tetration. ### As $$y \rightarrow \infty$$ One function of note is infinite tetration, defined as $^\infty x = \lim_{n\rightarrow\infty}{}^nx$ If we mark the points on the complex plane at which $$^\infty x$$ becomes periodic (as opposed to escaping to infinity), we get an interesting fractal. Daniel Geisler studied this shape extensively, giving names to identifiable features. ### For ordinals There have been many attempts to extend tetration to transfinite ordinals, including by Jonathan Bowers[11] and Sbiis Saibian. However, a common problem occurs at $$\omega\uparrow\uparrow(\omega+1)$$ because of the successor case $$\omega^{\omega\uparrow\uparrow\omega}=\omega^{\varepsilon_0}=\varepsilon_0$$ of the transfinite induction. Bowers's method of passing this problem is also known as the climbing method, but has not been formalised. In other words, Bowers' method is currently ill-defined. 
## Examples

Here are some small examples of tetration in action:

• $$^22 = 2^2 = 4$$
• $$^32 = 2^{2^2} = 2^4 = 16$$
• $$^23 = 3^3 = 27$$
• $$^33 = 3^{3^3} = 3^{27} = 7,625,597,484,987$$
• $$^42 = 2^{2^{2^2}} = 2^{2^4} = 2^{16} = 65,536$$
• $$^35 = 5^{5^5} \approx 1.9110125979 \cdot 10^{2,184}$$
• $$^52 = 2^{2^{2^{2^2}}} \approx 2.00352993041 \cdot 10^{19,728}$$
• $$^310 = 10^{10^{10}} = 10^{10,000,000,000}$$
• $$^43 = 3^{3^{3^3}} \approx 10^{10^{10^{1.11}}}$$

When given a negative or non-integer base, irrational and complex numbers can occur:

• $$^2{-2} = (-2)^{(-2)} = \frac{1}{(-2)^2} = \frac{1}{4}$$
• $$^3{-2} = (-2)^{(-2)^{(-2)}} = (-2)^{1/4} = \frac{1 + i}{\sqrt[4]{2}}$$
• $$^2(1/2) = (1/2)^{(1/2)} = \sqrt{1/2} = \frac{\sqrt2}{2}$$
• $$^3(1/2) = (1/2)^{(1/2)^{(1/2)}} = (1/2)^{\sqrt{2}/2}$$

Functions whose growth rates are on the level of tetration include:

## Super root

Let $$k$$ be a positive integer. Since $$^ka$$ is well-defined for any non-negative real number $$a$$ and is a strictly increasing unbounded function, we can define a root inverse function $$sr_k \colon [0,\infty) \to [0,\infty)$$ as:

$$sr_k(n) = x \text{ such that } ^kx = n$$

### Numerical evaluation

The second-order super root can be calculated as:

$$\frac{\ln(x)}{W(\ln(x))}$$

where $$W(n)$$ is the Lambert W function. Formulas for higher-order super roots are unknown.[13]

## Pseudocode

Below is an example of pseudocode for tetration.

    function tetration(a, b):
        result := 1
        repeat b times:
            result := a to the power of result
        return result
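The pseudocode translates directly into Python. As a sketch, the version below also spot-checks the leading digits of $$^52$$ from the table above, the $$b=10,\ d=1$$ counterexample to the naive last-digit formula, and the $$D(d)$$ recursion for towers of 3 within the validity range discussed earlier:

```python
def tetration(a, b):
    # ^b a as a right-associated power tower; ^0 a = 1
    result = 1
    for _ in range(b):
        result = a ** result
    return result

assert tetration(2, 4) == 65536                      # ^4 2 = 2^16
assert tetration(3, 2) == 27                         # ^2 3 = 3^3

# Leading digits of ^5 2 = 2^65536 agree with the table above
assert str(tetration(2, 5)).startswith("2003529930")

# The naive last-digit formula fails for 2↑↑4 in base 10: it gives 4, not 6
naive = pow(2, tetration(2, 3) % 10, 10)             # 2^(16 mod 10) mod 10
assert naive == 4 and tetration(2, 4) % 10 == 6

def D(d):
    # last-d-digits recursion for towers of 3, valid (per the discussion
    # above) only for 2 <= d <= height - 1
    return 3 if d == 0 else pow(3, D(d - 1), 10 ** d)

assert D(3) == 387 == pow(3, 3 ** 27, 1000)          # 3↑↑4 ends in ...387
```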
http://mathhelpforum.com/differential-geometry/188005-theorem-about-covers.html
# Math Help - Theorem about covers

Theorem. Let $D \subset Y$. Suppose that D is non-empty and sequentially compact. Let C be an open cover of D. Then there is a real number $\epsilon > 0$ such that if $E \subset D$ with $d(E)<\epsilon$ then $E \subset A_{\alpha}$ for some $A_{\alpha}\in C$. Note that d(E) is the diameter of E.

This seems so intuitively obvious to me... or am I not reading it correctly? Can't I just choose $\epsilon$ small enough that E is approximately a point?

2. ## Re: Theorem about covers

Originally Posted by paupsers
Theorem. Let $D \subset Y$. Suppose that D is non-empty and sequentially compact. Let C be an open cover of D. Then there is a real number $\epsilon > 0$ such that if $E \subset D$ with $d(E)<\epsilon$ then $E \subset A_{\alpha}$ for some $A_{\alpha}\in C$. Note that d(E) is the diameter of E. This seems so intuitively obvious to me... or am I not reading it correctly? Can't I just choose $\epsilon$ small enough that E is approximately a point?

This is far from a trivial result. It is known as proving that a sequentially compact metric space has a Lebesgue number. Suppose not. For each $n\in\mathbb{Z}^+$ there must be a set $B_n$ such that $0 < d(B_n) < \frac{1}{n}$ but $B_n$ is a subset of no open set in the cover. See what you can do with that setup.
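For completeness, here is one standard way the hinted setup finishes (a common textbook argument, not the poster's own continuation):

```latex
\text{Pick } x_n \in B_n. \text{ By sequential compactness, } x_{n_k} \to x \in D,
\text{ and } x \in A_\alpha \text{ for some } A_\alpha \in C. \\
\text{Choose } r > 0 \text{ with } B(x, r) \subset A_\alpha.
\text{ For } k \text{ large, } d(x_{n_k}, x) < r/2 \text{ and } d(B_{n_k}) < \tfrac{1}{n_k} < r/2, \\
\text{so } B_{n_k} \subset B(x, r) \subset A_\alpha,
\text{ contradicting the choice of } B_{n_k}.
```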
https://stats.stackexchange.com/questions/232824/bayesian-updating-with-conjugate-priors-using-the-closed-form-expressions
# Bayesian updating with conjugate priors using the closed form expressions

I have two data sets of scalar values: one large data set (about 700 data points) and one small data set (80 data points). I would like to update the large data set with the small one using Bayes' theorem, and so create another large data set (posterior). The large data set serves as the prior, and it is assumed to be normally distributed, as is the posterior. This was motivated by the existence of the closed-form expressions for the posterior distribution parameters https://en.wikipedia.org/wiki/Conjugate_prior (the first row in the table for Continuous distributions) for the conjugate prior. However, if I substitute into the closed-form expressions for posterior mean and variance, using the mean and variance values of the prior (inferred from the large data set) and local data (inferred from the small data), the resulting posterior distribution does not make sense. Do I misunderstand that I can simply substitute the known values into these closed-form expressions in order to get the posterior distribution? • Could you give an example of what you are actually doing..? What do you substitute? – Tim Sep 1 '16 at 10:11 • From the large set I observed mean=20.1, std=7.9 and from the small set: mean=20.6, std=9.5. I've simply taken these observed values, considering the large set as my prior, and substituted them into the formulas for the posterior's mean and std. – Vasek Sep 1 '16 at 10:20 • What exactly and how do you substitute? And why "doesn't it make sense"? – Tim Sep 1 '16 at 10:29 • Using the same notation as in en.wikipedia.org/wiki/Conjugate_prior, I substitute mu0=20.1, sigma0=7.9 and mu=20.6, sigma=9.5, n=70 (n is the size of the small sample), sum(xi)= 1448.7 into the formulations for the posterior parameters (the first row in the table Continuous distributions) and receive the posterior parameters mu=20.7 and sigma=1.2.
Now, mu seems to follow some expected value, however, sigma is far off comparing to the values from the data sets. – Vasek Sep 1 '16 at 11:19 First of all, the formulas are defined in terms of variance, not standard deviations. Second, the variance of the posterior is not a variance of your data but variance of estimated parameter $\mu$. As you can see from the description ("Normal with known variance $\sigma^2$"), this is formula for estimating $\mu$ when $\sigma^2$ is known. The prior parameters $\mu_0$ and $\sigma_0^2$ are parameters of distribution of $\mu$, hence the assumed model is \begin{align} X_i &\sim \mathrm{Normal}(\mu, \sigma^2) \\ \mu &\sim \mathrm{Normal}(\mu_0, \sigma_0^2) \end{align} When both $\mu$ and $\sigma^2$ are unknown and are to be estimated, then you need slightly more complicated model (in Wikipedia table under "$\mu$ and $\sigma^2$ Assuming exchangeability"): \begin{align} X_i &\sim \mathrm{Normal}(\mu, \sigma^2) \\ \mu &\sim \mathrm{Normal}(\mu_0, \tfrac{\sigma^2}{n+\nu}) \\ \sigma^2 &\sim \mathrm{IG}(\alpha, \beta) \end{align} where first we need to update parameters of inverse gamma distribution to obtain $\sigma^2$: \begin{align} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta + \frac{1}{2}\sum_{i=1}^n (x_i -\bar x)^2 + \frac{n\nu(\bar x -\mu_0)^2}{2(n+\nu)} \end{align} and then we can proceed to calculate $\mu$ and MAP point estimate for $\sigma^2$: \begin{align} \mu &= \frac{ \mu_0\nu + \bar x n }{\nu + n} \\ \operatorname{Mode}(\sigma^2) &= \frac{ \beta' }{ \alpha' + 1 } \end{align} For learning more, refer to "Conjugate Bayesian analysis of the Gaussian distribution" paper by Kevin Murphy, or "The Conjugate Prior for the Normal Distribution" notes by Michael Jordan (notice that there are slight differences between those two sources and that some formulas are given for precision $\tau$ rather then variance) and M. DeGroot Optimal Statistical Decisions, McGraw-Hill, 1970 (pp. 169-171). • Thank you. This is useful!! 
My follow-up question is, how should one decide on σ2 value? In my application where I'm "updating" the large data set with the properties of the small data set, the posterior (resulting) variance is not known. – Vasek Sep 1 '16 at 14:59 • @Vasek If it is not known then you should not use this approach... Basically, this is just a simple "textbook" example of usage of conjugate priors and most real-life application would require more complicated models (e.g. estimated using MAP or MCMC). – Tim Sep 1 '16 at 15:02 • Thanks for explaining. I'll then have to find some "textbook" (meaning something practical) explanation on how to proceed such more complicated approach for the application in interest. It's sort surprising conclusion; I'm basically replicating the methodology of some authors who (in multiple papers) used this conjugate priors closed-form expressions, explicitly mentioning that this way they might overcome the need of employing MCMC samplers. – Vasek Sep 1 '16 at 15:17 • Well, here we want to "simulate" an outcome (posterior). Apriori, we do not know any of its parameters such that the closed form expressions could be operationalized (as you explained previously). Unless one assumes some asymptotical cases when one of the posterior parameters is exactly equal to the related parameter in the observed data sets (and the remaining parameter could be then computed). So it's not really obvious how they come up with the results without using some samplers. But this is perhaps a question to the authors of methodology that I mentioned earlier. – Vasek Sep 1 '16 at 16:18 • – Vasek Sep 2 '16 at 6:59
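The exchangeability-case update quoted in the answer can be sketched in a few lines (the function name, hyperparameters, and data below are invented toy values for illustration; a real analysis needs priors chosen for the problem at hand):

```python
def nig_update(data, mu0, nu, alpha, beta):
    # Conjugate Normal-Inverse-Gamma update for a Normal with unknown
    # mean and variance, following the formulas in the answer above.
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)        # sum of squared deviations
    alpha_post = alpha + n / 2
    beta_post = beta + ss / 2 + n * nu * (xbar - mu0) ** 2 / (2 * (n + nu))
    mu_post = (mu0 * nu + xbar * n) / (nu + n)     # posterior estimate of mu
    var_map = beta_post / (alpha_post + 1)         # MAP estimate of sigma^2
    return mu_post, var_map

# toy check with hand-computable numbers:
# xbar = 2, ss = 2, mu = (0*1 + 2*3)/4 = 1.5,
# beta' = 1 + 1 + 12/8 = 3.5, alpha' = 2.5, mode = 3.5/3.5 = 1.0
mu, v = nig_update([1.0, 2.0, 3.0], mu0=0.0, nu=1.0, alpha=1.0, beta=1.0)
```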
https://deepai.org/publication/unbiased-and-consistent-nested-sampling-via-sequential-monte-carlo
Unbiased and Consistent Nested Sampling via Sequential Monte Carlo

We introduce a new class of sequential Monte Carlo methods called Nested Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested Sampling method of Skilling (2006) in terms of sequential Monte Carlo techniques. This new framework allows one to obtain provably consistent estimates of marginal likelihood and posterior inferences when Markov chain Monte Carlo (MCMC) is used to produce new samples. An additional benefit is that marginal likelihood estimates are also unbiased. In contrast to NS, the analysis of NS-SMC does not require the (unrealistic) assumption that the simulated samples be independent. As the original NS algorithm is a special case of NS-SMC, this provides insights as to why NS seems to produce accurate estimates despite a typical violation of its assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels in an automated manner via a preliminary pilot run, and present a new method for appropriately choosing the number of MCMC repeats at each iteration. Finally, a numerical study is conducted where the performance of NS-SMC and temperature-annealed SMC is compared on several challenging and realistic problems. MATLAB code for our experiments is made available at https://github.com/LeahPrice/SMC-NS.
2 Nested Sampling

Nested Sampling (NS) (Skilling, 2006) is based on the identity
$$Z=\int_E \eta(x)L(x)\,dx=\mathbb{E}_\eta[L(X)]=\int_0^\infty P(L(X)>l)\,dl, \tag{4}$$
where the likelihood $L$ is a function mapping from some space $E$ to $\mathbb{R}^+$, and $\eta$ is the prior density. Note that $P(L(X)>l)$ is simply the tail cdf (survival function) of the random variable $L(X)$. We denote this survival function by $\bar F_{L(X)}$. A simple inversion argument yields
$$\int_0^\infty \bar F_{L(X)}(l)\,dl=\int_0^1 \bar F^{-1}_{L(X)}(p)\,dp, \tag{5}$$
where $\bar F^{-1}_{L(X)}$ is the quantile function of the likelihood under $\eta$. This simple one-dimensional representation suggests that if one had access to the function $\bar F^{-1}_{L(X)}$, the integral could then be approximated by numerical methods. For example, for a discrete set of values $1=p_0>p_1>\dots>p_T>0$, one could compute the Riemann sum
$$\sum_{t=1}^{T}(p_{t-1}-p_t)\,\bar F^{-1}_{L(X)}(p_t), \tag{6}$$
as a (deterministic) approximation of $Z$. Unfortunately, the quantile function of interest is often intractable. NS provides an approximate way of performing quadrature such as (6). The core insight underlying NS is as follows. For $N$ independent samples $X^1,\dots,X^N$ from a density of the form
$$\eta(x;l):=\frac{\eta(x)\,\mathbb{I}\{L(x)>l\}}{P_\eta(L(X)>l)},\quad x\in E,\ l\in\mathbb{R}^+, \tag{7}$$
we have that
$$\frac{\bar F_{L(X)}\big(\min_k L(X^k)\big)}{\bar F_{L(X)}(l)}\sim \mathrm{Beta}(N,1). \tag{8}$$
Put simply, consider that one has $N$ independent samples distributed according to the prior subject to a given likelihood constraint, and then introduces a new constraint determined by choosing the minimum likelihood value of the samples. This will define a region that has less (unconstrained) prior probability by a factor that has a $\mathrm{Beta}(N,1)$ distribution. As samples from this new distribution will be compressed into a smaller region of the original prior, (8) is often referred to as a compression factor. With this in mind, Skilling (2006) proposes the NS procedure that proceeds as follows. Initially, a population of $N$ independent samples (henceforth called particles) is drawn from $\eta$. Then, for each iteration $t$, the particle with the smallest value of $L$ is identified. This "worst performing" particle at iteration $t$ is denoted by $\breve X_t$ and its likelihood value by $L_t$. Finally, this particle is moved to a new position determined by drawing a sample according to $\eta(\,\cdot\,;L_t)$. By construction, this procedure results in a population of samples from $\eta$ that is constrained to lie above higher values of $L$ at each iteration. After $T$ iterations, we then have $L_1<L_2<\dots<L_T$. Each $L_t$ corresponds to an unknown $p_t$ satisfying $\bar F^{-1}_{L(X)}(p_t)=L_t$. Skilling proposes to (deterministically) approximate the $p_t$ values by assuming that at each iteration the compression factor (8) is equal to its geometric mean, i.e., $\exp(-1/N)$. Thus, we have the approximation $p_t\approx\exp(-t/N)$. This is the most popular implementation, and thus will be the version we consider for the remainder of this paper; however, it is worth noting that there exists another variant which randomly assigns $p_t=u_t\,p_{t-1}$ at each iteration, where $u_t\sim\mathrm{Beta}(N,1)$. With the pairs $\{(L_t,p_t)\}_{t=1}^T$ in hand, the numerical integration is then of the form
$$\hat Z=\sum_{t=1}^{T}(p_{t-1}-p_t)\,L_t. \tag{9}$$
In practice, the number of iterations $T$ is not set in advance, but rather the iterative sampling procedure is repeated until some termination criterion is satisfied. The standard approach is to continue until the estimated remaining contribution to the evidence is negligible relative to the running estimate $\hat Z_t$, i.e., until $p_t\max_k L(X^k_t)<\epsilon\,\hat Z_t$, where $\epsilon$ is some small value.
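To make the procedure just described concrete, here is a minimal NS sketch (ours, not the authors' MATLAB code) on a toy problem where the constrained prior can be sampled exactly, so the independence assumption holds: prior $\eta=\mathrm{Uniform}(0,1)$ and likelihood $L(x)=e^{-x}$, for which $Z=\int_0^1 e^{-x}\,dx=1-e^{-1}\approx 0.632$.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(x):
    return np.exp(-x)

# N particles, T iterations (so the final volume is p_T = e^{-10}).
N, T = 200, 2000
x = rng.uniform(0.0, 1.0, size=N)      # initial population from the prior
Z_hat, p_prev = 0.0, 1.0
for t in range(1, T + 1):
    worst = np.argmin(likelihood(x))   # "worst performing" particle
    L_t = likelihood(x[worst])
    p_t = np.exp(-t / N)               # geometric-mean compression p_t = e^{-t/N}
    Z_hat += (p_prev - p_t) * L_t      # Riemann sum contribution, eq. (9)
    # Exact draw from the constrained prior {x : L(x) > L_t} = (0, x[worst]),
    # possible here because L is monotone; in general this needs MCMC.
    x[worst] = rng.uniform(0.0, x[worst])
    p_prev = p_t

Z_hat += p_t * likelihood(x).mean()    # Skilling's "filling in" term
# Z_hat should now be close to 1 - exp(-1).
```

Because the constrained sampling is exact, the only errors here are the compression-factor approximation and Monte Carlo noise; with MCMC sampling, the independence concerns discussed in Section 3 would apply on top of this.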
This choice attempts to ensure that the remaining integral is sufficiently small, so that the error arising from omission of the final terms in the quadrature is negligible. In addition to estimates of the model evidence $Z$, estimates of posterior expectations $\pi(\varphi)$, as in (1), can be obtained by assigning to each $\breve X_t$ the weight $w_t=(p_{t-1}-p_t)L_t$, and using
$$\sum_{t=1}^{T}\varphi(\breve X_t)\,w_t\Big/\sum_{t=1}^{T}w_t, \tag{10}$$
as an estimator. A formal justification for this is given in Chopin and Robert (2010, Section 2.2), though in essence it is based on the fact that the numerator and denominator of (10) are (NS) estimators of their corresponding terms in the identity
$$\pi(\varphi)=\frac{\int_E \varphi(x)L(x)\eta(x)\,dx}{\int_E L(x)\eta(x)\,dx}. \tag{11}$$
Pseudocode for NS is provided in Algorithm 1. While the estimator (10) bears some resemblance to importance sampling (which is introduced in Section 4) in its use of a ratio estimator and weighted samples, it is not precisely the same.

3 Why Isn't Nested Sampling More Popular With Statisticians?

There are several potential issues with NS that we speculate are the reasons why it has not achieved mainstream adoption in the statistics community. We outline below what we believe to be the four main objections to NS (in decreasing order of severity), as well as a discussion of relevant work that has attempted to address them.

1. Assumption of Independent Samples. The property (8) requires at each iteration $N$ independent samples with the correct distribution. This is a strong condition, as generating samples from constrained densities of the form (7) is in general difficult. The sampling method originally proposed is to move the worst performing particle at each iteration to the position of one of the other particles, and then run an MCMC algorithm for sufficiently many iterations to create an (approximately) independent sample. This procedure itself does not ensure the assumption of independence is satisfied, as it only produces independent samples asymptotically in the number of MCMC iterations.
Moreover, for problems with likelihoods that have multiple well-separated modes, the constrained density will have increasingly isolated islands of support as the algorithm progresses, making it difficult for most samplers to cross between modes in any reasonable amount of time. Thus, even approximate independence may be difficult to achieve (and verify) in practice. Indeed, now over a decade after the introduction of the NS method, establishing consistency when MCMC transitions are used for sampling within NS remains a challenging open problem. Chopin and Robert (2010) remark that "a reason why such a theoretical result seems difficult to establish is that each iteration involves both a different MCMC kernel and a different invariant distribution". In order to overcome the need for MCMC sampling, they propose a variant of NS for which the sampling can be performed exactly, and demonstrate that it can perform well in low-dimensional problems for which the posterior is approximately Gaussian. In a separate attempt to overcome dependency between samples, there is a class of approximate sampling methods called region sampling, which attempts to generate independent samples by reparameterizing the problem so that the constrained sampling problem is one of sampling uniformly within constrained regions of a unit hypercube. The most popular of these methods is MultiNest (Feroz and Hobson (2008)), which uses the population of particles to construct a region that is a union of hyperellipsoids, samples from this region, and accepts samples which satisfy the constraint. There is, however, no way to ensure the proposal region is a superset of the actual region. Buchner (2016) proposes a method that is more robust (but still not immune) to this problem; however, the results show it can be an order of magnitude less efficient and is more susceptible to the curse of dimensionality.

2. Effect of Quadrature on Posterior Inferences.
As shown in (10), the ratio of two NS estimators (from a single run) can be used for posterior inferences. However, the precise effect of the use of quadrature in both estimators on estimates of $\pi(\varphi)$ is not well understood. The algorithm replaces the integral over a (random) shell with a single value, and assigns a volume to that shell according to a geometric expectation. To our knowledge, the only work toward better understanding this unique form of error is that of Higson et al (2018), which quantifies it through bootstrapping techniques.

3. Parallelization. While NS can be parallelized across runs, NS does not allow one to make use of parallel computing architectures within runs without modifying the algorithm. The most natural way to parallelize NS, first proposed by Burkoff et al (2012), is as follows. If we generalize (8) to consider the $k$-th order statistic instead of simply the minimum, then the compression factor has a $\mathrm{Beta}(N-k+1,\,k)$ distribution, with expectation $\frac{N-k+1}{N+1}$. Thus, at each iteration, we can instead remove the $k$ points with the lowest likelihood, set $p_t\approx p_{t-1}\exp(-k/N)$, and parallelize the sampling across $k$ threads. This approach will not only increase the bias of the algorithm by introducing additional quadrature error, but will also compound the problem mentioned in the previous issue (as a single value will now be used to represent the mean of a larger shell).

4. Truncation Error. Finally, of lesser concern, yet still worth noting, is that NS commits a truncation error (Evans, 2007) in the evidence estimate as a result of not performing quadrature on the entire interval. A heuristic originally proposed by Skilling, which we call the filling in procedure, is to simply add $p_T\,\frac{1}{N}\sum_{k=1}^N L(X^k_T)$ after termination to the final evidence estimate. However, this is somewhat out of place with the rest of the quadrature. Using point process theory and techniques from the literature on unbiased estimation, Walter (2017) proposes an unbiased version of NS.
However, this unbiasedness relies on the assumption of independent sampling, and comes at the cost of additional variance. As mentioned earlier, all of these potential issues stem from the use of quadrature in NS. Indeed, the combined Monte Carlo/quadrature approach of NS gives a somewhat awkward character to the method overall. In the next section, we introduce SMC methodology, which we will soon discover allows us to retain the essence of NS while allaying the objections just discussed.

4 Sequential Monte Carlo

We begin with an introduction to importance sampling, which is the fundamental idea behind SMC. Recall that, in our setting, $\pi(x)=\gamma(x)/Z$, where $\gamma$ is a known (unnormalized) function. For any probability density $\nu$ whose support contains that of $\gamma$, it holds that
$$\pi(\varphi)=\mathbb{E}_\pi[\varphi(X)]=\frac{\int_E \varphi(x)w(x)\nu(x)\,dx}{\int_E w(x)\nu(x)\,dx}=\frac{\mathbb{E}_\nu[\varphi(X)w(X)]}{\mathbb{E}_\nu[w(X)]}, \tag{12}$$
where $w:=\gamma/\nu$ is called the weight function. This suggests that one can draw $X^1,\dots,X^N\overset{\text{iid}}{\sim}\nu$ and estimate (12) via the ratio
$$\sum_{k=1}^{N}\varphi(X^k)w(X^k)\Big/\sum_{k=1}^{N}w(X^k)=\sum_{k=1}^{N}\varphi(X^k)\,W^k,$$
where we call the $W^k:=w(X^k)\big/\sum_{j=1}^N w(X^j)$ the normalized weights. A common measure of the quality of $\nu$ with regard to approximating $\pi$ is the effective sample size (ESS),
$$\mathrm{ESS}:=N\,\mathbb{E}_\nu[w(X)]^2\big/\mathbb{E}_\nu[w(X)^2].$$
In practice, this can be estimated via
$$\widehat{\mathrm{ESS}}=\Big(\sum_{k=1}^{N}w(X^k)\Big)^2\Big/\sum_{k=1}^{N}w(X^k)^2; \tag{13}$$
see Liu (2001, Chapter 2.5) for a full discussion. Unfortunately, in difficult high-dimensional settings, it is often hard to choose an importance sampling density that ensures the ESS will be high (equivalently, that the variance of the normalized weights will be low). SMC samplers (Del Moral et al, 2006) extend the idea of importance sampling to a general method for sampling from a sequence of probability densities $\pi_1,\dots,\pi_T$ defined on a common space $E$, as well as estimating their associated normalizing constants in a sequential manner.
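The self-normalized estimator and the ESS estimate (13) can be sketched on a toy example (the example and variable names are ours): target $\pi=\mathcal{N}(0,1)$ known only up to a constant, $\gamma(x)=e^{-x^2/2}$, with proposal $\nu=\mathcal{N}(0,2^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
x = rng.normal(0.0, 2.0, size=N)               # X^k ~ nu
# log w = log gamma - log nu, up to additive constants (which cancel).
log_w = -0.5 * x**2 - (-0.5 * (x / 2.0) ** 2)
w = np.exp(log_w - log_w.max())                # stabilize before normalizing
W = w / w.sum()                                # normalized weights W^k
est_second_moment = np.sum(W * x**2)           # estimates E_pi[X^2] = 1
ess_hat = w.sum() ** 2 / np.sum(w**2)          # estimated ESS, eq. (13)
```

For this proposal, the (population) relative ESS works out to $\sqrt{7}/4\approx 0.66$, so the estimated `ess_hat` should be roughly two thirds of $N$; a poorer proposal would drive it toward 1.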
This is accomplished by obtaining at each time step a collection of $N$ random samples (called particles) with associated (normalized) weights $\{(X^k_t,W^k_t)\}_{k=1}^N$, for $t=1,\dots,T$, such that the weighted empirical measures of the cloud of particles,
$$\pi^N_t(dx)=\sum_{k=1}^{N}W^k_t\,\delta_{X^k_t}(dx),\quad t=1,\dots,T, \tag{14}$$
converge to their corresponding target measures as $N\to\infty$. SMC samplers have three main elements:

1. Mutation. For each iteration $t$, the population of particles is moved from $X^k_{t-1}$ to $X^k_t$ according to a (forward in time) Markov kernel $K_t$, whose associated density we denote $K_t(x'\,|\,x)$. This implicitly defines a new importance sampling density at each iteration via the recursive formula
$$\nu_t(x')=\int_E \nu_{t-1}(x)K_t(x'\,|\,x)\,dx. \tag{15}$$
2. Correction. The weights of the particles are updated via an incremental importance weight function $\tilde w_t$, to ensure the particle system is correctly reweighted with respect to the next target density. This update involves multiplying the current weight of each particle by a corresponding incremental weight.
3. Selection. The particles are resampled according to their weights, which are then reset to $1/N$. A variety of resampling schemes can be used (see, for example, Doucet and Johansen (2011, Section 3.4)). However, the simplest is multinomial resampling. Here, the resampled population contains $N^k$ copies of $X^k_t$ for each $k$, where $(N^1,\dots,N^N)\sim\mathrm{Multinomial}(N;\,W^1_t,\dots,W^N_t)$.

Del Moral et al (2006) show that one can use an arbitrary mutation kernel at each stage, without being required to compute the corresponding importance sampling density at each iteration. This is achieved by introducing an artificial backward (in time) kernel, which transforms the problem into one in the well-understood setting of filtering (for a comprehensive survey, see Doucet and Johansen (2011)). Here, sample paths of the particles' positions take values on the product space $E^T$, with the artificial joint distribution admitting each $\pi_t$ (where $\gamma_t$ is the corresponding unnormalized density) as a marginal. SMC samplers can be formulated in many different ways.
For our purpose, we require SMC samplers in which $K_t$, for $t=2,\dots,T$, is a $\pi_t$-invariant MCMC kernel (or several iterations thereof). This approach is the most straightforward and relates directly to NS. For this case, a suitable choice of the incremental weight function at time $t$ (i.e., one that will ensure the convergence (14)) is
$$\tilde w_t(x_{t-1})=\frac{\gamma_t(x_{t-1})}{\gamma_{t-1}(x_{t-1})}. \tag{16}$$
In this setting, the implicit backward kernel will be a good approximation to the optimal backward kernel, provided that $\pi_{t-1}$ and $\pi_t$ are sufficiently close. SMC samplers give an approximation of $\pi_t(\varphi)$ at each iteration via
$$\pi^N_t(\varphi):=\sum_{k=1}^{N}W^k_t\,\varphi(X^k_t). \tag{17}$$
Further to this, at each iteration SMC samplers give estimates of the ratios of normalizing constants,
$$\widehat{\frac{Z_t}{Z_{t-1}}}=\sum_{k=1}^{N}W^k_{t-1}\tilde w_t(X^k_{t-1})=\pi^N_{t-1}(\tilde w_t),\quad\text{and}\quad \widehat{\frac{Z_t}{Z_1}}=\prod_{k=2}^{t}\widehat{\frac{Z_k}{Z_{k-1}}}.$$
Somewhat remarkably, these estimators are unbiased. It follows readily that one can also obtain unbiased estimates of $Z_t$ if $Z_1$ is known, by including the appropriate constant in the incremental weights.

Remark 1 (Adaptivity) Introducing any sort of adaptivity into the SMC algorithm (for example, resampling only if some criterion is met, choosing the next distribution online, or setting the parameters of $K_t$ according to the particle population) will not necessarily preserve the unbiasedness or convergence properties of the SMC estimators. The analysis of adaptive SMC methods is technically involved. However, there are consistency results for certain adaptive schemes; see, for example, Del Moral et al (2012b), Beskos et al (2016), and Cérou and Guyader (2016). Of course, one can always first run the algorithm adaptively, save the values of any adaptively chosen parameters, and then rerun the algorithm a second time with these fixed.

SMC Samplers for Static Models

Del Moral et al (2006) provide a strategy for using an SMC sampler to sample from a fixed density $\pi$ by defining a sequence of densities that transition from something that is easy to sample from (for example, the prior density) to $\pi$.
This can be accomplished in a number of ways. We outline the two most common in the SMC literature. One approach (Chopin (2002)) is to define the sequence of target distributions such that each stage gradually introduces the effect of the likelihood by considering more data than the last. The second method, first explored by Neal (2001), is called temperature annealing. Here, we have the sequence of densities
$$\pi_t(x)\propto\nu(x)^{1-l_t}\,\pi(x)^{l_t},\quad t=1,\dots,T, \tag{18}$$
parametrized by some temperature schedule $l_1=0<l_2<\dots<l_T=1$, where $\nu$ is some initial importance sampling density. In the Bayesian setting, a natural choice is the gradual change from the prior to the posterior:
$$\pi_t(x)\propto\eta(x)L(x)^{l_t},\quad t=1,\dots,T.$$
In practice, it is often difficult to make a good choice of temperature schedule. An approximate remedy, proposed by Jasra et al (2011), is to choose the next temperature adaptively online according to the criterion of effective sample size (ESS). This ensures successive distributions are sufficiently close. For some $0<\alpha<1$, one can approximately maintain an ESS of $\alpha N$ between successive distributions by choosing the next temperature online. In other words, for a collection of particles, we choose $l_{t+1}$ (and thus the next density) so that the estimated ESS for the current importance sampling step is equal to the desired amount. Formally stated, we solve
$$l_{t+1}=\inf\big\{l>l_t:\widehat{\mathrm{ESS}}(l)=\alpha N\big\},$$
via the bisection method, for example. Pseudocode for adaptive temperature-annealed SMC (TA-SMC) is given in Algorithm 2.

5 Nested Sampling via Sequential Monte Carlo

The similarity between SMC and NS at this point is evident. Both methods draw from some initial distribution (in our case, the prior distribution), and involve traversing a population of particles through a sequence of distributions, which is of an adaptive nature in NS, but may be either adaptive or fixed in SMC. From the outset, it would seem that nested sampling is some sort of SMC algorithm, yet it is distinct in its use of a quadrature rule.
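Returning briefly to the adaptive temperature selection described above (just before Algorithm 2): the bisection step can be sketched as follows. This is our own toy implementation; it assumes the annealing sequence $\pi_t\propto\eta L^{l_t}$, so the incremental weight of particle $k$ at candidate temperature $l$ is $L(x_k)^{l-l_t}$.

```python
import numpy as np

def next_temperature(loglike, l_t, alpha, tol=1e-10):
    """Choose l_{t+1} so the estimated ESS (eq. (13)) equals alpha * N."""
    N = len(loglike)

    def ess(l):
        # Incremental log-weights (l - l_t) * log L(x_k), stabilized.
        logw = (l - l_t) * loglike
        w = np.exp(logw - logw.max())
        return w.sum() ** 2 / np.sum(w ** 2)

    if ess(1.0) >= alpha * N:      # can jump straight to the posterior
        return 1.0
    lo, hi = l_t, 1.0              # ESS decreases as l grows past l_t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ess(mid) >= alpha * N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy per-particle log-likelihoods (ours, for illustration only):
loglike = np.linspace(-5.0, 0.0, 100)
l_next = next_temperature(loglike, l_t=0.0, alpha=0.5)
```

Note that, per Remark 1, running with such adaptively chosen temperatures and then rerunning with the schedule fixed is one way to recover the exact theoretical guarantees.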
Further, it has an interesting point of difference in that NS does not transition from the prior to the posterior. It turns out, somewhat surprisingly, that this difference is largely a matter of interpretation. Nested Sampling is a special type of adaptive SMC algorithm, in which weights are assigned in a suboptimal way. In order to demonstrate this in a straightforward manner, we proceed as follows. We first present a general class of SMC methods called Nested Sampling via Sequential Monte Carlo (NS-SMC) methods. We then show the correspondence with the original NS method by introducing an adaptive version of NS-SMC, and finally modify this adaptive version further so that it more closely resembles (and is equivalent, as $N\to\infty$, to) NS. We begin by considering a set threshold schedule,
$$l_1=-\infty<l_2<\dots<l_T,$$
which in turn parametrizes a sequence of nested sets
$$E_1=E\supset E_2\supset\dots\supset E_{T-1}\supset E_T,$$
via
$$E_t:=\{x\in E:L(x)\ge l_t\},\quad t=1,\dots,T.$$
Next, define the sequence of constrained densities:
$$\eta_t(x)=\frac{\eta(x)\,\mathbb{I}\{x\in E_t\}}{\underbrace{P_\eta(X\in E_t)}_{P_t}},\quad t=1,\dots,T. \tag{21}$$
We now consider directly shells of $E$, via the sets
$$\breve E_t:=\{x\in E:l_t\le L(x)<l_{t+1}\},\quad t=1,\dots,T,$$
with the convention $l_{T+1}=\infty$. Observe that the sets $\breve E_1,\dots,\breve E_T$ form a partition of $E$, and that because $l_{T+1}=\infty$, we have $\breve E_T=E_T$. Next, we define a second set of densities, corresponding to versions of $\pi$ constrained to these shells:
$$\pi_t(x)=\frac{\gamma(x)\,\mathbb{I}\{x\in\breve E_t\}}{\underbrace{\int_E \gamma(x)\,\mathbb{I}\{x\in\breve E_t\}\,dx}_{Z_t}},\quad t=1,\dots,T.$$
With the above in mind, we define a class of SMC samplers, called NS-SMC samplers, that have the following two properties:

1. Given samples targeting $\eta_t$, the importance sampling branches into two paths. One path targets the next constrained prior $\eta_{t+1}$, while the second targets (and terminates at) the constrained posterior $\pi_t$. This branching of importance sampling paths occurs for all $t$ except $t=T$, for which $\eta_T$ proceeds only to $\pi_T$. This is illustrated in Figure 1. The importance sampling procedure just described results in (dependent) SMC samplers which output estimators $\hat Z_t$ of $Z_t$, as well as samples that can be used to estimate $\pi_t(\varphi)$ for each $t$.

2.
The resulting estimators of $Z_t$ and of $\pi_t(\varphi)$ for $t=1,\dots,T$ are used together to estimate (1) via the identity
$$\pi(\varphi)=\sum_{t=1}^{T}P_\pi(X\in\breve E_t)\,\mathbb{E}_{\pi_t}[\varphi(X)]=\sum_{t=1}^{T}\frac{Z_t}{Z}\,\pi_t(\varphi). \tag{22}$$
For simplicity (and similarity to the original NS method), we consider the case where each $\eta_t$ is used directly as an importance sampling density for $\pi_t$, without any further resampling or moving. In such a case, we need only consider an SMC sampler that sequentially targets $\eta_1,\dots,\eta_T$, because all terms in (22) can be rewritten in terms of expectations with respect to those densities. Thus, NS-SMC can be viewed as an extension of the rare-event SMC (multilevel splitting) method of Cérou et al (2012), which uses density sequences of the form (21) in order to estimate the probability (normalizing constant) $P_T$. For ease of presentation, below we use shorthand notation analogous to (17); for example, instead of $\sum_{k=1}^N W^k_t\varphi(X^k_t)$, we write $\eta^N_t(\varphi)$. Noting that $Z_t=P_t\,\eta_t(L\,\mathbb{I}_{\breve E_t})$, we have
$$\pi(\varphi)=\sum_{t=1}^{T}\frac{Z_t}{Z}\,\pi_t(\varphi)=\sum_{t=1}^{T}\frac{Z_t}{Z}\,\frac{\eta_t(L\,\mathbb{I}_{\breve E_t}\varphi)}{\eta_t(L\,\mathbb{I}_{\breve E_t})}, \tag{23}$$
which is estimated via
$$\pi^N(\varphi)=\sum_{t=1}^{T}\frac{\hat Z_t}{\sum_{s=1}^{T}\hat Z_s}\cdot\frac{\eta^N_t(L\,\mathbb{I}_{\breve E_t}\varphi)}{\eta^N_t(L\,\mathbb{I}_{\breve E_t})}. \tag{24}$$
Note that
$$\hat Z_t\,\frac{\eta^N_t(L\,\mathbb{I}_{\breve E_t}\varphi)}{\eta^N_t(L\,\mathbb{I}_{\breve E_t})}=\hat P_t\,\eta^N_t(L\,\mathbb{I}_{\breve E_t})\,\frac{\eta^N_t(L\,\mathbb{I}_{\breve E_t}\varphi)}{\eta^N_t(L\,\mathbb{I}_{\breve E_t})}=\hat P_t\,\eta^N_t(L\,\mathbb{I}_{\breve E_t}\varphi)=\sum_{k=1}^{N}\hat P_t\,W^k_t\,L(X^k_t)\,\mathbb{I}\{X^k_t\in\breve E_t\}\,\varphi(X^k_t). \tag{25}$$
The final equality above implies that reweighting with respect to the (full) posterior requires that each particle targeting $\eta_t$ at iteration $t$ be assigned the weight
$$\breve w^k_t=\hat P_t\,W^k_t\,L(X^k_t)\,\mathbb{I}\{X^k_t\in\breve E_t\}.$$
In turn, we have that
$$\pi^N(\varphi)=\sum_{t=1}^{T}\sum_{k=1}^{N}\breve W^k_t\,\varphi(X^k_t),\quad \breve W^k_t=\frac{\breve w^k_t}{\sum_{t=1}^{T}\sum_{k=1}^{N}\breve w^k_t}, \tag{26}$$
is an estimator of $\pi(\varphi)$. Pseudocode for this version of NS-SMC is given in Algorithm 3. We call this version Fixed NS-SMC (as opposed to adaptive), as one specifies the threshold schedule a priori. Note that resampling occurs at each iteration in order to avoid wasting computational effort moving particles with zero weight. The issue of how to appropriately set the thresholds will be addressed shortly. However, for now we return to the concerns with NS outlined earlier in Section 3, and note how they are addressed by NS-SMC:

1. Assumption of Independent Samples.
NS-SMC has no requirement that the samples be independent. Moreover, the unbiasedness and consistency properties of Fixed NS-SMC are established in the appendix via a straightforward application of Feynman-Kac formalism.

2. Effect of Quadrature on Posterior Inferences. All errors in NS-SMC are solely Monte Carlo errors. The error analogous to that of NS in estimating $\pi(\varphi)$ is more natural, and occurs as the result of the error in the estimated ratios $\hat Z_t/Z_t$ for $t=1,\dots,T$.

3. Parallelization. NS-SMC is easily parallelizable without any further modification. After resampling, the move step, which is often the most computationally intensive, can be parallelized at the particle level.

4. Truncation Error. NS-SMC commits no truncation error, as the final density accounts for the interval which is omitted from the NS quadrature. However, it is important to note that the choice of the final threshold will still have an effect on the variance of NS-SMC. Nevertheless, the absence of truncation error is a key factor in allowing NS-SMC to obtain unbiased estimates of $Z$.

5.1 Adaptive NS-SMC (ANS-SMC)

Generally one does not have a good idea of a threshold schedule that will perform well. In a similar manner to adaptive TA-SMC, at each iteration in Algorithm 3 we can replace $l_{t+1}$ with a random threshold $L_{t+1}$ that is chosen adaptively. Ensuring an estimated ESS for $\eta_{t+1}$ of at least $\rho N$ simply reduces to choosing $L_{t+1}$ to be the empirical $(1-\rho)$ quantile of the values $L(X^1_t),\dots,L(X^N_t)$. Such a choice in NS-SMC results in the online specification of both $T$ and the thresholds. While $\rho$ is analogous to $\alpha$, we use this notation as it is common in adaptive multilevel splitting algorithms (Botev and Kroese, 2008, 2012; Cérou et al, 2012), where $\rho$ is interpreted as the proportion of particles that one desires to lie above each successive adaptively chosen threshold. For NS-SMC, $\rho$ can be interpreted as the desired proportion of samples with non-zero weight for $\eta_{t+1}$. As with NS, ANS-SMC also requires that the iteration at which termination occurs be determined online in some manner.
The termination criterion/procedure we use compares the evidence estimate after an iteration with the estimate that would be obtained by instead terminating at that iteration. At each iteration, after computing $\hat Z_t$, we compare the ratio of the two evidence estimates, and if it is greater than some value close to one, we instead set the current density to be the final one, and declare $T=t$. In our examples, we found this choice to be suitable.

Remark 2 For a given adaptive choice of the next threshold $L_{t+1}$, experiments indicate that there is considerably less bias (particularly for small $N$) in the estimates of $Z$ under the alternative assignment of the adaptively chosen quantities described in the original formulation.

5.2 Improved NS

The purpose of this section is to demonstrate that NS-SMC is not simply the Nested Sampling idea with an SMC twist, but that the original NS algorithm with MCMC is a variant of NS-SMC (with a different, yet suboptimal, choice of weights). We follow the original NS sampling scheme more closely and derive an SMC estimator using a similar two-branched importance sampling scheme, as illustrated in Figure 1. Specifically, we choose the sequence of distributions adaptively so that only one particle lies below the next threshold, and conduct our move step in a similar manner to NS. Just as Algorithm 3 can be viewed as an extension of the rare-event SMC algorithm of Cérou et al (2012), the more direct variant of NS we describe here can be viewed as an extension of the static Last Particle Method (LPM) for rare-event simulation of Guyader et al (2011). Unfortunately, there is a lack of theoretical results for the LPM in the setting where MCMC is used (due mainly to the special type of move step, outlined shortly). We call this method Improved Nested Sampling (INS). The sampling scheme is identical for NS and INS, and thus one can obtain both estimates from the same nested sampling run. Somewhat surprisingly, provided the filling in procedure is used, the NS and INS estimators of model evidence and posterior quantities also become identical as $N\to\infty$.
This provides insight into why NS seems to perform well in practice despite violation of the independence assumption that underlies its quadrature. INS is a modified version of ANS-SMC with the following differences:

1. We enforce, for iterations $t<T$, that a single particle has non-zero incremental weight for $\pi_t$. That is, like NS, we have only one particle that does not have support on the next constrained version of the prior.

2. We conduct the resampling and mutation step in a manner that ensures that MCMC is only required to replenish the "worst-performing particle".

Unfortunately, the adaptive threshold choice alone does not always ensure the first property above, which requires that all particles correspond to a unique value of $L$. In discrete settings it is common for some particles to have the same value of $L$. However, even if the likelihood values are initially distinct, there may still be duplicate particles if there is a non-zero probability that the MCMC kernel will return the same point (as is the case in Metropolis-Hastings MCMC). The solution is reasonably straightforward. We employ auxiliary variables in a similar manner to Murray (2007, pp. 96-98), who proposes a variant of NS that can be applied to discrete spaces. A similar approach is used in Cérou and Guyader (2016) to break ties in the theoretical analysis of adaptive multilevel splitting. For brevity, we assume the aforementioned condition that all likelihood values are distinct, which is typically the case for continuous problems. However, this condition excludes certain cases of what Skilling (2006) refers to as "degenerate likelihoods". Under this assumption, the approach about to be described is entirely implicit if one does not consider any auxiliary variables, ignores any duplicate particles, and just moves a single particle at each iteration, possibly yielding the same threshold value for multiple iterations. However, in discrete cases, one must consider the extended space explicitly and conduct the move step differently; see Murray et al (2005) and Murray (2007).
We extend the space from $E$ to $E\times(0,1)$ via a uniformly distributed auxiliary variable $U$. That is, we have
$$\eta(x,u)=\eta(x)\,\mathbb{I}\{0<u<1\}.$$
In this setting, define the augmented threshold schedule
$$(l_1,v_1)=(-\infty,0)<(l_2,v_2)<\dots<(l_T,v_T)<(l_{T+1},v_{T+1})=(\infty,1),$$
where $(l,v)<(l',v')$ is to be understood lexicographically: either $l<l'$, or both $l=l'$ and $v<v'$. Applying a derivation similar to that of NS-SMC given earlier in this section, we have the sets
$$E_t=\big\{(x,u)\in E\times(0,1):L(x)>l_t,\ \text{or}\ L(x)=l_t\ \text{and}\ u>v_t\big\},$$
and the densities
$$\eta_t(x,u)\propto\eta(x,u)\,\mathbb{I}\{(x,u)\in E_t\},\quad\text{and}\quad \pi_t(x,u)\propto\pi(x,u)\,\mathbb{I}\{(x,u)\in\breve E_t\},\quad t=1,\dots,T.$$
Note that this setup ensures that the sets $E_t$ are nested and that the sets $\breve E_t$ form a partition. Prior to demonstrating how INS relates to NS, we stress the following. Due to the special type of mutation step, the algorithm falls outside of the standard SMC sampler framework (which requires the same forward kernel for all particles). Despite this, we continue to use incremental weight functions of the form (16). While this choice seems to be a natural one (and matches the approach used in the LPM), it implicitly assumes that the INS procedure produces a population of particles that are marginally distributed according to the (adaptively chosen) $\eta_t$ at each time step. This may only hold approximately in practice, and even establishing that the property holds as $N\to\infty$ is difficult, due to the complicated combination of adaptively chosen distributions and the non-standard mutation step. Nevertheless, we present the method for the purpose of making clear the connection of NS with the NS-SMC framework. Moreover, we point out that while our assumption on the marginal distribution of the particles at each iteration is a strong one, it is substantially weaker than that required in the original formulation of NS, which assumes not only the same condition on the marginal distributions of the particles, but also that the particles are independent. Recall that both of these conditions are required for the property (8) to hold. With the above in mind, we sketch the key aspects of INS below.

Adaptive Choice of Densities.
At each (non–final) iteration, we determine the next densities adaptively (via the choice of the next threshold parameters Lt and Vt) as follows. First, we set . Next, denote the indices of the particles satisfying by . We “break ties” by choosing . Reweighting. Importance sampling takes place for and with the incremental weight functions and , respectively. By construction, we will have samples with non-zero incremental weight for , giving ˆPt = ((N−1)/N)^{t−1} · (N−1)/N = ((N−1)/N)^t, where the first factor is ˆPt−1 and the second is ηNt−1(IEt). Similarly, only one particle, denoted ˘Xt−1, will have non–zero incremental weight for (and thus non–zero weight for ), so we have ˆZt−1 = ((N−1)/N)^{t−1} · (1/N) L(˘Xt−1), (28) where the first factor is ˆPt−1 and the second is ηNt−1(LI˘Et−1). Note that (28) is not only but also precisely the weight of with respect to as in (25). Recall that this is also the case with NS. Resampling and Mutation. We resample according to a residual scheme, and reset all weights to . As we have samples with equal (non-zero) weight, residual resampling will result in unique particle positions , as well as a final particle that is a copy of one of the others. The mutation step is as follows. We begin by applying the identity map to all particles except the –th one, moving to . Then, we perform the following two –invariant moves in sequence to move to . First, we move to by applying some fixed number of iterations of an –invariant MCMC kernel. Note that ηt(x|u) ∝ η(x)I{L(x) ≥ Lt} if u > Vt, and ηt(x|u) ∝ η(x)I{L(x) > Lt} if u ≤ Vt, so this is simply sampling from a constrained version of η as in standard NS or NS–SMC. Next, we draw a new position u according to ηt(u|x) ∝ I{0 < u < 1} if L(x) > Lt, and ηt(u|x) ∝ I{Vt < u < 1} if L(x) = Lt. Final Iteration. The reweighting and mutation steps continue up until a termination criterion is satisfied. At this point, we declare , and set the final threshold parameters , . Here, all samples will have non–zero incremental weight for , and we have ˆZT = ((N−1)/N)^T · (1/N) ∑_{k=1}^{N} L(X^k_T). (29) Note that the above normalizing constant estimator bears similarity to the “filling in” heuristic in NS.
However, here it arises naturally as a final step, and uses to estimate , as opposed to . Despite this similarity, it still appears that (28) is distinct from its analogous term in NS. However, by means of some simple algebraic manipulation, we obtain the identity ((N−1)/N)^{t−1} · (1/N) = ((N−1)/N)^{t−1} − ((N−1)/N)^t, (30) resembling precisely the Riemann sum quadrature rule, with the choice . The most notable aspect of this is that at no stage in the derivation of INS did we require the property given in (8), which would require samples to be independent. We give this version of NS the “improved” moniker as the alternative choices for have been found to yield superior estimators of when compared to those proposed originally by Skilling (2006). Guyader et al (2011) show (under the same idealized sampling assumption as NS) that the LPM estimators are unbiased estimators of . Further to this, Walter (2017, Remark 1) shows that this estimator results in superior estimates over in terms of variance so long as . In light of this, Walter suggests using Riemann sum quadrature with these alternative values for , as it will result in a superior NS estimator. The final piece of the puzzle connecting NS with INS and the overall NS–SMC framework is that as we have that , so NS’s weights become equal to those of INS. We give a simple illustration of this convergence in Figure 2, where we plot the ratio of (30) to the standard NS Riemann sum / trapezoidal rule terms after iterations of NS for different (NS gives identical estimates for this choice, regardless of ). The convergence will be slower for larger . This provides some insight as to why NS seems to deliver correct results in practice, even when particles are far from independent (as will soon be demonstrated numerically). In essence, NS is an adaptive SMC algorithm on an extended space, where the choice of weights is, in a sense, sub–optimal.
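Both the identity (30) and the limiting agreement between INS and NS weights are easy to check numerically. The sketch below is our own (not from the paper), and assumes the standard NS choice Xt = exp(−t/N) for the Riemann-sum comparison:

```python
import math

def ins_term(N, t):
    # Incremental INS weight: ((N-1)/N)^(t-1) * (1/N)
    return ((N - 1) / N) ** (t - 1) / N

def ins_diff(N, t):
    # The same quantity written as a difference of successive powers, as in (30)
    return ((N - 1) / N) ** (t - 1) - ((N - 1) / N) ** t

def ns_riemann_term(N, t):
    # Standard NS Riemann-sum term with X_t = exp(-t/N)
    return math.exp(-(t - 1) / N) - math.exp(-t / N)

# The algebraic identity (30) holds exactly (up to float rounding):
for N in (2, 10, 100):
    for t in (1, 5, 50):
        assert math.isclose(ins_term(N, t), ins_diff(N, t))

# The ratio of INS to NS terms tends to 1 as N grows (t fixed):
ratios = [ins_term(N, 3) / ns_riemann_term(N, 3) for N in (10, 100, 10000)]
print(ratios)  # each successive ratio is closer to 1
```

The first loop confirms (30) is an exact rearrangement; the ratio sequence illustrates the convergence plotted in Figure 2.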
5.3 Phase Transition Example In order to compare the different variants of NS–SMC and NS, we shall use a phase transition example. Nested sampling is robust to models that contain phase transitions, i.e., models for which the graph of against
https://btkinhibitor.com/2019/06/18/15892/
# Ce (but, e.g., see Ovaskainen et al. 2010; Steele et al. 2011), hence limiting our understanding of species interaction and association networks. In this study, we present a new method for examining and visualizing multiple pairwise associations within diverse assemblages. Our approach goes beyond examining the identity of species or the presence of associations in an assemblage by identifying the sign and quantifying the strength of associations between species. Furthermore, it establishes the direction of associations, in the sense of which individual species tends to predict the presence of another. This additional information enables assessments of the mechanisms giving rise to observed patterns of co-occurrence, which many authors have suggested is a key knowledge gap (reviewed by Bascompte 2010). We demonstrate the value of our approach using a case study of bird assemblages in Australian temperate woodlands. This is one of the most heavily modified ecosystems worldwide, where understanding changes in assemblage composition is of major interest (Lindenmayer et al. 2010). We use an extensive longitudinal dataset gathered from more than a decade of repeated surveys of birds on 199 patches of remnant native woodland (remnants) and of revegetated woodland (plantings). To demonstrate the value of our method, we first assess the co-occurrence patterns of species in remnants and then contrast these with the patterns in plantings.
Our new approach has wide applications for quantifying species associations within an assemblage, examining questions related to why certain species occur with others, and how their associations can determine the structure and composition of whole assemblages.

The directional odds ratio is a measure of how useful the second species is as an indicator of the presence of the first (or as an indicator of absence, if the odds ratio is less than 1). An odds ratio is more appropriate than either a probability ratio or difference because it takes account of the restricted range of percentages (0–100%): any given value of an odds ratio approximates to a multiplicative effect on rare percentages of presence, and equally on rare percentages of absence, and cannot give invalid percentages when applied to any baseline value. In addition, such an application to a baseline percentage is straightforward, giving a readily interpretable effect in terms of change in percentage presence. This pair of odds ratios is also more appropriate for our purposes than a single odds ratio, calculated as above for either species as first but with the denominator being the odds of the first species occurring when the second does not. That ratio is symmetric (it gives the same result whichever species is taken first) and does not take account of how common or rare each species is (see below), and hence the potential usefulness of one species as a predictor of the other. For the illustrative example in Table 1, our odds ratio for indication of Species A by Species B is (15/5)/(50/50) = 3 and of B by A is (15/35)/(20/80) = 1.71. These correspond to an increase in presence from 50% to 75% for Species A, if Species B is known to occur, but only an increase from 20% to 30% for Species B if Species A is known to occur. The symmetric odds ratio is (15/5)/(35/45) = (15/35)/(5/45) = 3.86, which gives the same value to both of these increases.
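The worked numbers above can be reproduced directly. The following sketch uses our reading of the Table 1 counts (A and B co-occur in 15 sites, A without B in 35, B without A in 5, neither in 45); the function names are ours, not the authors':

```python
def odds(present, absent):
    """Odds of presence from counts of presence and absence."""
    return present / absent

def directional_or(both, first_only, second_only, neither):
    """Odds ratio for indication of the first species by the second:
    odds of the first given the second is present, over its baseline odds."""
    given_second = odds(both, second_only)                     # first | second present
    baseline = odds(both + first_only, second_only + neither)  # first overall
    return given_second / baseline

def symmetric_or(both, first_only, second_only, neither):
    """Classic cross-product odds ratio (same whichever species is first)."""
    return (both * neither) / (first_only * second_only)

# Counts: A&B = 15, A only = 35, B only = 5, neither = 45 (100 sites total)
a_by_b = directional_or(15, 35, 5, 45)   # indication of A by B
b_by_a = directional_or(15, 5, 35, 45)   # indication of B by A
sym = symmetric_or(15, 35, 5, 45)
print(round(a_by_b, 2), round(b_by_a, 2), round(sym, 2))  # 3.0 1.71 3.86
```

Note the asymmetry: the same pair of species gives 3 in one direction but 1.71 in the other, which is exactly the information the symmetric ratio (3.86) discards.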
For the purposes of this study, we interpret an odds ratio greater than 3 or less than 1/3 as indicating an ecologically “substantial” association. This is inevitably an arb.
https://tex.stackexchange.com/questions/8419/verbatim-useable-with-a-newenvironment-definition?noredirect=1
# verbatim useable with a newenvironment definition?

I'm trying to create a simple newenvironment allowing me to display source code in a verbatim text block with a caption, numbering etc. The solution I'm after is with the in-built LaTeX verbatim environment. I would have used the fancyvrb package, that seems to allow that kind of thing, but I can't update my MiKTeX installation due to proxy authentication issues. Instead I'm after a way to do it with just the built in verbatim environment. When I get home, I can adapt it to use fancyvrb (so I am still interested in answers relating to that), but for now, I'm after a temporary solution to get me through the day. ;^) Here's what I imagine the newenvironment command would be like:

\usepackage{float}
\floatstyle{boxed}
\newfloat{program}{h}{lop}
\floatname{program}{Example}
\newenvironment{example}[2]{%
  \begin{program}%
  \label{#1}%
  \caption{#2}%
  \scriptsize
  \begin{verbatim}%
}{%
  \end{verbatim}%
  \end{program}%
}

It fails later on, complaining about overfull \hboxes, which I assume means that it's still trying to typeset the rest of my document as though it were in verbatim mode. Can I use the verbatim environment in this way? Got a workaround (given that my MiKTeX package repo is pretty small)?

This works for me.

\documentclass{article}
\usepackage{verbatim}
\usepackage{float}
\floatstyle{boxed}
\newfloat{program}{h}{lop}
\floatname{program}{Example}
\newbox\examplebox
\newenvironment{example}[2]{%
  \program
  \caption{#2}%
  \label{#1}%
  \verbatim
}{%
  \endverbatim
  \vskip-.7\baselineskip
  \endprogram
}
\begin{document}
\begin{example}{foo}{caption}
foo bar baz
\end{example}
\end{document}

Note that you need to have the \label after the \caption. The \verbatim macro as reimplemented by the verbatim package seems to be looking for \end{foo} where foo is the name of the current environment. This is why it uses \program, \verbatim, \endverbatim, and \endprogram. The \vskip isn't necessary, but you end up with extra space without it.
• Could somebody tell me what this line does? \newbox\examplebox and where are these commands defined? – Matthias Nov 21 '11 at 15:00
https://au.mathworks.com/help/physmod/hydro/ug/energy-flow-calculations.html
## Energy Flows in Thermal Liquid Networks

### Upwind Energy Scheme

Thermal energy advection should always match the direction of flow. To satisfy this condition, the Thermal Liquid domain employs a modeling framework known as the upwind scheme. This scheme works by matching the specific internal energy at a port to that just upwind of the port: • If the flow at a port is directed into the component, the specific internal energy at that port is equal to that carried by the incoming fluid. In the figure, the specific internal energy at port A of component 2 is equal to that at port B of component 1. • If the fluid flow at a port is directed out of the component, the specific internal energy at that port is equal to that in the component’s internal fluid volume. In the figure, the specific internal energy at port A of component 2 is equal to that in the internal fluid volume of the same component. The upwind scheme describes the advection thermal energy flow rate using a conditional expression that, for port A of some component, reads: `${\Phi }_{A}^{Ad}=\left\{\begin{array}{cc}{\stackrel{˙}{m}}_{A}{u}_{A},& {\stackrel{˙}{m}}_{A}>0\\ {\stackrel{˙}{m}}_{A}{u}_{I},& {\stackrel{˙}{m}}_{A}\le 0\end{array},$` where: • ΦAAd is the thermal energy flow rate due to advection through port A. • ${\stackrel{˙}{m}}_{A}$ is the mass flow rate through port A. • uA is the specific internal energy upwind of port A. • uI is the specific internal energy in the internal fluid volume. Without numerical smoothing, this expression introduces a number of computational challenges.
It adds a slope discontinuity to the thermal energy flux at zero mass flow rates that makes the model more prone to simulation errors. It also adds a jump discontinuity to the specific internal energy and makes its value ill-defined during flow reversals. Upwind Energy Scheme without Smoothing ### Numerical Smoothing The Thermal Liquid domain smoothes the numerical discontinuities introduced by the upwind energy scheme by adding to the thermal energy flow rate a thermal conduction term. At port A of some component: `${\Phi }_{A}^{Th}=\left\{\begin{array}{cc}{\stackrel{˙}{m}}_{A}{u}_{A}+G\left({u}_{A}-{u}_{I}\right),& {\stackrel{˙}{m}}_{A}>0\\ {\stackrel{˙}{m}}_{A}{u}_{I}+G\left({u}_{A}-{u}_{I}\right),& {\stackrel{˙}{m}}_{A}\le 0\end{array},$` where: • ΦATh is the thermal energy flow rate due to advection through port A and conduction between port A and the internal fluid volume. • G is a thermal conductance coefficient computed from the fluid properties and component geometry. Rewriting the modified expression in a more convenient form: `${\Phi }_{A}^{Th}=\left\{\begin{array}{cc}\left({\stackrel{˙}{m}}_{A}+G\right){u}_{A}-G{u}_{I},& {\stackrel{˙}{m}}_{A}>0\\ G{u}_{A}+\left({\stackrel{˙}{m}}_{A}-G\right){u}_{I},& {\stackrel{˙}{m}}_{A}\le 0\end{array}$` Using `min` and `max` functions to collapse the conditional expression: `${\Phi }_{A}^{Th}=\left[\mathrm{max}\left({\stackrel{˙}{m}}_{A},0\right)+G\right]{u}_{A}+\left[\mathrm{min}\left({\stackrel{˙}{m}}_{A},0\right)-G\right]{u}_{I}$` Approximating the collapsed expression as a numerically smooth expression: `${\Phi }_{A}^{Th}\approx \frac{{\stackrel{˙}{m}}_{A}+\sqrt{{\stackrel{˙}{m}}_{A}^{2}+4{G}^{2}}}{2}{u}_{A}+\frac{{\stackrel{˙}{m}}_{A}-\sqrt{{\stackrel{˙}{m}}_{A}^{2}+4{G}^{2}}}{2}{u}_{I}$` The end expression removes the slope discontinuity from the thermal energy flow rate curve and the jump discontinuity from the specific internal energy curve. 
The thermal conductance coefficient determines the amount of smoothing applied in both cases: `$G=\frac{kS}{{c}_{v}L},$` where: • k is the thermal conductivity defined in the Thermal Liquid Settings (TL) block. • S is the flow cross-sectional area at the port considered. • cv is the specific heat defined in the Thermal Liquid Settings (TL) block. • L is a characteristic distance between the port and the component’s fluid volume. Upwind Energy Scheme with Smoothing ### Total Energy Flow Rate The Through variable of the Thermal Liquid domain is the total energy flow rate. This variable is defined in terms of the smoothed upwind thermal energy flow rate as: `${\Phi }_{A}={\Phi }_{A}^{Th}+\frac{p}{\rho },$` where: • ΦA is the total energy flow rate through port A. • ΦATh is the thermal energy flow rate through port A computed from the smoothed upwind energy scheme. • p is the absolute pressure at port A. • ρ is the fluid density at port A. The kinetic energy contribution to the total energy flow rate is assumed negligible in Thermal Liquid networks and is ignored in this domain.
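As a quick sanity check of the smoothed upwind expression, the following sketch (our own Python, not MathWorks code) verifies its two limiting behaviours: at zero mass flow rate the flux reduces to pure conduction G(uA − uI), and far from zero it closely matches the unsmoothed upwind form, since advection then dominates the conduction correction.

```python
import math

def smoothed_flux(mdot, uA, uI, G):
    """Numerically smooth upwind advection + conduction flux."""
    s = math.sqrt(mdot**2 + 4 * G**2)
    return (mdot + s) / 2 * uA + (mdot - s) / 2 * uI

def raw_upwind_flux(mdot, uA, uI, G):
    """Unsmoothed conditional form: advection of the upwind energy, plus conduction."""
    adv = mdot * (uA if mdot > 0 else uI)
    return adv + G * (uA - uI)

G, uA, uI = 0.5, 300.0, 250.0
# At mdot = 0 the smoothed flux is pure conduction: sqrt term is 2G exactly.
assert math.isclose(smoothed_flux(0.0, uA, uI, G), G * (uA - uI))
# Far from zero, both forms agree closely in relative terms:
for mdot in (500.0, -500.0):
    assert math.isclose(smoothed_flux(mdot, uA, uI, G),
                        raw_upwind_flux(mdot, uA, uI, G), rel_tol=1e-3)
print(smoothed_flux(0.0, uA, uI, G))  # 25.0
```

The raw form has a slope discontinuity at mdot = 0; the smoothed form is differentiable everywhere, which is exactly the property the solver needs during flow reversals.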
http://math.stackexchange.com/questions/18420/double-limit-of-a-series-summation?answertab=oldest
double limit of a series summation There is a sequence $a_n$ with $a_n \to 0$ as $n \to \infty$. Let $S_N = \sum_{n=0}^{N} a_n$, and suppose the sequence $S_N$ converges to a limit $l$ where $l \in \mathbb{R}$. What is $\lim_{K\to\infty}\lim_{N\to\infty}\sum_{n=K}^{N} a_n$ ? - $l = \lim_{N\to\infty}\sum_{n=0}^{N} a_n$. Therefore $\lim_{N\to\infty}\sum_{n=K}^{N} a_n = l - \sum_{n=0}^{K-1} a_n = l - S_{K-1}$. But since the sequence converges to $l$, $\lim_{K\to\infty} |S_{K} - l| = 0$. Hence $\lim_{K\to\infty}\lim_{N\to\infty}\sum_{n=K}^{N} a_n = \lim_{K\to\infty}(l - S_{K-1}) = 0$.
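A concrete check of this argument, with $a_n = 2^{-n}$ so that $l = 2$: the inner limit gives the tail sum $l - S_{K-1} = 2^{1-K}$, which tends to $0$ as $K \to \infty$. A small sketch (our own, approximating the inner limit by a long partial sum):

```python
def tail_sum(K, terms=60):
    """Partial tail sum sum_{n=K}^{K+terms-1} a_n for a_n = 1/2**n (so l = 2)."""
    return sum(2.0 ** -n for n in range(K, K + terms))

# The inner limit (N -> infinity) of the tail starting at K is 2**(1-K):
for K in (1, 5, 10):
    assert abs(tail_sum(K) - 2.0 ** (1 - K)) < 1e-12
# and the outer limit (K -> infinity) of these tails is 0:
tails = [tail_sum(K) for K in (1, 5, 10, 30)]
assert tails == sorted(tails, reverse=True) and tails[-1] < 1e-8
print(tails)
```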
https://stats.stackexchange.com/questions/229543/mean-of-inverse-exponential-distribution?noredirect=1
# Mean of inverse exponential distribution Given a random variable $Y = Exp(\lambda)$, what is the mean and variance of $G=\dfrac{1}{Y}$ ? I look at the Inverse Gamma Distribution, but the mean and variance are only defined for $\alpha>1$ and $\alpha>2$ respectively... Given that the inverse exponential distribution has $\alpha = 1$, you have stumbled upon the fact that the mean of the inverse exponential is $\infty$. And therefore, the variance of the inverse exponential is undefined. If $G$ is inverse exponentially distributed, $E(G^r)$ exists and is finite for $r < 1$, and $= \infty$ for $r = 1$. • This is linked with my question in here – Diogo Santos Aug 14 '16 at 11:23 I'll show the calculation for the mean of an Exponential distribution so it will recall you the approach. Then, I'll go for the inverse Exponential with the same approach. Given $f_Y(y) = \lambda e^{-\lambda y}$ $E[Y] = \int_0^\infty{yf_Y(y) dy}$ $= \int_0^\infty{y \lambda e^{-\lambda y} dy}$ $= \lambda \int_0^\infty{y e^{-\lambda y} dy}$ Integrating by part (ignore the $\lambda$ in front of the integral for the moment), $u = y, dv=e^{-\lambda y} dy$ $du = dy, v = \frac{-1}{\lambda}e^{-\lambda y}$ $= y \frac{-1}{\lambda}e^{-\lambda y} - \int_0^\infty{ \frac{-1}{\lambda}e^{-\lambda y} dy}$ $= y \frac{-1}{\lambda}e^{-\lambda y} + \frac{1}{\lambda} \int_0^\infty{ e^{-\lambda y} dy}$ $= y \frac{-1}{\lambda}e^{-\lambda y} - \frac{1}{\lambda^2} e^{-\lambda y}$ Multiply by the $\lambda$ in front of the integral, $= - y e^{-\lambda y} - \frac{1}{\lambda} e^{-\lambda y}$ Evaluate for $0$ and $\infty$, $= (0 - 0) - \frac{1}{\lambda} (0 - 1)$ $= \lambda^{-1}$ Which is a known results. For $G = \frac{1}{Y}$, the same logic apply. 
$E[G] = E[\frac{1}{Y}]= \int_0^\infty{\frac{1}{y} f_Y(y) dy}$ $= \int_0^\infty{\frac{1}{y} \lambda e^{-\lambda y} dy}$ $= \lambda \int_0^\infty{\frac{1}{y} e^{-\lambda y} dy}$ The main difference is that for an integration by parts, $u = y^{-1}$ and $du = -y^{-2}\,dy$, so it doesn't help us for $G = \frac{1}{y}$. I think the integral is undefined here. Wolfram Alpha tells me it doesn't converge. http://www.wolframalpha.com/input/?i=integrate+from+0+to+infinity+(1%2Fx)+exp(-x)+dx So the mean doesn't exist for the inverse Exponential, or, equivalently, for the inverse Gamma with $\alpha=1$. The reason is similar for the variance and $\alpha \gt 2$.

• Note that (as Whuber commented on another answer) $\exp(-\lambda y)$ is bounded away from $0$ for $y$ near $0$, and $\int_0^\epsilon \frac{1}{y}\;dy$ diverges for any $\epsilon > 0$, so the integral for $E[G]$ does indeed diverge. – Strants Aug 12 '16 at 16:05

After a quick simulation (in R), it seems that the mean does not exist:

n <- 1000
rates <- c(1, 0.5, 2, 10)
par(mfrow = c(2, 2))
for (rate in rates) {
  plot(cumsum(1 / rexp(n, rate)) / seq(1, n), type = 'l',
       main = paste0("Rate = ", rate),
       xlab = "Sample size", ylab = "Empirical Mean")
}

For the sake of comparison, here is what happens with a genuine exponential random variable.

• The mean cannot exist because the exponential has positive density in any neighborhood of zero. – whuber Aug 12 '16 at 15:45
• @whuber indeed, this is what I tried to stress: the empirical mean does not converge for the inverse of an exponential law, whereas it does for an exponential law. – RUser4512 Aug 12 '16 at 15:51
• Yes, but (1) from the fact I quoted, the conclusion of no expectation is immediately obvious and (2) no amount of simulation can do any more than suggest that an expectation might be undefined. For instance, if one were to truncate the exponential at a lower limit of $10^{-1000}$, its inverse would indeed have a finite expectation, but your simulations would not look any different.
Therefore the simple observation (1) would appear to be much more informative and reliable than the simulations. – whuber Aug 12 '16 at 15:56 • – kjetil b halvorsen Oct 9 '19 at 7:28
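whuber's observation can also be checked deterministically rather than by simulation: near zero the integrand of $E[G]$ behaves like $1/y$, so the truncated integral grows like the log of the cutoff ratio and therefore diverges as the cutoff shrinks. A stdlib-only sketch (our own code, with a hypothetical helper name):

```python
import math

def truncated_mean_integral(eps, lam=1.0, steps=200000):
    """Midpoint-rule approximation of integral_eps^1 (1/y) * lam * exp(-lam*y) dy."""
    h = (1.0 - eps) / steps
    total = 0.0
    for i in range(steps):
        y = eps + (i + 0.5) * h
        total += (1.0 / y) * lam * math.exp(-lam * y) * h
    return total

# Shrinking the lower cutoff by a factor of 100 adds roughly log(100) to the
# integral, because exp(-lam*y) ~ 1 near 0 -- so the integral diverges as eps -> 0.
i2 = truncated_mean_integral(1e-2)
i4 = truncated_mean_integral(1e-4)
growth = i4 - i2
print(growth)  # close to log(100) ~ 4.6
```

Unlike the Monte Carlo plots, this check distinguishes true divergence from a merely huge-but-finite mean only up to the cutoffs tried, but the logarithmic growth pattern is exactly what the $\int 1/y\,dy$ argument predicts.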
http://nrich.maths.org/5585/index
### Arrow Arithmetic 1 The first part of an investigation into how to represent numbers using geometric transformations that ultimately leads us to discover numbers not on the number line. ### Arrow Arithmetic 3 How can you use twizzles to multiply and divide? ### Twizzle Arithmetic Arrow arithmetic, but with a twist. # Arrow Arithmetic 2 ##### Stage: 4 Challenge Level: This problem follows on from Arrow Arithmetic 1. We're trying to find a way of drawing numbers that will also help us draw not just the number, but also make sense of number operations such as addition and multiplication. In Arrow Arithmetic 1, we tried using a single arrow to represent a number. This has some features we desire since we can draw long arrows to represent big numbers and short arrows for small numbers. The direction of the arrow should allow us to distinguish positive from negative numbers. But as we saw, there is a problem with using a single arrow, which is captured in this question: 'How long does an arrow need to be to represent the number 1, and which way should it point?' Once we answer this question by choosing a unit arrow, it becomes very easy to represent other numbers with arrows. 2 is an arrow twice as long as the unit arrow, pointing the same way. -3 is an arrow three times as long as the unit arrow, pointing in the opposite direction. These thoughts led me to the idea of using two arrows to picture a number. I'm going to call this pair of arrows a twizzle. The thicker arrow in the twizzle is the unit arrow - which we can choose to be any length we like and to point in any direction. The other arrow in the twizzle is the number arrow. It determines the value of the twizzle by comparison to the unit arrow. Play with this animation until you understand how it works, and how the twizzle represents a number. Check with the hints to ensure you have discovered all the ways to manipulate twizzles.
In the next animation I am combining the blue and green twizzles together to make a grey twizzle. I now have two questions for you: How is the value of the grey twizzle related to the value of the blue and green twizzles? How would you construct the grey twizzle from the blue and green twizzles using rotations, translations, and enlargements? How would the value of a twizzle change if you rotated its number arrow through 180 degrees? Explain how you could use this fact to construct twizzle subtraction.
http://tex.stackexchange.com/questions/12112/tikz-option-for-command-or-environment?answertab=active
# TikZ option for command or environment I am learning TikZ. I am wondering where I can find information about the options of a command or environment. For example, for \draw, you can use \draw[step=0.5], but it also has \draw[xstep=0.5,ystep=0.7]. For \begin{tikzpicture}[scale=3], there are also many other options. I am reading the manual pgfmanual.pdf but haven't finished reading it yet, and it seems that it doesn't introduce all options for a command or environment. - This question doesn't really have a clearly defined answer. If you are learning PGF/TikZ, perusing the examples on TeXample may be very enlightening to you, as it was to me. – Richard Terrett Feb 26 '11 at 10:27 With the pgfmanual it is sometimes hard to know which options are limited to which commands. You can use most options with most commands/environments without getting an error, but sometimes they don't do anything at the wrong location. But I don't think there is a better listing of options around. – Martin Scharrer Feb 26 '11 at 11:48 @eutactic @Martin for a newbie, the problem is that I know very few options for a command except those mentioned in the pgfmanual for that command. Referring to others' examples seems the only useful way. – warem Feb 26 '11 at 14:08 The TikZ manual is a large beast, with a huge amount of information. You have to know where to look. It seems that what you want is mostly Part III "TikZ ist kein Zeichenprogramm", which describes all the tikz commands and, for each of them, lists the basic options with explanations. It is divided into sections; for example, there is a section on drawing paths that describes the \draw command, and it lists and explains the main options that apply to the path. Yes, the pgfmanual is a large beast. I just looked it up hastily and didn't pay attention to the chapter "TikZ ist kein Zeichenprogramm"; what's more, the title is German, so it didn't catch my attention — but its content is English. Thank you. – warem Feb 28 '11 at 2:31
http://psychology.wikia.com/wiki/Cumulative_distribution_function
# Cumulative distribution function In probability theory, the cumulative distribution function (abbreviated cdf) completely describes the probability distribution of a real-valued random variable, X. For every real number x, the cdf is given by $F(x) = \operatorname{P}(X\leq x),$ where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x. The probability that X lies in the interval (a, b] is therefore F(b) − F(a) if a < b. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. Note that in the definition above, the "less or equal" sign '≤' could be replaced with "strictly less" '<'. This would yield a different function, but either of the two functions can be readily derived from the other. The only thing to remember is to stick to one definition, since mixing them will lead to incorrect results. In English-speaking countries the convention that uses the weak inequality (≤) rather than the strict inequality (<) is nearly always used. The "point probability" that X is exactly b can be found as $\operatorname{P}(X=b) = F(b) - \lim_{x \to b^{-}} F(x)$ ## Complementary cumulative distribution function Sometimes it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (CCDF), defined as $F_c(x) = \operatorname{P}(X > x) = 1 - F(x)$. ## Examples As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the cdf is given by F(x) = 0, if x < 0; F(x) = x, if 0 ≤ x ≤ 1; F(x) = 1, if x > 1. For a different example, suppose X takes only the values 0 and 1, with equal probability. Then the cdf is given by F(x) = 0, if x < 0; F(x) = 1/2, if 0 ≤ x < 1; F(x) = 1, if x ≥ 1.
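The two example cdfs above can be transcribed directly into code; a small sketch (the function names are mine):

```python
def uniform_cdf(x):
    """CDF of the uniform distribution on [0, 1]: 0 below, x inside, 1 above."""
    if x < 0:
        return 0.0
    elif x <= 1:
        return float(x)
    return 1.0

def coin_cdf(x):
    """CDF of X taking the values 0 and 1 with equal probability:
    a step of height 1/2 at x = 0 and another at x = 1."""
    if x < 0:
        return 0.0
    elif x < 1:
        return 0.5
    return 1.0
```

Note how `coin_cdf` is right-continuous at its jump points: `coin_cdf(0)` already equals 1/2, matching F(x) = 1/2 for 0 ≤ x < 1.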
## Properties Every cumulative distribution function F is (not necessarily strictly) monotone increasing and continuous from the right (right-continuous). Furthermore, we have $\lim_{x\to -\infty}F(x)=0$ and $\lim_{x\to +\infty}F(x)=1$. Every function with these four properties is a cdf. Almost all cdfs are càdlàg functions. If X is a discrete random variable, then it attains values x1, x2, ... with probability pi = p(xi), and the cdf of X will be discontinuous at the points xi and constant in between: $F(x) = \operatorname{P}(X\leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i)$ If the cdf F of X is continuous, then X is a continuous random variable; if furthermore F is absolutely continuous, then there exists a Lebesgue-integrable function f(x) such that $F(b)-F(a) = \operatorname{P}(a\leq X\leq b) = \int_a^b f(x)\,dx$ for all real numbers a and b. (The first of the two equalities displayed above would not be correct in general if we had not assumed that the distribution is continuous. Continuity of the distribution implies that P(X = a) = P(X = b) = 0, so the difference between "<" and "≤" ceases to matter in this context.) The function f is equal to the derivative of F almost everywhere, and it is called the probability density function of the distribution of X. The Kolmogorov-Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions differ, or whether an empirical distribution differs from an ideal distribution. The closely related Kuiper's test (pronounced /kœypəʁ/, a bit like "Cowper" might be pronounced in English) is useful if the domain of the distribution is cyclic, as in day of the week. For instance, we might use Kuiper's test to see whether the number of tornadoes varies during the year, or whether sales of a product vary by day of the week or day of the month.
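The Kolmogorov-Smirnov statistic mentioned above is simply the largest vertical gap between two empirical cdfs. A pure-Python sketch of that idea (the statistic only, not the full hypothesis test with p-values; function names are mine):

```python
import bisect

def ecdf(sample):
    """Return the empirical cdf F(x) = (# sample points <= x) / n."""
    xs = sorted(sample)
    n = len(xs)
    return lambda x: bisect.bisect_right(xs, x) / n

def ks_statistic(a, b):
    """Largest vertical distance between the empirical cdfs of a and b.
    The maximum gap occurs at a data point, so checking those suffices."""
    fa, fb = ecdf(a), ecdf(b)
    return max(abs(fa(x) - fb(x)) for x in sorted(set(a) | set(b)))
```

Identical samples give a statistic of 0; completely separated samples give 1.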
https://chemistry.stackexchange.com/questions/140202/how-to-read-a-chromatography-calibration-curve
# How to read a chromatography calibration curve? I am a high school student who has been studying a bit of chromatography. I came across a calibration curve during class and I am trying to figure out how to read and use it. I have no clue how to read off the graph, nor do I really know what it is used for. Can someone please explain what information I can extract from this graph, and possibly provide an example? I assume your teacher explained the HPLC separation. If you inject a mixture of four ions, you will get four peaks. Each peak has an area, which is proportional to the concentration of the substance. Imagine you wanted to determine the concentration of chloride ions in your tap water. What you would do is prepare several known concentrations of chloride ions using $$\ce{NaCl}$$ solutions, say $$0$$ to $$\pu{350 ppm}$$ (look at your calibration chart). Inject them one by one, and measure the peak area; let us say the second peak is that of the chloride ion. The peak shape is Gaussian in practice, but you can "estimate" the peaks as triangles. Recall that a triangle's area is very easy to calculate. First step: plot the peak area against concentration. What you get is a calibration curve. The next step is to find a mathematical equation that fits this data. Fortunately, in your case it is linear, of the form $$y=mC$$ where $$y= \text{peak area}$$, $$m=\text{slope}$$, $$C=\text{concentration}$$ in $$\pu{ppm}$$. Now you would inject your tap water into the HPLC; you do not know the concentration ($$C_1$$) but you do know the peak area. The last step is called interpolation. You go from the known area and read off the corresponding value on the concentration axis. Voilà! Now you know the chloride concentration in your tap water! Alternatively, you use the equation $$y=0.1312C$$: I know the area of the tap-water chloride injection; say it was $$\pu{40.0 units}$$, so $$C_1 = 40/0.1312 = \pu{304.9 ppm}$$. In typical tap water it is around $$\pu{100 ppm}$$; I just made up those numbers.
Bonus: Rationalize the units of the peak area on the $$y$$-axis! Why is it conductivity times minutes? • Thank you so much for the help. I'm finally able to get a firmer grasp on this topic! Thank you again for the thorough explanation! – Munchies Sep 15 at 13:05 • @Munchies, did you understand the units of the y-axis? – M. Farooq Sep 15 at 14:19 • Yes, the y-axis indicates the peak area, which is proportional to the concentration of the substance, right? Also, thanks for the help, it's really cleared things up for me! – Munchies Sep 15 at 15:38 Suppose it is a calibration curve of high-performance liquid chromatography (HPLC) used to determine the concentration of chloride ions in unknown solutions. The detector is sensitive to charged ions, which give the peaks, whose areas are proportional to the concentrations of the ions. I assume each peak is highly specific to the ion of interest. First you make a calibration curve using standard $$\ce{Cl-}$$ solutions with known concentrations: The curve is a straight line, which goes through the origin $$(0,0)$$ ($$y = 0.1312x$$) with an excellent correlation, $$R^2 = 0.9995$$ ($$R^2 = 1.0$$ is the best possible correlation). I have inserted an example chromatography trace to explain how the concentration of an ion is related to the area under the relevant peak. When running the standard solution, you'd find the correct retention time ($$t_R$$) for the chloride ion. Once you have prepared your calibration curve, you can run your unknown sample (suppose it contains $$\ce{Cl-}$$ and $$\ce{NO3-}$$ ions) and find the area under the chloride peak (nowadays, all peak areas of the chromatogram are given automatically by the software). Suppose it was $$\pu{22.5 Absorbance/min}$$ as indicated in the image.
Since the equation of the best-fitted straight line is given automatically ($$y = 0.1312x$$), you can substitute the $$y$$-value into it and find the $$x$$-value, which is the concentration in $$\pu{ppm}$$: $$22.5 = 0.1312x \ \Rightarrow \ x = \frac{22.5}{0.1312} \approx \pu{171.5 ppm}$$ Modern technology has made these calculations easy by giving the equation of the calibration curve and the areas under the peaks automatically. In the old days, for example, you had to cut each peak out of the printed chromatogram and weigh it on an analytical balance to correlate peak area with known concentration. Then you would plot the values on graph paper and draw the best-fitting straight line. Once you have the peak-area value for your unknown solution, you mark that value on the $$y$$-axis and draw a line parallel to the $$x$$-axis until it meets the curve. From that crossing point, you draw a line parallel to the $$y$$-axis until it meets the $$x$$-axis (see the maroon lines in the image). The point where this second line meets the $$x$$-axis is the chloride ion concentration in your unknown sample. Note that you cannot find the $$\ce{NO3-}$$ ion concentration in your unknown sample (see the insert) using the same calibration curve. The detector being used might show a different response to nitrate ions than to chloride ions. Therefore, you must make a separate calibration curve for the nitrate ions using standard nitrate solutions. • Thank you for the really thorough explanation! I really appreciate the help!!! – Munchies Sep 15 at 13:07 The line goes according to: $$y = 0.1312~x$$; in other words, the area of the peak is equal to $$0.1312$$ times the concentration of chloride in ppm. This result is extremely useful. If, for example, in a later measurement you obtain a peak whose area is $$y = 35$$, it means that the concentration $$x$$ of chloride ion responsible for this peak is: $$x = 35/0.1312 = 266.8$$ ppm • Thanks for the help!
– Munchies Sep 15 at 13:08 If we take the 1st and last points on your graph we'd see that: • At a concentration of $$\pu{350 ppm}$$ your detector shows a reading of $$45$$ • At a concentration around $$0$$ your detector shows a reading around $$0$$ • So what if the detector shows $$22.5$$ (which is $$0.5\cdot(45-0)$$)? You may conclude that the concentration is $$\pu{175 ppm}$$ (which is $$0.5\cdot(350-0)$$) The calibration curve gives you an understanding of the relationship between the number the detector shows and the concentration, so that when you see some reading you can calculate the concentration. Note that 2 points usually aren't enough, because you don't necessarily have a straight line. So you need more measurements to see whether it really is one. If it's not, you'll have to come up with more complicated equations than $$y=0.1312x$$ and your results will be less precise. • Thank you for the explanation!!! – Munchies Sep 15 at 13:07 • I was wondering why you divide by 2? Sorry I'm really new to this haha – Munchies Sep 15 at 13:18 • @Munchies, I chose a reading of $22.5$, which is halfway between the first and last readings. That means the concentration is halfway along too, so you need to multiply by $0.5$ or divide by $2$ - doesn't matter. I could've chosen another point - e.g. a concentration of $300$, which is about $0.857\cdot(350-0)$; the reading would then be about $(45-0)\cdot 0.857 \approx 38.6$. – Stanislav Bashkyrtsev Sep 15 at 13:22 • Oh right!!! thank you!! – Munchies Sep 15 at 13:47
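The workflow in the answers above (fit $$y = mC$$ to the standards, then invert it for the unknown) is only a few lines of code. A sketch, assuming the zero-intercept model used in the answers; the standard concentrations are made up:

```python
def fit_slope(concentrations, areas):
    """Least-squares slope of y = m*C through the origin:
    m = sum(C*y) / sum(C*C)."""
    num = sum(c * a for c, a in zip(concentrations, areas))
    den = sum(c * c for c in concentrations)
    return num / den

def concentration_from_area(area, slope):
    """Invert the calibration line: C = y / m."""
    return area / slope

# calibration standards (made-up, consistent with the y = 0.1312x line above)
standards = [0, 100, 200, 350]                  # ppm
areas = [0.1312 * c for c in standards]         # measured peak areas
m = fit_slope(standards, areas)                 # recovers ~0.1312
unknown = concentration_from_area(22.5, m)      # ~171.5 ppm, as computed above
```

In practice the fit and the inversion are done by the instrument software; this just makes the arithmetic explicit.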
http://mathoverflow.net/questions/89368/intuition-behind-generic-points-in-a-scheme?answertab=oldest
# Intuition behind generic points in a scheme In a scheme, each point is a generic point of its closure. In particular each closed point is a generic point of itself (the set containing only it), but that's perhaps of little interest. A point that's not closed is probably more interesting, and there is no such thing in ordinary varieties. What I have been wondering is why they are called generic points; there must be a good reason. Here is what I have got so far: • A non-closed generic point is not closed, so it cannot be cut out from the scheme by polynomial equations in any affine patch, and thus it does not possess any extra algebraic property that is not shared by others. • A non-closed generic point is not a specialization of any other point of the scheme. Are these correct? If not, what's the right intuition? Also, can the following statements be made more precise using the language of generic points? • A degree-$n$ and a generic degree-$m$ algebraic curve intersect at $n \cdot m$ distinct points in $\mathbb{P}^2$ (the planar Bezout's theorem) • Common solutions of a system of $n$ polynomials in $n$ variables with generic complex coefficients in $\mathbb{C}^\ast$ are all isolated. (corollary of the Cheater's homotopy theorem) • The number of common isolated solutions of a system of $n$ polynomials in $n$ variables with generic complex coefficients in $\mathbb{C}^\ast$ equals the mixed volume of the Newton polytopes of the system. (Bernshtein's theorem) • Generic points on a nonreduced scheme have isosingular structure (i.e., at all such points the local ring fails to be reduced in exactly the same way). In each case I am familiar with the original meaning of the word generic, but I'm wondering if we can state the genericity conditions using the concept of generic points of schemes. - There are two meanings of a property being generically true: (1) over the generic point and (2) over a nonempty open subset.
One important thing about generic points is that in many cases (1) implies (2). See also mathoverflow.net/questions/28496/… –  user2035 Feb 24 '12 at 6:30 To apply generic points to the first one, we take the ring of projective space, or at least the cone on it: $\mathbb C[x,y,z]$, and then quotient out by a degree-$n$ and degree-$m$ homogeneous polynomial. What are the coefficients? The degree-$n$ polynomial is any one you specify, but the coefficients of the degree $m$ polynomial are new invariants. The scheme of possible polynomials is an affine space. The generic point corresponds to looking at things over the field generated by those invariants. –  Will Sawin Feb 24 '12 at 6:41 I'm not sure you actually will get a nice answer that looks like $n\cdot m$ points or the cone on it, though. –  Will Sawin Feb 24 '12 at 6:42 That is indeed the point of view I would like to explore. Could you give me more detail? How do you form such a "scheme of possible polynomials"? I can see that a degree-$m$ polynomial is determined by a list of coefficients, and the space of such polynomials actually form a projective space since scaling of coefficients does not change the polynomial equation. How does this connect to the scheme you were talking about? Thanks. (P.S., your comment is more like an answer than a comment) –  ssquidd Feb 24 '12 at 16:12 I decided to delete my answer since it may not be getting to the heart of you are after. Although, and I don't mean to be impolite, I'm not quite sure what that is. –  Donu Arapura Feb 24 '12 at 17:09 My informal intuition of a generic point is very close to yours. First how do we distinguish non-generic points? Here is a simple rule: if the coordinates of this point satisfy an algebraic relation, then this point is not generic. In other words, if a point is in the zero set of a nonzero polynomial, then it should not be generic. 
We can turn this on its head and state that a point is generic if it is not contained in the zero set of any nonzero polynomial. If we think in a point-set theoretical way, a generic point is not really a point: once you name a point, it ceases to be generic. As for Bezout's theorem stating that a generic polynomial of degree $n$ has $n$ roots, this should be understood as follows: polynomials in a Zariski open subset of the set of degree-$n$ polynomials have exactly $n$ roots. The complement of this open set is Zariski closed, so it is cut out by a finite number of algebraic equations. For example, for degree-$2$ polynomials $az^2+bz+c$, the condition $b^2- 4ac\neq 0$ describes a Zariski open set of quadratic polynomials with two distinct roots. Note that, measure-theoretically, Zariski closed subsets have (Lebesgue) measure zero, so that, with "probability" $1$, a degree-$n$ polynomial has exactly $n$ roots. For the Bernshtein theorem, the genericity conditions were described explicitly in a related paper by Varchenko. - Sorry, my question was probably not clear: I am familiar with the original meaning of the word generic in each case. I was looking for a "new" way of stating them using the language of generic points of schemes. P.S., Bernshtein's original paper actually stated the genericity conditions in a stronger form (part b of the theorem) using face systems (aka initial-form systems). –  ssquidd Feb 24 '12 at 16:28 My favorite view on generic points is found in Mumford's book Complex Projective Varieties, on page 2. It goes roughly as follows: Definition: Let $k \subset \mathbb{C}$ be a subfield of the complex numbers and $V$ an affine complex variety. A point $x \in V$ is $k$-generic if every polynomial with coefficients in $k$ that vanishes at $x$ vanishes on all of $V$. Proposition: If $\mathbb{C}/k$ has infinite transcendence degree, then every variety $V$ has a $k$-generic point. Proof: Extend $k$ by all coefficients of a finite set of equations for $V$.
Note that $\mathbb{C}/k$ still has infinite transcendence degree. But now $V$ becomes a variety over $k$ in a canonical way. The function field $L$ of $V$ is a $k$-extension of finite transcendence degree, and therefore can be embedded into $\mathbb{C}$. The images of the coordinate functions $X_i \in L$ in $\mathbb{C}$ give the coordinates of a $k$-generic point. For the relation to the generic point $\eta \in V$ from scheme theory, note the following: • If we define a $k$-Zariski topology on $V(\mathbb{C})$ by polynomials over $k$, a $k$-generic point $x$ will be non-closed with closure $V$. • The field $K(x)/k$ generated by the coordinates of $x$ is canonically isomorphic to the function field $L=K(V)$, which is the residue field $K(\eta)$ of the generic point. From the abstract perspective the function field $L=K(\eta)$ is as good as $K(x)$, if not better, because it does not depend on choices. Moreover, $k$-linear algebraic operations can't tell the difference between $\eta$ and $x$. -
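Returning to the quadratic example from the first answer: genericity there is the nonvanishing of the discriminant, and the "two distinct roots" claim is easy to check numerically. A sketch (the function name is mine; distinct roots are counted over the complex numbers):

```python
def distinct_root_count(a, b, c):
    """Number of distinct complex roots of a*z^2 + b*z + c, assuming a != 0.
    The root count drops from 2 to 1 exactly on the closed set b^2 - 4ac = 0."""
    disc = b * b - 4 * a * c
    return 1 if disc == 0 else 2
```

A randomly chosen coefficient triple lands in the Zariski open set `disc != 0` with probability 1, illustrating the measure-theoretic remark above.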
https://www.khanacademy.org/math/mappers/measurement-and-data-192-202/x261c2cc7:line-plots-with-fractions/v/marking-data-on-line-plots
# Graphing data on line plots ## Video transcript - [Voiceover] Let's get a little practice doing the Khan Academy exercise Marking Data on Line Plots. So right over here it says, "Tessa measured the distance from her house to several locations to the nearest half kilometer." All right, and so they have different locations and it tells us the distance from her house to each of these locations. So from her house to the school is 3 1/2 kilometers, from the library is 5 1/2 kilometers, to the park is two kilometers, the movie theater is four kilometers, to the post office, 5 1/2 kilometers. Then they ask us, "How many dots should be on a line plot of these measurements?" So a line plot, and they have one down here, a line plot, this is literally a number line and what we do is we put as many dots there are at a certain point in the line. So if we have two data points at 5 1/2 we would put two dots right over there, but we'll get to that in a second. They're just asking us how many dots should be on the line plot of these measurements? Well, we should make a dot for each of these data points and there are one, two, three, four, five data points here. So there's five data points that I'm going to want to put on my line plot. Then they say, "Create a line plot that shows all of the measurements. Click above the marking on the number line that matches each measurement. Click higher to add more dots above the same marking. Click lower to add fewer dots." So they say that the distance from the school to Tessa's house is 3 1/2 kilometers so we would want to put a dot at 3 1/2 and I do that just by clicking, clicking right over there, 3 1/2. So that's one data point at 3 1/2. Half way between three and four.
Let's do it for the next ones. 5 1/2, that's the distance from the library to her house. 5 1/2, then we have two kilometers. Two kilometers, distance from ... Two kilometers was the distance from the park to her house. Then the movie theater if four kilometers. I wish you could see it all in one screen. Four kilometers. And then the post office is also 5 1/2 kilometers. So you actually have two data points at 5 1/2 kilometers. So right here I've just constructed a line plot. At different points on the number line I'm showing how many data points we have. So we have one data point at two kilometers, one at 3 1/2 kilometers, one at four kilometers, and two at 5 1/2 kilometers. Now notice, we said we would have five dots when we have one, two, three, four, five. One for each data point. Let's do this one more time. Let's check and make sure we got it right. Now let's do one more. All right, measure the length ... Measure the length of each line to the nearest quarter inch to collect data for the line plot below. To make your measurements, drag the ruler on top of the lines. Align the left end of the ruler with the left end of the lines then read the length of each line from the right end of the line. All right, so really I just wanna measure ... I just wanna measure, oh this is ... I didn't mean to tilt it like that. I just wanna measure each of these lines. So this green line right over here looks like it is six and this is split into one, two, three, four. It looks like it's 6 3/4ths. 6 3/4ths, the green line is 6 3/4ths inches long. So we have one data point at 6 3/4ths. So I'll put that right over there. Then we have the red line is ... Let's see, this is 7 2/4ths or 7 1/2. It's halfway between seven and eight. So 7 2/4ths or 7 1/2, so halfway between seven and eight is right over there. Then finally I have the purple line. The purple line, let's see. That is 8 3/4ths inches long. 
Notice I aligned it right at the beginning right over there and it goes all the way to eight and one, two, three out of the four. 8 3/4ths inches long, so let me, 8 3/4ths. So there you go. I have one line that was 6 3/4ths, one line that was 7 2/4ths or 7 1/2 and then one line that's 8 3/4ths inches long. Let's check our answer. Let's do one more. This is actually a lot of fun. All right, we gotta measure things again. So let's see, the green line. Green line is exactly three inches long, three inches long so let's plot it right over there. Then we have the red line is exactly, let's see, 3 3/4ths inches long. 3 3/4ths, let's put a dot right over there. Then the purple line is three inches long as well. So I have another data point at three inches. So two of the lines were three inches and one of them is 3 3/4ths and we are ... And we're done.
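The dot-stacking in the transcript is just counting repeated values. A text-mode sketch of the same idea (the representation is my own choice):

```python
from collections import Counter

def line_plot(measurements):
    """Render a text line plot: one asterisk per measurement,
    stacked next to its value on the number line."""
    counts = Counter(measurements)
    return "\n".join(f"{v}: " + "*" * counts[v] for v in sorted(counts))

# Tessa's five distances, in kilometers
distances = [3.5, 5.5, 2.0, 4.0, 5.5]
print(line_plot(distances))
# prints:
# 2.0: *
# 3.5: *
# 4.0: *
# 5.5: **
```

The two dots at 5.5 correspond to the library and the post office, matching the plot built in the video.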
https://cs.stackexchange.com/questions/27574/how-can-i-compute-the-average-weight-of-an-undirected-graph
# How can I compute the average weight of an undirected graph? Given a weighted, undirected graph $G = (V,E)$, how can I compute the average weight of the edges? It seems an easy problem (divide the total weight by the number of edges!) but I couldn't manage to find the sum of the edge weights, since each edge can be counted several times while iterating through the vertices. If $w(i,j)$ is the weight of edge $ij$ then: $avg = \frac{1}{2|E|}\sum_{i\neq j}w(i,j)$. Why $2|E|$? Because when traversing the vertices you will encounter each edge twice, once as $ij$ and once as $ji$. Since it's the same edge, its weight is added twice, so you need to remove this redundancy by dividing by 2. Alternatively, without dividing by $2$, I would write: $avg = \frac{1}{|E|} \sum_{i} \sum_{j<i} w(i,j)$ as two for loops in code.
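The second formula corresponds directly to two loops with a `j < i`-style guard; a sketch, assuming a dict-of-dicts adjacency representation (my choice) in which each undirected edge is stored twice:

```python
def average_edge_weight(adj):
    """adj maps each vertex to {neighbor: weight}. Because the graph is
    undirected, edge uv appears under both u and v; the guard u < v
    counts each undirected edge exactly once, so we divide by |E|,
    not 2|E|."""
    total = sum(w for u in adj for v, w in adj[u].items() if u < v)
    count = sum(1 for u in adj for v in adj[u] if u < v)
    return total / count

# triangle with edge weights ab=1, ac=2, bc=3; average is (1+2+3)/3 = 2.0
triangle = {
    "a": {"b": 1.0, "c": 2.0},
    "b": {"a": 1.0, "c": 3.0},
    "c": {"a": 2.0, "b": 3.0},
}
```

Dropping the `u < v` guard and dividing by `2 * count` instead gives the first formula, with the same result.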
https://agentfoundations.org/item?id=1364
by Stuart Armstrong

A small note: it’s not hard to construct spaces that are a bit too big, or a bit too small (raising the possibility that a true $$X$$ lies between them). For instance, if $$I$$ is the unit interval, then we can map $$I$$ onto the countable-dimensions hypercube $$I^\omega$$ ( https://en.wikipedia.org/wiki/Space-filling_curve#The_Hahn.E2.80.93Mazurkiewicz_theorem ). Then if we pick an ordering of the dimensions of the hypercube and an ordering of $$\mathbb{Q}\cap I$$, we can see any element of $$I^\omega$$ - hence any element of $$I$$ - as a function from $$\mathbb{Q}\cap I$$ to $$I$$. Let $$C(I)$$ be the space of continuous functions $$I \to I$$. Then any element of $$C(I)$$ defines a unique function $$\mathbb{Q}\cap I \to I$$ (the converse is not true - most functions $$\mathbb{Q}\cap I \to I$$ do not correspond to continuous functions $$I \to I$$). Pulling $$C(I)$$ back to $$I$$ via $$I^\omega$$ we define the set $$Y \subset I$$. Thus $$Y$$ maps surjectively onto $$C(I)$$. However, though $$C(I)$$ maps into $$C(Y)$$ by restriction (any function from $$I$$ is a function from $$Y$$), this map is not onto (for example, there are more continuous functions from $$I - \{1/2\}$$ than there are from $$I$$, because of the potential discontinuity at $$1/2$$). Now, there are elements of $$I-Y$$ that map (via $$I^\omega$$) to functions in $$C(Y)$$ that are not in $$C(I)$$. So there’s a hope that there may exist an $$X$$ with $$Y \subset X \subset I$$, $$C(I) \subset C(X) \subset C(Y)$$, and $$X$$ mapping onto $$C(X)$$. Basically, as $$X$$ 'gets bigger', its image in $$C(Y)$$ grows, while $$C(X)$$ itself shrinks, and hopefully they'll meet.
http://www.maths.soton.ac.uk/EMIS/proceedings/Paseky95/12.html
J. Rákosník (ed.), Function spaces, differential operators and nonlinear analysis. Proceedings of the conference held in Paseky na Jizerou, September 3-9, 1995. Mathematical Institute, Czech Academy of Sciences, and Prometheus Publishing House, Praha 1996, p. 159 - 182

# Composition operators acting on Sobolev spaces of fractional order---a survey on sufficient and necessary conditions

## Winfried Sickel

Mathematisches Institut, Fakultät für Mathematik und Informatik, Friedrich-Schiller-Universität, Leutragraben 1, 07743 Jena, Germany [email protected]

Abstract: Let $A^s_{p,q}$ denote either a Besov space $B^s_{p,q}(\Bbb R^n)$ or a Lizorkin--Triebel space $F^s_{p,q}(\Bbb R^n)$. Let $G:\Bbb R^n \to \Bbb R^n$ be continuous. Then the composition operator $T_G:f \to G(f)$ makes sense. It is known that one has the implication $$T_G (A^s_{p,q}(\Bbb R^n)) \subset A^s_{p,q}(\Bbb R^n) \qquad \Longrightarrow \qquad G(t)=c \, t \quad \text{for some } c \in \Bbb R^n$$ provided that $1 \le p < \infty, \quad 1 \le q \le \infty \quad \text{and} \quad 1 + \frac{1}{p} < s < \frac{n}{p}$. In view of this negative result it is natural to ask about properties of the function $G$ implied by an embedding like $$T_G (A^s_{p,q}\,(\Bbb R^n)) \subset B^r_{p,q} (\Bbb R^n) \tag 1$$ for some $r$, $0<r<s<n/p$. \proclaim{Theorem} Let $1 \le p < \infty$, $1 \le q \le \infty$ and $0 < r < s < n/p$. Suppose further $G \in C(\Bbb R^n)$. Then {\rm(1)} implies $G(0)=0$, $G \in B^{r,\text{loc}}_{p,q}(\Bbb R^n)$, and $$\int_1^\infty w^{-\gamma p-1} \| G(\cdot) \psi(\frac{\cdot}{w}) | \dot{\Cal B}^r_{p,p}(\Bbb R^n)\|^p dw < \infty$$ for all\quad $\gamma > \dfrac{\frac{n}{p} + \frac{1}{p}(\frac{n}{p}-s)-r (\frac{n}{p}-s+1)}{\frac{n}{p}-s}$. 
Here $\psi$ denotes a smooth cut-off function supported around the origin, $m>r$ some natural number and $$\| f|\dot{\Cal B}^r_{p,p}(\Bbb R^n)\|^p = \int_{-1}^1 |h|^{-rp - 1} \int_{-\infty}^\infty \left| \sum_{j=0}^{m} (-1)^j \binom mj f(y+(m-j)h)\right|^p dy dh.$$ \endproclaim
https://stats.stackexchange.com/questions/454892/chi-squared-test-degrees-of-freedom-and-experiment-design
# Chi-squared test degrees of freedom and experiment design

In the chi-squared test of independence for an $$r\times c$$ contingency table, the degrees of freedom are set as $$(r-1)(c-1)$$, which can be seen as a goodness-of-fit test with $$n - 1 - s$$ degrees of freedom where $$n = rc$$ is the number of categories and $$s = (r-1) + (c-1)$$ is due to fitting the row and column marginal probabilities to the data.

If the experiment is such that the row sums are pre-determined, I think the test becomes a test of homogeneity, which according to Wikipedia has the same degrees of freedom. However, as only the column marginal probabilities need to be fitted from the data, it seems like one should use $$s = c - 1$$. Why is this not the case? Maybe some other intuition than the analogy to the goodness-of-fit test is more adequate.

• The principles and assumptions of the chi-squared test are presented in my post at stats.stackexchange.com/a/17148/919. – whuber Mar 20 '20 at 12:54
• @whuber Thank you for the reference, I posted a tentative answer and would be grateful if you could see if it makes sense. – udscbt Mar 21 '20 at 10:36

As mentioned in the answer linked by whuber, this way of computing degrees of freedom for the goodness-of-fit test is but a heuristic. Whatever the experiment design, the hypothesized independence of the row and column variables enforces the following $$1 + (r-1) + (c-1)$$ constraints:

• $$\sum_{i=1}^r\sum_{j=1}^c O_{i,j} / N = 1$$,
• $$O_{i,c} / N \approx (\sum_{j=1}^c O_{i,j})(\sum_{k=1}^r O_{k,c}) / N^2$$ for $$i < r$$,
• $$O_{r,j} / N \approx (\sum_{k=1}^c O_{r,k})(\sum_{i=1}^r O_{i,j}) / N^2$$ for $$j < c$$.

with $$O_{i,j}$$ the count in cell $$(i,j)$$, $$N$$ the total number of observations and $$\approx$$ becoming an equality asymptotically. This leaves $$(r-1)(c-1)$$ degrees of freedom for the counts in the table and the goodness-of-fit test checks if the cell probabilities match some particular values. 
These values may come from the experiment design or from the data itself, but this choice does not change the degrees of freedom for the test statistic. However, it may change the p-value. For example, say the experiment was designed with 1/2 probability for each row but the empirical probabilities are 48/100 and 52/100. If 1/2 is used for the test, the p-value would be different from the one obtained when the empirical probabilities are used.
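To make the parameter counting concrete, here is a small sketch in plain Python (not from the original thread) that computes the independence statistic with expected counts fitted from both sets of marginals; the (r-1)(c-1) degrees of freedom come out of the same counting argument described above:

```python
# Chi-squared independence statistic for an r x c table of counts.
# Expected counts are built from the fitted row and column marginals,
# which is what costs the (r-1) + (c-1) degrees of freedom.
def chi2_independence(table):
    r, c = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(r)) for j in range(c)]
    stat = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_sums[i] * col_sums[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    # rc - 1 free cell probabilities, minus (r-1)+(c-1) fitted marginals
    dof = (r - 1) * (c - 1)
    return stat, dof

stat, dof = chi2_independence([[10, 20, 30], [20, 20, 20]])
print(dof)  # 2
```

The statistic is then compared against a chi-squared distribution with `dof` degrees of freedom.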
https://planetmath.org/completemeasure
# complete measure

A measure space $(X,\mathscr{S},\mu)$ is said to be complete if every subset of a set of measure $0$ is measurable (and consequently, has measure $0$); i.e. if for all $E\in\mathscr{S}$ such that $\mu(E)=0$ and for all $S\subset E$ we have $\mu(S)=0$. If a measure space is not complete, there exists a completion (http://planetmath.org/CompletionOfAMeasureSpace) of it, which is a complete measure space $(X,\overline{\mathscr{S}},\overline{\mu})$ such that $\mathscr{S}\subset\overline{\mathscr{S}}$ and $\overline{\mu}_{|\mathscr{S}}=\mu$, where $\overline{\mathscr{S}}$ is the smallest $\sigma$-algebra containing both $\mathscr{S}$ and all subsets of elements of zero measure of $\mathscr{S}$.

Title: complete measure. Canonical name: CompleteMeasure. Last modified: 2013-03-22 14:06:56. Owner: Koro (127). Type: Definition. MSC: 28A12. Related: UniversallyMeasurable. Keywords: completion, complete.
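A standard concrete instance of the completion construction, stated here as a sketch of a well-known fact rather than part of the entry:

```latex
% Standard example (sketch): Lebesgue measure as the completion of
% Borel measure on the real line.
Let $\mathscr{B}$ be the Borel $\sigma$-algebra on $\mathbb{R}$ and
$\lambda$ the Borel measure. Then $(\mathbb{R},\mathscr{B},\lambda)$
is \emph{not} complete: the Cantor set $C$ satisfies $\lambda(C)=0$
and has $2^{\mathfrak{c}}$ subsets, while $|\mathscr{B}|=\mathfrak{c}$,
so some $S\subset C$ is not Borel. The completion
$(\mathbb{R},\overline{\mathscr{B}},\overline{\lambda})$ is precisely
the Lebesgue measure space, and $\overline{\mathscr{B}}$ is the
$\sigma$-algebra of Lebesgue measurable sets.
```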
https://en.wikipedia.org/wiki/Truncated_distribution
# Truncated distribution

Infobox figure: probability density function for the truncated normal distribution for different sets of parameters. In all cases, a = −10 and b = 10. For the black: μ = −8, σ = 2; blue: μ = 0, σ = 2; red: μ = 9, σ = 10; orange: μ = 0, σ = 10.

Support: ${\displaystyle x\in (a,b]}$
PDF: ${\displaystyle {\frac {g(x)}{F(b)-F(a)}}}$
CDF: ${\displaystyle {\frac {\int _{a}^{x}g(t)dt}{F(b)-F(a)}}={\frac {F(x)-F(a)}{F(b)-F(a)}}}$
Mean: ${\displaystyle {\frac {\int _{a}^{b}xg(x)dx}{F(b)-F(a)}}}$
Median: ${\displaystyle F^{-1}\left({\frac {F(a)+F(b)}{2}}\right)}$

In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range. For example, if the dates of birth of children in a school are examined, these would typically be subject to truncation relative to those of all children in the area given that the school accepts only children in a given age range on a specific date. There would be no information about how many children in the locality had dates of birth before or after the school's cutoff dates if only a direct approach to the school were used to obtain information. Where sampling is such as to retain knowledge of items that fall outside the required range, without recording the actual values, this is known as censoring, as opposed to the truncation here.[1]

## Definition

The following discussion is in terms of a random variable having a continuous distribution although the same ideas apply to discrete distributions. Similarly, the discussion assumes that truncation is to a semi-open interval y ∈ (a,b] but other possibilities can be handled straightforwardly. 
Suppose we have a random variable, ${\displaystyle X}$, that is distributed according to some probability density function ${\displaystyle f(x)}$, with cumulative distribution function ${\displaystyle F(x)}$, both of which have infinite support. Suppose we wish to know the probability density of the random variable after restricting the support to be between two constants, so that the support is ${\displaystyle y=(a,b]}$. That is to say, suppose we wish to know how ${\displaystyle X}$ is distributed given ${\displaystyle a<X\leq b}$.

${\displaystyle f(x|a<X\leq b)={\frac {g(x)}{F(b)-F(a)}}=Tr(x)}$

where ${\displaystyle g(x)=f(x)}$ for all ${\displaystyle a<x\leq b}$ and ${\displaystyle g(x)=0}$ everywhere else. Notice that ${\displaystyle Tr(x)}$ has the same support as ${\displaystyle g(x)}$. Notice that in fact ${\displaystyle f(x|a<X\leq b)}$ is a density:

${\displaystyle \int _{a}^{b}f(x|a<X\leq b)dx={\frac {1}{F(b)-F(a)}}\int _{a}^{b}g(x)dx=1.}$

Truncated distributions need not have parts removed from the top and bottom. A truncated distribution where just the bottom of the distribution has been removed is as follows: ${\displaystyle f(x|X>y)={\frac {g(x)}{1-F(y)}}}$ where ${\displaystyle g(x)=f(x)}$ for all ${\displaystyle y<x}$ and ${\displaystyle g(x)=0}$ everywhere else, and ${\displaystyle F(x)}$ is the cumulative distribution function. A truncated distribution where the top of the distribution has been removed is as follows: ${\displaystyle f(x|X\leq y)={\frac {g(x)}{F(y)}}}$ where ${\displaystyle g(x)=f(x)}$ for all ${\displaystyle x\leq y}$ and ${\displaystyle g(x)=0}$ everywhere else, and ${\displaystyle F(x)}$ is the cumulative distribution function.

## Expectation of truncated random variable

Suppose we wish to find the expected value of a random variable distributed according to the density ${\displaystyle f(x)}$ and a cumulative distribution of ${\displaystyle F(x)}$ given that the random variable, ${\displaystyle X}$, is greater than some known value ${\displaystyle y}$. 
The expectation of a truncated random variable is thus: ${\displaystyle E(X|X>y)={\frac {\int _{y}^{\infty }xg(x)dx}{1-F(y)}}}$ where again ${\displaystyle g(x)}$ is ${\displaystyle g(x)=f(x)}$ for all ${\displaystyle x>y}$ and ${\displaystyle g(x)=0}$ everywhere else.

Letting ${\displaystyle a}$ and ${\displaystyle b}$ be the lower and upper limits respectively of support for the original density function ${\displaystyle f}$ (which we assume is continuous), properties of ${\displaystyle E(u(X)|X>y)}$, where ${\displaystyle u}$ is some continuous function with a continuous derivative, include:

(i) ${\displaystyle \lim _{y\to a}E(u(X)|X>y)=E(u(X))}$

(ii) ${\displaystyle \lim _{y\to b}E(u(X)|X>y)=u(b)}$

(iii) ${\displaystyle {\frac {\partial }{\partial y}}[E(u(X)|X>y)]={\frac {f(y)}{1-F(y)}}[E(u(X)|X>y)-u(y)]}$ and ${\displaystyle {\frac {\partial }{\partial y}}[E(u(X)|X\leq y)]={\frac {f(y)}{F(y)}}[u(y)-E(u(X)|X\leq y)]}$

(iv) ${\displaystyle \lim _{y\to a}{\frac {\partial }{\partial y}}[E(u(X)|X>y)]=f(a)[E(u(X))-u(a)]}$

(v) ${\displaystyle \lim _{y\to b}{\frac {\partial }{\partial y}}[E(u(X)|X>y)]={\frac {1}{2}}u'(b)}$

Provided that the limits exist, that is: ${\displaystyle \lim _{y\to c}u'(y)=u'(c)}$, ${\displaystyle \lim _{y\to c}u(y)=u(c)}$ and ${\displaystyle \lim _{y\to c}f(y)=f(c)}$ where ${\displaystyle c}$ represents either ${\displaystyle a}$ or ${\displaystyle b}$.

## Examples

The truncated normal distribution is an important example.[2] The Tobit model employs truncated distributions. Other examples include the truncated binomial at x=0 and the truncated Poisson at x=0.

## Random truncation

Suppose we have the following set up: a truncation value, ${\displaystyle t}$, is selected at random from a density, ${\displaystyle g(t)}$, but this value is not observed. Then a value, ${\displaystyle x}$, is selected at random from the truncated distribution, ${\displaystyle f(x|t)=Tr(x)}$. Suppose we observe ${\displaystyle x}$ and wish to update our belief about the density of ${\displaystyle t}$ given the observation. 
First, by definition:

${\displaystyle f(x)=\int _{x}^{\infty }f(x|t)g(t)dt}$, and ${\displaystyle F(a)=\int _{-\infty }^{a}\left[\int _{x}^{\infty }f(x|t)g(t)dt\right]dx.}$

Notice that ${\displaystyle t}$ must be greater than ${\displaystyle x}$, hence when we integrate over ${\displaystyle t}$, we set a lower bound of ${\displaystyle x}$. The functions ${\displaystyle f(x)}$ and ${\displaystyle F(x)}$ are the unconditional density and unconditional cumulative distribution function, respectively.

By Bayes' rule, ${\displaystyle g(t|x)={\frac {f(x|t)g(t)}{f(x)}},}$ which expands to ${\displaystyle g(t|x)={\frac {f(x|t)g(t)}{\int _{x}^{\infty }f(x|t)g(t)dt}}.}$

### Two uniform distributions (example)

Suppose we know that t is uniformly distributed on [0,T] and x|t is distributed uniformly on [0,t]. Let g(t) and f(x|t) be the densities that describe t and x respectively. Suppose we observe a value of x and wish to know the distribution of t given that value of x.

${\displaystyle g(t|x)={\frac {f(x|t)g(t)}{f(x)}}={\frac {1}{t(\ln(T)-\ln(x))}}\quad {\text{for all }}t>x.}$
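As a quick numerical sanity check of the expectation formula for a bottom-truncated variable (a sketch added here, not part of the article): for an Exponential(λ) density, memorylessness gives the closed form E(X | X > y) = y + 1/λ, which the integral formula reproduces.

```python
import math

lam, y = 2.0, 0.7
f = lambda x: lam * math.exp(-lam * x)   # Exponential(lam) density
F = lambda x: 1.0 - math.exp(-lam * x)   # its CDF

# E(X | X > y) = integral_y^inf x f(x) dx / (1 - F(y)),
# approximated with a midpoint rule on [y, y + 40/lam]
# (the tail beyond that is negligible).
a, b, steps = y, y + 40.0 / lam, 200_000
h = (b - a) / steps
integral = sum((a + (k + 0.5) * h) * f(a + (k + 0.5) * h)
               for k in range(steps)) * h

expectation = integral / (1.0 - F(y))
print(expectation)  # ≈ y + 1/lam = 1.2
```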
https://math.stackexchange.com/questions/2272139/is-the-compact-complement-topology-locally-arc-connected
# Is the compact complement topology locally arc connected?

Let $\tau$ be the standard topology on $\mathbb{R}$ and let $\tau'$ be the compact complement topology on $\mathbb{R}$:

$$\tau' = \{\mathbb{R} \setminus K \mid K \text{ is } \tau\text{-compact}\} \cup \{\varnothing, \mathbb{R}\}$$

π-Base claims that $(\mathbb{R},\tau')$ is locally arc-connected. I believe the proof that it is arc-connected: since $\tau' \subseteq \tau$, the identity map $id:(\mathbb{R}, \tau) \rightarrow (\mathbb{R},\tau')$ is a continuous injection and thus gives an arc between any two distinct points. They give the same proof for local arc-connectedness, but it is insufficient since we must show that every nonempty open set contains an arc-connected open neighborhood of each of its points. In particular, I'm skeptical that any nonempty, proper, $\tau'$-open set can even be path-connected.

• Isn't every $\tau'$-open subset homeomorphic to $(X,\tau')$? – Henno Brandsma May 8 '17 at 21:06
• @HennoBrandsma Do you have a proof of this? – ngenisis May 8 '17 at 21:57
• In fact, no proper subset that is unbounded on both sides is homeomorphic to $(\mathbb{R},\tau')$. The space $(\mathbb{R},\tau')$ has the property that every Hausdorff subset is contained in a connected Hausdorff subset, but no proper subset that is unbounded on both sides has the same property. (Note that a subset is Hausdorff iff it is bounded.) – Eric Wofsey May 8 '17 at 22:07
• @EricWofsey I see that a subset is Hausdorff iff it is bounded and thus that every Hausdorff subset is contained in the connected Hausdorff subset $[-N,N]$ for some $N>0$. Why do the sets unbounded on both sides fail to satisfy this property? – ngenisis May 8 '17 at 22:40
• Actually, by ad hoc arguments, you can rule out all the other cases and show that no proper subset of $(\mathbb{R},\tau')$ is homeomorphic to it. If no such $x<a<y$ exists and $X$ is compact but not Hausdorff, then $X$ must be a closed unbounded interval; say $X=[x,\infty)$. 
But then you can remove one point from $X$ to obtain $(x,\infty)$, which has the property that every Hausdorff subset is contained in a connected Hausdorff subset. Since you can't remove a point from $\mathbb{R}$ to get a subset with that property, $\mathbb{R}\not\cong[x,\infty)$. – Eric Wofsey May 8 '17 at 22:48

You are correct that the given argument does not work. However, $(\mathbb{R},\tau')$ is in fact locally arc-connected. It suffices to show that if $a<b<c<d$ then the open set $U=(-\infty,a)\cup(b,c)\cup(d,\infty)$ is arc-connected, since such open sets form a basis. I will show that if $x\in (d,\infty)$ and $y\in (b,c)$ then there is a continuous injection $f:[0,1]\to U$ with $f(0)=x$ and $f(1)=y$; the other cases are similar.

To define $f$, first let $g:[0,1)\to[x,\infty)$ be an order-isomorphism. Then define $f(t)=g(t)$ for $t\in[0,1)$ and $f(1)=y$. Clearly $f$ is injective, and it is continuous at any $t\in[0,1)$ since $g$ is continuous with respect to $\tau$ and hence also with respect to $\tau'$. So we just need to check continuity at $1$. If $V\subseteq U$ is any $\tau'$-open set containing $y$, then $V$ must contain $(z,\infty)$ for some $z\in[x,\infty)$. Thus $f^{-1}(V)$ contains the entire interval $(g^{-1}(z),1]$, and in particular contains a neighborhood of $1$. Thus $f$ is continuous at $1$.
http://science.sciencemag.org/content/341/6141/40
Perspective: Astronomy

# Radio Bursts, Origin Unknown

Science 05 Jul 2013: Vol. 341, Issue 6141, pp. 40-41 DOI: 10.1126/science.1240618

## Summary

Analyzing the transient electromagnetic signals pervading the cosmos has led to the identification of a plethora of exotic astrophysical objects. On page 53 of this issue, Thornton et al. (1) report on the discovery of several short radio bursts, only a few milliseconds in duration, from four widely spaced directions during a survey of the sky using the Parkes radio telescope in Australia. The bursts are not unlike individual pulses seen from pulsars, neutron stars whose spins regularly sweep a lighthouse-like beam into Earth's direction. Because of their weak emission, the majority of known pulsars reside within the Milky Way or in one of its satellite dwarf galaxies, the Large and Small Magellanic Clouds. By contrast, the new bursts appear to originate from large distances, outside the Milky Way. Truly remarkable is that a burst rate of about 10^4 bursts per day over the entire sky has been deduced. It is still early days for identifying the astrophysical origins of such common but (so far) rarely detected events.
https://diophantus.org/arxiv/0704.0779
#### High Energy Variability Of Synchrotron-Self Compton Emitting Sources: Why One Zone Models Do Not Work And How We Can Fix It

05 Apr 2007, astro-ph, arxiv.org/abs/0704.0779

Abstract. With the anticipated launch of GLAST, the existing X-ray telescopes, and the enhanced capabilities of the new generation of TeV telescopes, developing tools for modeling the variability of high energy sources such as blazars is becoming a high priority. We point out the serious, innate problems one zone synchrotron-self Compton models have in simulating high energy variability. We then present the first steps toward a multi zone model where non-local, time delayed Synchrotron-self Compton electron energy losses are taken into account. By introducing only one additional parameter, the length of the system, our code can simulate variability properly at Compton dominated stages, a situation typical of flaring systems. As a first application, we were able to reproduce variability similar to that observed in the case of the puzzling 'orphan' TeV flares that are not accompanied by a corresponding X-ray flare.
https://cms.math.ca/10.4153/CMB-1998-036-7
Canadian Mathematical Society (www.cms.math.ca), Canadian Mathematical Bulletin (CMB), abstract

# Dihedral groups of automorphisms of compact Riemann surfaces

In this note we determine which dihedral subgroups of $\mathrm{GL}_g(\mathbb{C})$ can be realized by group actions on Riemann surfaces of genus $g>1$.
https://socratic.org/questions/how-do-you-simplify-and-state-the-excluded-values-for-g-6g-55-g
# How do you simplify and state the excluded values for (g²-6g-55)/g?

Jun 13, 2015

$g = 0$ is an excluded value. Factoring $\frac{g^2 - 6g - 55}{g}$ might be considered "simplifying".

#### Explanation:

$g = 0$ is an excluded value, since division by zero is not permitted.

$\frac{g^2 - 6g - 55}{g}$ can be factored as

$\frac{1}{g} \cdot \left(g + 5\right) \left(g - 11\right)$
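As a quick sanity check, the factorization and the excluded value can be verified with SymPy. This snippet is an illustration added here, not part of the original answer:

```python
# Verify the factoring of (g^2 - 6g - 55)/g and its excluded value with SymPy.
import sympy as sp

g = sp.symbols('g')
expr = (g**2 - 6*g - 55) / g

# The numerator factors as (g + 5)(g - 11).
assert sp.factor(g**2 - 6*g - 55) == (g + 5)*(g - 11)
print(sp.factor(expr))

# g = 0 is excluded: substituting it yields division by zero
# (SymPy represents this as complex infinity, zoo).
assert expr.subs(g, 0) == sp.zoo
```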
http://tex.stackexchange.com/questions/39831/uppercase-smallcaps-with-latex-not-xelatex
Uppercase Smallcaps with LaTeX (not XeLaTeX)?

In XeLaTeX there is an UppercaseSmallCaps font option (as mentioned in this question, for example) that will cause upper-case letters to be rendered in small caps while lower-case letters remain as they are.

Question: How can UppercaseSmallCaps be implemented in plain LaTeX (or pdfLaTeX) so that text like \upsc{ABCdef} will be rendered as \textsc{abc}def?

- Hmmm. Having a really hard time deciding between egreg and Joseph Wright's solutions! – mforbes Jan 4 '12 at 5:53

Maybe a bit over the top, but you could use l3regex to achieve this:

\documentclass{article}
\usepackage{l3regex,xparse}
\ExplSyntaxOn
\cs_generate_variant:Nn \tl_rescan:nn { nV }
\NewDocumentCommand \upsc { m }
  {
    \tl_set:Nn \l_tmpa_tl {#1}
    \regex_replace_all:nnN
      { ([A-Z]+) }
      { \c{textsc} \cB\{ \c{lowercase} \cB\{ \1 \cE\} \cE\} }
      \l_tmpa_tl
    \tl_use:N \l_tmpa_tl
  }
\ExplSyntaxOff
\begin{document}
\upsc{ABCabcDEF}
\end{document}

This uses the latest experimental version of l3regex, which is on CTAN and which has an 'extended regex' system to allow the inclusion of category-code information in the search and replace text. You could set up a search for upper-case letters in the same way using alternative methods (such as xstring).

Explanation of the code

The basic idea here is to store the text, do a replacement, then print it again. I've chosen to replace each upper-case <Letter> by

\textsc{\lowercase{<Letter>}}

The search part is a simple regex ([A-Z]+), which will capture the letter as \1 (in the replace part). There, \c{textsc} will create the control sequence \textsc in the updated text and \c{lowercase} creates \lowercase. \1 is the captured letter from the search part, while \cB\{ and \cE\} possibly need a bit of explanation. In the replacement text, \cB\{ creates the character { (which needs to be escaped) with category code 'Begin group'. Similarly, \cE\} creates the character } with category code 'End group'.
The final stage is to insert the result into the input stream. If you want to see what is happening, try adding \tl_show:N \l_tmpa_tl after the \tl_use:N line.

- You can't use fontspec with pdfTeX, so UppercaseSmallCaps is not directly relevant. – Joseph Wright♦ Jan 2 '12 at 14:25
- Support for accents would be more challenging (they'll work in the lower-case part but not the upper-case part). – Joseph Wright♦ Jan 2 '12 at 14:38
- Nice demonstration of LaTeX3! Better than egreg's in that it works with spaces and paragraphs (with a small modification), but an order of magnitude slower, and disables kerning... – mforbes Jan 4 '12 at 5:48

\documentclass[convert,border=1]{standalone}
\makeatletter
\newif\ifsc@active
\newif\ifnf@active
\def\upsc#1{{\sc@activefalse\nf@activetrue\@upsc#1\@nil}}
\def\@upsc#1{\ifx#1\@nil\else\@@upsc{#1}\expandafter\@upsc\fi}
\def\@@upsc#1{%
  \ifnum\uccode`#1=`#1\relax
    \ifsc@active\else\sc@activetrue\nf@activefalse\scshape\fi
    \expandafter\@firstoftwo
  \else
    \ifsc@active\sc@activefalse\fi
    \ifnf@active\else\nf@activetrue\normalfont\fi
    \expandafter\@secondoftwo
  \fi
  {\lowercase{#1}}%
  {#1}}
\makeatother
\begin{document}
\upsc{xABCdovff}
\end{document}

Doesn't work with accented letters, of course.

- Isn't the point of the question that we're not allowed to use XeTeX? – Joseph Wright Jan 2 '12 at 14:37
- @JosephWright Now it works with pdflatex (and does ligatures and kerning when possible). – egreg Jan 2 '12 at 15:49
- Nice fast solution! Better than Joseph's solution in that it is an order of magnitude faster and does ligatures and kerning, but it does not allow for spaces, paragraphs etc. – mforbes Jan 4 '12 at 5:53
- @mforbes No, of course. But I believed it was for acronyms or abbreviations. – egreg Jan 4 '12 at 9:41
- \def\upsc#1{{...}} should have an enclosing group so that the smallcaps do not leak. Will fix later when I test more thoroughly. – mforbes Jan 17 '12 at 6:07
https://www.physicsforums.com/threads/qm-oscillator-question.208151/
QM Oscillator Question

1. Jan 10, 2008

div curl F= 0

1. The problem statement, all variables and given/known data

"Write down the operator $$\hat{a}^2$$ in the basis of the energy states $$|n>$$. Determine the eigenvalues and eigenvectors of the operator $$\hat{a}^2$$ working in the same basis. You may use the relation: $$\sum_{k = 0}^{\infty} \frac{|x|^{2k}}{(2k)!} = \cosh(|x|)$$"

2. Relevant equations

3. The attempt at a solution

For the first part, I've got the abstract version of the operator to be: $$\hat{a}^2 = \sum_{n=0}^{\infty} \sqrt{n(n-1)} |n-2><n|$$ but the second part is giving me some trouble. I'm not too sure how to set about it; I've tried a few different approaches but nothing ends up using the above relation. I've tried a coherent state, $$\hat{a} |n> = \lambda |n>$$, and I've tried a ket expanded in the basis: $$|\psi> = \sum_{n=0}^{\infty} C_n |n>$$. I'd be grateful if somebody could show me the way with this question; I've just hit a brick wall with it.

2. Jan 14, 2008

µ³

Well, $\hat{a}^2$ commutes with $\hat{a}$ so any eigenvector of the latter is an eigenvector of the former. So to compute the eigenvalues, just operate $\hat{a}^2$ on a coherent state.

3. Jan 14, 2008

StatusX

To continue along the lines of the previous post, assume |A> is an eigenvector of a^2 with eigenvalue A^2. Does this mean it is an eigenvector of a with eigenvalue A? Well, first of all, we see it could just as well have eigenvalue -A. And because of this, we could in general expect to find:

|A> = |B> + |C>

where a|B> = A|B> and a|C> = -A|C>, giving:

a|A> = A|B> - A|C>

so that |A> is not an eigenvector of a unless either |B> or |C> is zero, and yet:

a^2 |A> = A^2 |B> + A^2 |C> = A^2 |A>

In other words, while all the eigenvectors of a will still be eigenvectors of a^2, there will in general be some new ones as well, given by linear combinations of a-eigenvectors with eigenvalues that are negatives of each other.
With a little work you can show these are the only new eigenvectors.
https://indico.cern.ch/event/773605/contributions/3471188/
The 21st International Workshop on Neutrinos from Accelerators (NUFACT2019)
Aug 25-31, 2019, The Grand Hotel Daegu

ENUBET
Aug 29, 2019, 5:06 PM, 22m, Dynasty Hall (2F), The Grand Hotel Daegu

Speaker: Francesco Terranova (Universita & INFN, Milano-Bicocca (IT))

Description: The knowledge of the initial flux, energy and flavor of current neutrino beams is currently the main limitation for a precise measurement of neutrino cross sections. The ENUBET ERC project (2016-2021) is studying a facility based on a narrow-band neutrino beam capable of constraining the normalization of the neutrino fluxes through the monitoring of the associated charged leptons in an instrumented decay tunnel. Since March 2019, ENUBET has also been a CERN Neutrino Platform project (NP06/ENUBET) developed in collaboration with CERN A&T and CERN-EN. In ENUBET, the identification of large-angle positrons from Ke3 decays at the single-particle level can potentially reduce the νe flux uncertainty to the level of 1%. This setup would allow for an unprecedented measurement of the νe cross section at the GeV scale. Such an experimental input would be highly beneficial for reducing the budget of systematic uncertainties in the next long-baseline oscillation projects (e.g. Hyper-K and DUNE). Furthermore, in narrow-band beams, the transverse position of the neutrino interaction at the detector can be exploited to determine a priori, with significant precision, the neutrino energy spectrum without relying on final-state reconstruction. This contribution will present the final design of the ENUBET demonstrator, which was selected in April 2019 on the basis of the results of the 2016-2018 test beams. It will also discuss advances in the design and simulation of the hadronic beam line. Special emphasis will be given to a static focusing system of secondary mesons that, unlike the other horn-based solution studied, can be coupled to a slow-extraction proton scheme.
The consequent reduction of particle rates and pile-up effects makes viable the determination of the νμ flux through direct monitoring of muons after the hadron dump, and paves the way to a time-tagged neutrino beam. Time coincidences between the lepton at the source and the neutrino at the detector would enable unprecedented purity and the possibility of reconstructing the neutrino kinematics at the source on an event-by-event basis. We will also present the performance of positron-tagger prototypes tested at CERN beamlines, a full simulation of the positron reconstruction chain, and the expected physics reach of ENUBET.

Working Group: WG2 : Neutrino Scattering Physics

Primary author: Francesco Terranova (Universita & INFN, Milano-Bicocca (IT))
https://dsp.stackexchange.com/questions/43746/how-does-complex-data-of-an-fft-and-iq-modulation-differ
How does complex data of an FFT and IQ modulation differ?

If I have an arbitrary time-domain signal (e.g. a simple cosine) I can sample it with an ADC and get a time-discrete representation. I can perform an FFT on the samples and get a complex output. The phase of the signal is: $$\phi=\arctan\frac{\Im}{\Re}$$ Using an IQ down-modulation of the same signal gives complex data, too, with the phase being: $$\phi=\arctan\frac{Q}{I}$$ Is there a difference between these two, and when is one preferred over the other?

EDIT Thanks for the input! @Dilip Sarwate: Here is what I did: set $\phi=60°$ (see your example). The results are exactly what I expected. I can see the frequency of 1 Hz in the spectrum (plot 3) and the initial phase of $60°$ in the phase spectrum (plot 4). I get the same result when I use a non-complex function, e.g. $x(t)=\cos(2\pi t+\phi)$. I read everywhere that phase information is lost if I don't use IQ down-modulation, but that does not seem to be the case, as I can see the phase information in the plots. Or am I missing something here?

Since you are interested in cosines, take, for example, the signal $x(t) = \exp(j(2\pi t + \theta))$, which is a complex sinusoid of period $1$, and sample it $16$ times per second to get $16$ samples $x[n]$, $n = 0, 1, \dots, 15$, where $x[n] = x\left(\frac{n}{16}\right) = \exp\left(j\left(\pi \frac{n}{8} + \theta\right)\right), n=0,1,\ldots, 15.$ Then, take the 16-point FFT of $\mathbf{x} =\big(x[0], x[1], \ldots, x[15]\big)$, which will give you $\mathbf{X} =\big(X[0], X[1], \ldots, X[15]\big)$. Report back to us what $\mathbf{x}$ and $\mathbf X$ are and whether you notice any differences between them.
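Carrying out the suggested exercise numerically (this snippet is an added illustration, not part of the original answer) shows that the FFT of the sampled complex sinusoid concentrates all its energy in bin 1, and the angle of that bin recovers the initial phase directly:

```python
# Sample x(t) = exp(j(2*pi*t + theta)) 16 times over one period and FFT it.
import numpy as np

theta = np.deg2rad(60)                          # initial phase, 60 degrees
n = np.arange(16)
x = np.exp(1j * (2 * np.pi * n / 16 + theta))   # x[n] = x(n/16)

X = np.fft.fft(x)

# All the energy sits in bin k = 1 (one cycle per record): X[1] = 16*e^{j*theta},
# every other bin is numerically zero, and angle(X[1]) recovers theta.
k_peak = int(np.argmax(np.abs(X)))
assert k_peak == 1
assert np.isclose(np.abs(X[1]), 16)
assert np.isclose(np.angle(X[1]), theta)
print(np.rad2deg(np.angle(X[1])))
```

This is consistent with the questioner's observation that the phase of the tone is visible in the FFT output rather than lost.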
http://www.physicsforums.com/showthread.php?p=3777061
# Renormalization Group for dummies

by waterfall
Tags: dummies, renormalization

P: 381

Quote by atyy: One way to think about the changing constants is to realise you are just writing an "effective theory". In the Box-Jenkins philosophy: all models are wrong, but some are useful. Say you have a curve. Every point on the curve can be approximated by a straight line. Depending on which part of the curve you are approximating, the slope of the line will change. The straight line is your "effective theory" and the changing slope is like your changing coupling constant. This example is not a detailed comparison, but it's the general philosophy of the renormalization group. As for detailed mathematical correspondence, apart from quantum field theory, the renormalization group has been applied in classical statistical mechanics and classical mechanics.

I read in wiki that "Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance." So you are saying that Renormalization Group concepts and the regulator idea are also used in biology, economics, and finance, and not just in QFT? So in calculations in biology, the coupling-constant equivalent can become infinite in the second term, but if one makes a cut-off at the first term, is it finite?

P: 8,800

Quote by waterfall: I read in wiki that "Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance." So you are saying that Renormalization Group concepts and the regulator idea are also used in biology, economics, and finance, and not just in QFT? So in calculations in biology, the coupling-constant equivalent can become infinite in the second term, but if one makes a cut-off at the first term, is it finite?

Renormalization has nothing to do with infinities.
QED is renormalizable and it has a cut-off - it is not a true theory valid at all energies; it is only an effective theory, like gravity, valid below the Planck scale. Once you have a cut-off, there are no infinities. Sometimes you are lucky and you get a theory where you can remove the cut-off, like QCD. But in QED, as far as we know, the cut-off probably cannot be removed.

P: 381

Quote by atyy: Renormalization has nothing to do with infinities. QED is renormalizable and it has a cut-off - it is not a true theory valid at all energies; it is only an effective theory, like gravity, valid below the Planck scale. Once you have a cut-off, there are no infinities. Sometimes you are lucky and you get a theory where you can remove the cut-off, like QCD. But in QED, as far as we know, the cut-off probably cannot be removed.

I can't believe why no one is answering my simple question. I know that in power series one expands the terms. I'm asking if the techniques of the Renormalization Group can be applied to finance or biology problems, not just in QFT. If anyone knows the answer to this, please let me know. Thanks.

P: 1,940

Quote by waterfall: I'm asking if the techniques of the Renormalization Group can be applied to finance or biology problems

Then perhaps you should post the question in the Biology forum.

P: 8,800

Quote by waterfall: I can't believe why no one is answering my simple question. I know that in power series one expands the terms. I'm asking if the techniques of the Renormalization Group can be applied to finance or biology problems, not just in QFT. If anyone knows the answer to this, please let me know. Thanks.

In very, very broad terms - yes - everyone uses effective theories. Less facetiously, the philosophy of the central limit theorem is like the renormalization group. Take the heights of people. That's due to hundreds of genes, whose interactions we hardly understand.
Yet when you plot a distribution of heights of populations, very often you get a Gaussian distribution characterized fully by two parameters: mean and variance. So if you are looking coarsely on a population level, you don't need the theory of all the genes with hundreds of parameters - you just need the Gaussian distribution with two parameters. Here the Gaussian distribution is derived using the renormalization group: http://www.math.princeton.edu/facult...gorovLec07.pdf

P: 381

Quote by atyy: In very, very broad terms - yes - everyone uses effective theories. Less facetiously, the philosophy of the central limit theorem is like the renormalization group. Take the heights of people. That's due to hundreds of genes, whose interactions we hardly understand. Yet when you plot a distribution of heights of populations, very often you get a Gaussian distribution characterized fully by two parameters: mean and variance. So if you are looking coarsely on a population level, you don't need the theory of all the genes with hundreds of parameters - you just need the Gaussian distribution with two parameters. Here the Gaussian distribution is derived using the renormalization group: http://www.math.princeton.edu/facult...gorovLec07.pdf

Thanks. So it applies elsewhere too. Going back to QED and the coupling constant: when higher-order terms in the perturbation series are used, the length scale becomes smaller. I wonder if this is the reason why the coupling constant becomes larger when more terms are used. Bill Hobba kept saying this but he didn't explain the physical reason why.
Maybe it's because as more virtual particles or more terms are used, the length scale gets smaller and smaller, and hence the coupling constant, secretly being dependent on how many terms are used, is equal to the sum of the Feynman vertices in the virtual-particle interactions in the 1st, 2nd, 3rd terms (I'm aware virtual particles are just the terms in the perturbation expansion so I'll use them interchangeably)? How do you understand this physically?

P: 381

Thanks for the updated descriptions. You'd make a great PF science advisor someday. Say, in the magnetic moment of the electron, with measured value of 1.00115965219 and calculated value of 1.0011596522 at the fourth term in the power series: did they use the Renormalization Group there? Maybe the reason the second through fourth terms didn't produce an infinite coupling constant is because the calculation replaces it with 1/137 as you described. What then is the original coupling constant when no Renormalization Group procedure was used?

PF Gold P: 2,912

Quote by waterfall: Thanks for the updated descriptions. You'd make a great PF science advisor someday. Say, in the magnetic moment of the electron, with measured value of 1.00115965219 and calculated value of 1.0011596522 at the fourth term in the power series: did they use the Renormalization Group there? Maybe the reason the second through fourth terms didn't produce an infinite coupling constant is because the calculation replaces it with 1/137 as you described. What then is the original coupling constant when no Renormalization Group procedure was used?

They used re-normalisation to calculate it - the re-normalisation group is simply a description of how quantities that are regulator- (i.e. cutoff-) dependent, such as the coupling constant, behave as the value of the regulator changes.

Thanks
Bill

P: 381

Note the interesting comments by Science Advisor tom.stoer that the Renormalization Group is just a DIRTY TRICK.
Renormalization in order to remove infinities is just a dirty trick. The renormalization group itself says that you can look at a theory at different energy scales E and E' (or at different length scales, which is the same) and renormalize masses and coupling constants g(E) and g'(E') such that the predictions of both theories are the same. Essentially they are THE SAME theory. This can e.g. be interpreted as "integrating out" degrees of freedom and hiding their contribution in the renormalization of parameters. Have a look at http://en.wikipedia.org/wiki/Renorma...lization_group

Renormalization in perturbative QFT is constantly taught to be "removing divergences". Unfortunately this is only a dirty trick! Renormalization (non-perturbative renormalization) is something totally different - and it may even appear in systems w/o any divergences. Have a look at http://en.wikipedia.org/wiki/Renormalization_group

Suppose you have a theory with certain coupling constants ga, gb, ...; suppose you study this theory at some energy scale E. Then you will observe that you can re-express the theory in terms of different coupling constants g*a, g*b, ... at a different energy scale E* (and different interactions, i.e. changing E may turn on new couplings). It is interesting that these two 'theories' defined at different energy scales E and E* are essentially the same theory! The way to find the description at scale E* given the description at E is via studying trajectories ga(E), gb(E) in the space of all coupling constants. Different points on the same trajectory correspond to different scales E, E*, ... of the same theory; different theories are defined by points not connected by a trajectory. These trajectories define something called the renormalization group flow. The renormalization group of a certain theory is defined via a set of coupled differential equations defining this flow of coupling constants.
Perturbative QFT is defined by studying the flow near the point ga = gb = ... = 0; in some cases (QCD) it can be shown that this is reasonable in certain regimes; in other cases (QG) there are strong indications that G=0 may never be a valid starting point, neither in the UV (not asymptotically free), nor in the IR (the coupling constant of Newtonian gravity, G, is not zero).

PF Gold P: 2,912

Quote by waterfall: Bill Hobba kept saying this but he didn't explain the physical reason why

The physical reason the coupling constant gets larger is the shielding effect of the virtual particles around an electron. Landau showed that because of that, for any finite charge, at any distance the charge would measure zero. This created the famous zero charge problem. The only way around it is for the charge to actually be infinite and to grow to an infinite value the closer and closer you get to it - this is called the Landau pole. That is the reason for the infinities - the closer you get, the higher the energy you must use. If I remember correctly this is not just theory: at 90 GeV it is measured to be 1/127 (OK, to avoid egg on my face, I checked it). As one other person posted, the fact it blows up to infinity means the theory is sick and incorrect - but we already know that, because it gets replaced by the electroweak theory, which will probably also get replaced by something else someday, and that something else will hopefully be free of these problems.

Thanks
Bill

PF Gold P: 2,912

Quote by waterfall: note the interesting comments by Science Advisor tom.stoer that the Renormalization Group is just a DIRTY TRICK.

I do not agree with that. The effective field theory approach, as he sort of says, avoids this blowing-up-to-infinity business: http://en.wikipedia.org/wiki/Effective_field_theory The only issue is that theories like QED etc. have a domain of applicability beyond which they do things like blow up to infinity - stick to that domain and everything is fine.
It's hardly flabbergasting news, and not a DIRTY TRICK, that theories may not be valid at all energies.

Thanks
Bill

P: 381

Quote by bhobba: The physical reason the coupling constant gets larger is the shielding effect of the virtual particles around an electron. Landau showed that because of that, for any finite charge, at any distance the charge would measure zero. This created the famous zero charge problem. The only way around it is for the charge to actually be infinite and to grow to an infinite value the closer and closer you get to it - this is called the Landau pole. That is the reason for the infinities - the closer you get, the higher the energy you must use. If I remember correctly this is not just theory: at 90 GeV it is measured to be 1/127. As one other person posted, the fact it blows up to infinity means the theory is sick and incorrect - but we already know that, because it gets replaced by the electroweak theory, which will probably also get replaced by something else someday, and that something else will hopefully be free of these problems. Thanks Bill

Thanks for the facts. Last question for this thread before I sit back and read all the papers as well as study deeper aspects of calculus with all this nice background in mind. My last question is this (I saw a similar question asked in the archives but unanswered). In the Dirac Equation, the magnetic moment of the electron is calculated as 1. At the 4th term in the power series, it's equal to 1.0011596522. The interacting fields are the electron's own magnetic field and the electron. What about the interactions of, say, two electrons: what would be the Dirac Equation counterpart of 1.0 in the magnetic-moment calculation? Do you calculate the Dirac equation of each electron by adding them, or calculate both of them combined? And if the result is for example 3.0.
After the fourth term in the power series, would the result only be 3.0111 or would it be 5.0? (I don't think it would just be a small 3.0111, would it, because the electric field strength is bigger.) Thanks.

PF Gold P: 2,912

Quote by waterfall: In the Dirac Equation, the magnetic moment of the electron is calculated as 1. At the 4th term in the power series, it's equal to 1.0011596522. The interacting fields are the electron's own magnetic field and the electron. What about the interactions of, say, two electrons: what would be the Dirac Equation counterpart of 1.0 in the magnetic-moment calculation? Do you calculate the Dirac equation of each electron by adding them, or calculate both of them combined? And if the result is for example 3.0. After the fourth term in the power series, would the result only be 3.0111 or would it be 5.0? (I don't think it would just be a small 3.0111, would it, because the electric field strength is bigger.) Thanks.

Don't know the answer to that one - sorry.

Thanks
Bill

P: 381

Quote by bhobba: Don't know the answer to that one - sorry. Thanks Bill

Anyone else know? atyy?

P: 1,943

Quote by Ken G: Do you mean that the full integral has a closed-form expression (involving modified Bessel functions) for any value of lambda, but the series expression (involving Gamma functions) has terms that only converge absolutely when lambda<1? So if we had lambda>1, and all we had was the series form, we might worry the integral doesn't exist, when in fact it does?

The series for this integral diverges for _any_ nonzero value of lambda. That's why it is called an asymptotic series.
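The growth of the QED coupling discussed in this thread can be sketched numerically. The snippet below is an added illustration, not from the thread: it uses the standard one-loop running formula with only the electron loop, α(Q) = α / (1 − (2α/3π) ln(Q/mₑ)), so it understates the running compared with the measured value of roughly 1/127 near 90 GeV (which also includes loops of the other charged fermions), but it shows the coupling growing with energy toward a formal Landau pole:

```python
# One-loop running of the QED fine-structure constant (electron loop only).
# A hedged sketch of the screening effect described in the thread, not the
# full Standard Model calculation behind the measured ~1/127 at ~90 GeV.
import math

ALPHA_0 = 1 / 137.035999   # coupling at the electron-mass scale
M_E = 0.000511             # electron mass in GeV

def alpha(q_gev):
    """One-loop running coupling at momentum scale q_gev (in GeV)."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * math.pi)) * math.log(q_gev / M_E))

# The coupling grows with energy (inverse coupling drops below 137):
print(1 / alpha(91.2))   # about 134.5 with the electron loop alone

# Landau pole: the (astronomically large) scale where the denominator vanishes
landau_pole = M_E * math.exp(3 * math.pi / (2 * ALPHA_0))
print(landau_pole)       # far beyond any physical scale
```

Adding the muon, tau, and quark loops steepens the running and accounts for the quoted 1/127 near the Z mass; the qualitative picture of charge screening by virtual pairs and a formal Landau pole is the same.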
http://math.stackexchange.com/questions/197697/what-does-it-mean-when-elements-act-as-units-on-a-set
# What does it mean when elements act as units on a set?

I'm reading Eisenbud's Commutative Algebra with a View Toward Algebraic Geometry, and I am confused by one of his proofs. The setup is that $R$ is a commutative ring, $U$ is a multiplicatively closed subset, and $\varphi$ is the natural map $\varphi: R \rightarrow R[U^{-1}]$ sending $r$ to $r/1$. He says that if an ideal $J \subset R$ is of the form $\varphi^{-1}(I)$, where $I \subset R[U^{-1}]$ is an ideal, then "since the elements of $U$ act as units on $R[U^{-1}]/I$, they act as nonzerodivisors on the $R$-submodule $R/J$." What does he mean by "acting as units on" and "acting as nonzerodivisors on"?

---

$R[U^{-1}]$ is an $R$-module through the map $\varphi.$ An element $u\in U$ acts on an element $r/s\in R[U^{-1}]$ by $u\cdot (r/s)=(u/1)(r/s)=ur/s,$ and $\varphi:R\to R[U^{-1}]$ is universal with respect to the property $\varphi(U)\subseteq R[U^{-1}]^\times$. Let $I\subseteq R[U^{-1}]$ be an ideal such that $J=\varphi^{-1}(I).$ Suppose that $u\cdot\overline r=\overline{ur}=0$ for some $\overline r\in R/J.$ Then $ur\in J,$ which implies that $\varphi(ur)=ur/1\in I.$ But $u/1$ is a unit, so $r/1\in I.$ Thus, we see that $r\in J,$ so $\overline r=0.$ Hence, multiplication by any $u\in U$ cannot kill nonzero elements of $R/J.$

---

Let $\phi\colon R \rightarrow R[U^{-1}]/I$ be the canonical map. I think he meant that $\phi(u)$ is a unit, i.e. an invertible element of $R[U^{-1}]/I$, for every $u \in U$ when he said $U$ acts as units on $R[U^{-1}]/I$. "$U$ acts as nonzerodivisors on $R/J$" means that $ux = 0$ implies $x = 0$ for $u \in U$ and $x \in R/J$.

More generally, let $M$ be an $R$-module, let $End_R(M)$ be its endomorphism ring, and let $\psi\colon R \rightarrow End_R(M)$ be the canonical map. Let $r \in R$. We say $r$ acts on $M$ as a unit when $\psi(r)$ is invertible in $End_R(M)$, i.e. $\psi(r)$ is bijective. We say $r$ acts on $M$ as a nonzerodivisor when $\psi(r)$ is injective, i.e. $rx = 0$ implies $x = 0$.
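A concrete sanity check of the argument, with hypothetical choices $R=\mathbb{Z}$, $U=\{1,2,4,8,\dots\}$, and $I=(3)\subset\mathbb{Z}[1/2]$: here $J=\varphi^{-1}(I)=3\mathbb{Z}$, each $u\in U$ maps to a unit in $\mathbb{Z}[1/2]/I\cong\mathbb{Z}/3\mathbb{Z}$, and so acts as a nonzerodivisor on $R/J=\mathbb{Z}/3\mathbb{Z}$:

```python
# R = Z, U = powers of 2, I = (3) in Z[1/2]; then J = phi^{-1}(I) = 3Z
# and R/J = Z/3Z. (Example chosen for illustration only.)
for u in (1, 2, 4, 8, 16):
    # u acts as a unit on Z/3Z: it has a multiplicative inverse mod 3
    assert any((u * v) % 3 == 1 for v in range(1, 3))
    # hence u acts as a nonzerodivisor: u*x = 0 in Z/3Z forces x = 0
    killed = [x for x in range(3) if (u * x) % 3 == 0 and x % 3 != 0]
    assert killed == []
```

The two asserts mirror the two phrases in Eisenbud: invertibility of the induced endomorphism (unit) implies its injectivity (nonzerodivisor).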
http://science.sciencemag.org/content/243/4892/763
Articles

# Changing Composition of the Global Stratosphere

Science 10 Feb 1989: Vol. 243, Issue 4892, pp. 763-770 DOI: 10.1126/science.243.4892.763

## Abstract

The current understanding of stratospheric chemistry is reviewed with particular attention to the influence of human activity. Models are in good agreement with measurements for a variety of species in the mid-latitude stratosphere, with the possible exception of ozone (O3) at high altitude. Rates calculated for loss of O3 exceed rates for production by about 40 percent at 40 kilometers, indicating a possible but as yet unidentified source of high-altitude O3. The rapid loss of O3 beginning in the mid-1970s at low altitudes over Antarctica in the spring is due primarily to catalytic cycles involving halogen radicals. Reactions on surfaces of polar stratospheric clouds play an important role in regulating the abundance of these radicals. Similar effects could occur in northern polar regions and in cold regions of the tropics. It is argued that the Antarctic phenomenon is likely to persist: prompt drastic reduction in the emission of industrial halocarbons is required if the damage to stratospheric O3 is to be reversed.
http://grockit.com/blog/gmat/2012/08/13/inside-the-sausage-factory-sentence-correction-2-of-6/
### Grockit GMAT Prep

#### Inside the Sausage Factory: Sentence Correction (2 of 6)

Last time, we talked about how new questions are born (“You see, son, when a writer and an Official Guide love each other very much . . .”) outside of GMAC’s underground bunker. Questions are made of a variety of topics, and those topics are tested not only in the original question, but in the answer choices. Perhaps this is obvious, but the easiest way to make new questions is to follow the example exactly. Thus, from our example question (OG 13, pg. 672, #2) we can generate the following questions that have a certain, well, familiar sound to them:

Companies label light bulbs in lumens; if they label the lumens higher, the more visible light will be radiated by the bulb.

A. if they label the lumens higher, the more
B. labelling the lumens higher, it is that much more
C. the higher the lumens on the label, the more
D. the higher the lumens on the label, it is that much more that
E. when the lumens on the label are higher, the more it is

Seismologists measure earthquakes on the Richter Scale; if they give it a higher rating, the more intense the earthquake was.

A. if they give it a higher rating, the more intense
B. rating the earthquake higher, it is that much more intense
C. the higher they rate the earthquake, the more intense
D. the higher they rate the earthquake, it is that much more intense that
E. when the Richter Scale rating is higher, the more intense

Very cold temperatures are measured in degrees Kelvin; if they give it a low number, the closer the temperature is to absolute zero.

A. if they give it a low number, the closer
B. when the number is lower, the closer
C. the lower the number, it is that much closer
D. giving it a low number, it is that much closer
E. the lower the number, the closer

Typographers measure font sizes in points; the higher the points, the larger the letters in that font.

A. the higher the points, the larger the letters
B. giving it higher points, it is that much larger
C. when the points are higher, the larger the letters
D. if they give it high points, the larger the letters
E. the higher the points, it is that much larger

GMAT aspirants often value this type of authenticity; they know that the questions they’re seeing adhere closely to GMAC’s standards, and thus gravitate toward the familiar. The problem is perhaps obvious to you if you worked your way through the questions above: they get easier, just as the OG itself gets easier the more times you run through it. You’re answering the same question repeatedly – I’d hope you’d get better at it! I didn’t even change the order of the answer choices in the first two.

Next time we’ll explore how to make them more difficult!
https://www.physicsforums.com/threads/geometric-description-of-the-free-em-field.692841/
# Geometric description of the free EM field

1. May 20, 2013

### dextercioby

I'm not a geometer, so I beg for indulgence on the below: In a modern geometrical description of electromagnetism (either in flat or in curved space-time*), I see at least 3 (or 4) (fiber) bundles over the 4D space-time taken to be the base space: * 1 the cotangent bundle and the bundle of p-forms over space time (de Rham complex). Differential operator: d. Here the EM field appears as: the potential is a 1-form field, the field strength is a 2-form field. F=dA. * 2 the (principal?) $SO_{\uparrow}(1,3)$ bundle of tensors over space time. Here if the group is replaced by SL(2,C), we get the spinor bundle over space time, a concept described in the 13th chapter of Wald's book - in the context of a curved space time. Here I imagine A and F as covariant and double covariant tensors. Differential operator = $\partial_{\mu}$ or $\nabla_{\mu}$. * 3 the so-called U(1) gauge bundle over space time (principal/associated bundle?), with F and A having particular descriptions. Here we have another differential operator d-bar, such that F = d-bar A. Questions: a) How are 1, 2, 3 exactly related? b) Is d-bar at 3 related to d at 1? I suspect yes. How can it be proven? The operator at 2, how's it related to d-bar at 3 or to d at 1? Thank you! Last edited: May 21, 2013

2. May 21, 2013

### fzero

2 is the frame bundle of 1. As the discussion there describes, the bundle of forms is an example of a vector bundle associated to a principal G-bundle. As for 3, any G-bundle has vector bundles associated with the representations of G, see here. We could more accurately claim that the gauge field $A_\mu$ is a section of a bundle which is itself a product of a principal $U(1)$ bundle with $T^*(X)$. While it's a bit transparent here, we can say that $T^*(M)$ is the associated vector bundle for the trivial $U(1)$ representation.

3. May 21, 2013

### dextercioby

Hi Fzero, thank you for the answer.
One more question, though. Wiki is fine for definitions (as any encyclopedia), but can't be used as a textbook. So which sources (as in books) did you use (in college or in research/job after college) to acquire in your mind a valid geometric description of field theory ? And d-bar of mine above is the same as d in 1, thus the differentiation is unique ? 4. May 21, 2013 ### fzero Nakahara, Geometry, Topology, and Physics is decent. If you want something a bit more rigorous, Choquet-Bruhat and DeWitt-Morette, Analysis, Manifolds and Physics is a 2-vol set, though I think the basics are covered in the first volume. AMP has quite a bit more functional analysis than a typical geometry book, since part of their focus was to put various physics methods in a rigorous mathematical framework. If you want a reference tailored to the mathematics alone, Kobayashi and Nomizu, Foundations of Differential Geometry is pretty accessible, if I recall correctly. We can be more specific. If $G$ were non-Abelian, then the gauge fields are $\mathrm{ad}~G$ valued 1-forms, so we can say that they live in $\omega^1(X, \mathrm{ad}~G)$. There is a covariant exterior derivative $d_A$ on this space that extends from the ordinary exterior derivative on $\omega^1(X)$. In your case the $U(1)$ bundle has a trivial connection so we just have the ordinary derivative. 5. May 21, 2013 ### dextercioby Thank you for the recommendations, I'll get hold of them. One more question. There's a 4th bundle in classical field theory: the jet bundle. Where do they come in and how are they related to 1->3 ? Thanks! 6. May 21, 2013 ### fzero I've heard the term (probably in the context of geometric quantization), but I'm not really familiar with jet bundles.
What I can gather from the wiki description is that, given a bundle $E\rightarrow X$ with some sections $\sigma$, we can construct jet bundles in such a way that the sections of the jet bundle are the vectors $(\sigma, \partial^1 \sigma, \ldots, \partial^r \sigma)$, where $\partial^i \sigma$ is the ith derivative with respect to the coordinates on $X$. There's an example on the wiki where they explain that the 1st jet bundle is diffeomorphic to $\mathbb{R} \times T^*(X)$. The first term corresponds to the function, while the 1-form corresponds to its derivative. I would say that the function should more properly be thought of as a section of an $\mathbb{R}$ bundle over $X$, but since this is trivial, I suppose that the claimed product structure is correct. It seems that this structure is useful to some people because the data is precisely what you want to define a Lagrangian functional. I've just never had a reason to learn about any of this.
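As a concrete tie between pictures 1 and 2 of the thread: the 2-form $F = dA$ has tensor components $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, and $dF = 0$ (the Bianchi identity) then holds automatically. A small symbolic sketch, assuming a sample potential for a uniform time-dependent magnetic field $B_z(t)$ (the specific $A_\mu$ below is an illustrative choice, not from the thread):

```python
import itertools

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
B = sp.Function('B')(t)                # hypothetical uniform B_z(t)
A = [0, -y * B / 2, x * B / 2, 0]      # sample potential 1-form A_mu

# Components of F = dA:  F_{mu nu} = d_mu A_nu - d_nu A_mu
F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]))

assert sp.simplify(F + F.T) == sp.zeros(4, 4)  # F is antisymmetric
assert sp.simplify(F[1, 2] - B) == 0           # F_{xy} = B_z, as expected

# Bianchi identity dF = 0: each cyclic sum of derivatives vanishes identically
for a, b, c in itertools.combinations(range(4), 3):
    cyc = (sp.diff(F[b, c], X[a]) + sp.diff(F[c, a], X[b])
           + sp.diff(F[a, b], X[c]))
    assert sp.simplify(cyc) == 0
```

The same computation written with $\nabla_\mu$ instead of $\partial_\mu$ gives the identical $F_{\mu\nu}$ on a torsion-free connection, which is one way of seeing that descriptions 1 and 2 agree for the EM field strength.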
http://mathhelpforum.com/differential-equations/80806-wave-equation-using-seperation-variables.html
# Math Help - Wave Equation using separation of variables

1. ## Wave Equation using separation of variables

Hello, I was wondering if someone could help me out and show me how to solve the following wave equation problem. We don't have a book for this class and I couldn't make it to my last class, so I don't know how to go about this question. Solve the wave equation using separation of variables and show that the solution reduces to D'Alembert's solution: $\mu_{tt} = c^2\mu_{xx}$ $\mu(0,t) = 0$ $\mu(L,t) = 0$ $\mu(x,0) = f(x)$ $\mu_t(x,0) = 0$

2. Originally Posted by flaming Hello, I was wondering if someone could help me out and show me how to solve the following wave equation problem. We don't have a book for this class and I couldn't make it to my last class, so I don't know how to go about this question. Solve the wave equation using separation of variables and show that the solution reduces to D'Alembert's solution: $\mu_{tt} = c^2\mu_{xx}$ $\mu(0,t) = 0$ $\mu(L,t) = 0$ $\mu(x,0) = f(x)$ $\mu_t(x,0) = 0$ I would feel better if you would at least show that you know what "separation of variables" is! Assume $\mu(x,t)= X(x)T(t)$. Then $\mu_{tt}= XT^{''}$ and $\mu_{xx}= X^{''} T$ so the equation becomes $XT^{''} = c^2X^{''} T$. Dividing both sides of the equation by XT gives $\frac{T^{''} }{T}= c^2\frac{X^{''} }{X}$. Since the left side depends only on t and the right side only on x, to be equal they must each equal a constant. That gives two ordinary equations: $c^2\frac{X^{''} }{X}= \lambda$ or $c^2X^{''} = \lambda X$ and $\frac{T^{''} }{T}= \lambda$ or $T^{''} = \lambda T$. You should be able to find the general solution to each of those, depending on $\lambda$ of course. What must $\lambda$ be in order that X(0)= 0 and X(L)= 0? I have a feeling I have repeated what your text book says!
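Carrying the separation through: the boundary conditions force $\lambda = -(n\pi c/L)^2$, and since $\mu_t(x,0)=0$ the time factors are cosines, so $\mu(x,t)=\sum_n b_n \sin(n\pi x/L)\cos(n\pi c t/L)$ with $b_n$ the sine coefficients of $f$. The product-to-sum identity then collapses this to D'Alembert's form $\frac12\left[F(x-ct)+F(x+ct)\right]$, where $F$ is the odd $2L$-periodic extension of $f$. A quick numerical check, assuming a sample $f$ with two sine modes:

```python
import numpy as np

L, c, t = 1.0, 1.0, 0.3
x = np.linspace(0.0, L, 101)

# Sample initial shape with known sine coefficients b_1 = 1, b_3 = 0.5
f = lambda s: np.sin(np.pi * s / L) + 0.5 * np.sin(3 * np.pi * s / L)

# Separation-of-variables series solution at time t
series = (np.sin(np.pi * x / L) * np.cos(np.pi * c * t / L)
          + 0.5 * np.sin(3 * np.pi * x / L) * np.cos(3 * np.pi * c * t / L))

# D'Alembert form with the odd 2L-periodic extension of f
def f_ext(s):
    sm = np.mod(s + L, 2 * L) - L        # reduce argument to [-L, L)
    return np.sign(sm) * f(np.abs(sm))   # odd extension of f

dalembert = 0.5 * (f_ext(x - c * t) + f_ext(x + c * t))

assert np.allclose(series, dalembert)
```

The agreement is exact here because $\sin a\cos b = \frac12[\sin(a-b)+\sin(a+b)]$ term by term; for a general $f$ the same identity applies to every mode of its sine series.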
https://nrich.maths.org/1947/index?nomenu=1
$ABCD$ is a square of side 1 unit. Arcs of circles with centres at $A, B, C, D$ are drawn in. Prove that the area of the central region bounded by the four arcs is: $(1 + \pi/3 - \sqrt{3})$ square units.
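The central region has area $1 + \pi/3 - \sqrt{3} \approx 0.315$ (note the minus sign; a value above 1 is impossible for a subset of a unit square). A quick Monte Carlo check, a sketch rather than a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((1_000_000, 2))   # uniform points in the unit square

# Central region: points within distance 1 of all four corners
inside = np.ones(len(pts), dtype=bool)
for corner in ([0, 0], [0, 1], [1, 0], [1, 1]):
    inside &= ((pts - corner) ** 2).sum(axis=1) <= 1.0

estimate = inside.mean()
exact = 1.0 + np.pi / 3.0 - np.sqrt(3.0)   # ~ 0.3151
assert abs(estimate - exact) < 0.005
```

With a million samples the standard error is about 5e-4, so the tolerance above is comfortably wide.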
https://www.physicsforums.com/threads/potential-energy-and-bernoulli.52652/
Potential Energy and Bernoulli

1. Nov 14, 2004 Skomatth

$$P_1 + \frac{1}{2} \rho (v_1)^2 + \rho g h_1 = P_2 + \frac{1}{2} \rho (v_2)^2 + \rho g h_2$$

In this equation (and regular energy equations, for that matter) is g = 9.8 or -9.8 m/s^2? To make sense mathematically I believe it has to be 9.8, or else pressure and velocity would increase as a fluid increases its height. I think my textbook needs to define when g is negative and when it is positive; it can get confusing sometimes. In kinematic equations you can pick a reference frame and set it positive or negative yourself, but in energy equations it can get confusing. For example $$W_{gravity} = - \Delta PE$$ and also $$\Delta PE = mg \Delta y$$. If it weren't for my teacher showing me that work is the magnitude of F times the magnitude of the displacement times the cosine of the lesser included angle (which my book neglects to mention), I'd be completely confused.

2. Nov 14, 2004 Staff: Mentor

In common usage g is always the magnitude of the acceleration due to gravity; it is a positive quantity (e.g., 9.8 m/s^2). Thus the acceleration due to gravity is g, downward.
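To see why taking $g = 9.8\,\mathrm{m/s^2}$ (a positive magnitude) gives the physically correct behaviour, solve Bernoulli's equation for $P_2$ at a higher elevation with equal speeds: the pressure drops as the fluid rises. A sketch with assumed water-like numbers:

```python
rho, g = 1000.0, 9.8              # water density (kg/m^3), |g| (m/s^2)
P1, v1, h1 = 101325.0, 2.0, 0.0   # assumed upstream state (Pa, m/s, m)
v2, h2 = 2.0, 5.0                 # same speed, 5 m higher

# Rearranged Bernoulli: P2 = P1 + rho*(v1^2 - v2^2)/2 + rho*g*(h1 - h2)
P2 = P1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (h1 - h2)

assert P2 < P1   # pressure decreases with height, as it should
```

Had $-9.8$ been substituted for g, $P_2$ would come out larger than $P_1$, which is the unphysical behaviour the original poster noticed.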
http://www.math-only-math.com/comparison-of-integers.html
Comparison of Integers

Comparison of integers: When we represent integers on the number line, we observe that the value of the number increases as we move towards the right and decreases as we move towards the left.

1 < 2 < 3 < … and -1 > -2 > -3 > …

The whole numbers are on the right side of 0, and on the left side of 0 there are the negative numbers.

Note:

(i) Zero is less than every positive integer, and greater than every negative integer. Zero is neither positive nor negative. For example, 0 < 1, 0 < 10, etc. Also 0 > -1, 0 > -5, etc.

(ii) Every positive integer is greater than every negative integer. For example, 2 > -2, 2 > -1, 1 > -1, etc.

(iii) There is no greatest or smallest integer.

(iv) The smallest positive integer is 1 and the greatest negative integer is -1.
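The rules above can be checked directly in Python, whose integer comparison follows the same number-line order (a trivial sketch for illustration):

```python
# (i) Zero sits between the negatives and the positives
assert 0 < 1 and 0 < 10
assert 0 > -1 and 0 > -5

# (ii) Every positive integer exceeds every negative integer
assert all(p > n for p in (1, 2) for n in (-2, -1))

# Moving right on the number line increases value
assert -3 < -2 < -1 < 0 < 1 < 2 < 3
```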
http://www.cfd-online.com/Forums/main/123635-isothermal-fluid-flow-energy-equation.html
# Isothermal fluid flow and the Energy Equation

September 17, 2013, 15:18 Isothermal fluid flow and the Energy Equation #1 New Member Kay Join Date: Sep 2013 Posts: 2 Rep Power: 0 Hi all, I'm taking an introductory CFD course, and was hoping for a better explanation than my professor gave for a question I had during class: When fluid flow is treated as incompressible and isothermal, the energy equation does not need to be solved. I get that this is the case, but what is the reason behind this/why is this the case? (constant temp = constant internal energy? or am I missing a few steps?) On a related note, what are the major assumptions that we make in this case to reduce the complete set of NS equations to the NS equations for the case of incompressible, isothermal flow? (constant density, anything else?)

September 17, 2013, 15:40 #2 Senior Member Filippo Maria Denaro Join Date: Jul 2010 Posts: 1,998 Rep Power: 26 Quote: Originally Posted by ksq Hi all, I'm taking an introductory CFD course, and was hoping for a better explanation than my professor gave for a question I had during class: When fluid flow is treated as incompressible and isothermal, the energy equation does not need to be solved. I get that this is the case, but what is the reason behind this/why is this the case? (constant temp = constant internal energy? or am I missing a few steps?) On a related note, what are the major assumptions that we make in this case to reduce the complete set of NS equations to the NS equations for the case of incompressible, isothermal flow? (constant density, anything else?) For incompressible flows with homogeneous density and isothermal conditions, the NS equations reduce to

Div V = 0

DV/Dt + Grad p = nu * Lap V

that is, a set of two equations in two unknowns. The problem is well posed by simply specifying initial and boundary conditions; you can see that pressure is not a thermodynamic function.
The temperature field (i.e., the energy equation) is totally decoupled from the dynamics of the flow. That can be found in any book of fluid mechanics.

September 17, 2013, 15:41 #3 New Member Kay Join Date: Sep 2013 Posts: 2 Rep Power: 0 The 4 CFD books I looked into just said that it was the case to not solve the energy equation under isothermal conditions, not WHY it was the case. Thanks for your help!

September 17, 2013, 17:05 #4 New Member Join Date: Sep 2013 Posts: 7 Rep Power: 4 If you consider the NS equations in differential form for isothermal and incompressible conditions, the derivatives of all temperatures and densities equal zero. Doing that, you can easily derive it yourself.

September 17, 2013, 17:45 #5 Senior Member Filippo Maria Denaro Join Date: Jul 2010 Posts: 1,998 Rep Power: 26 Quote: Originally Posted by Jochen If you consider the NS equations in differential form for isothermal and incompressible conditions, the derivatives of all temperatures and densities equal zero. Doing that, you can easily derive it yourself. Maybe some difference in definitions can confuse me... I assume the suffix "iso" as the same as used for isentropic flows. That means that a quantity is constant along a trajectory. Thus, for isothermal flows, I assume something like dT/dt + u*dT/dx = 0, not necessarily that all the derivatives are zero. I would say the latter case is omo-thermal...

September 18, 2013, 04:28 #6 Senior Member Join Date: Dec 2011 Posts: 133 Rep Power: 7 Quote: Originally Posted by ksq The 4 CFD books I looked into just said that it was the case to not solve the energy equation under isothermal conditions, not WHY it was the case. Thanks for your help! Hi, here's my attempt at explaining it: 1) In a compressible flow, the local density of the fluid depends both on pressure and temperature. The mass, momentum and energy conservation equations are coupled (density appears in all of them!) and therefore they must be solved simultaneously.
Pressure is usually calculated from temperature and density using an appropriate equation of state (ideal gas for instance). 2) If you assume incompressible flow (think of a liquid, or even a gas but flowing at a low Mach number) density is constant. The velocity field does not depend anymore on the temperature distribution (since temperature variations do not induce density changes, and also neglecting fluid viscosity dependence on temperature). Therefore, the momentum equation can be solved separately, where pressure is such that it ensures continuity (mass conservation, Div V = 0). The energy (temperature) equation can be solved a posteriori using the solved velocity field to obtain the temperature distribution, even though if your fluid is also assumed to be isothermal then there's no need (T = Tfluid everywhere). Cheers, Michujo. Last edited by michujo; September 18, 2013 at 05:34.

September 18, 2013, 06:27 #7 New Member Join Date: Sep 2013 Posts: 7 Rep Power: 4 Quote: Originally Posted by FMDenaro Maybe some difference in definitions can confuse me... I assume the suffix "iso" as the same as used for isentropic flows. That means that a quantity is constant along a trajectory. Thus, for isothermal flows, I assume something like dT/dt + u*dT/dx = 0, not necessarily that all the derivatives are zero. I would say the latter case is omo-thermal... It is my understanding that isothermal means T = constant everywhere. Never heard of omothermal, though.
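Michujo's point, that in incompressible constant-property flow the temperature is a passive scalar solved after the velocity field, can be sketched numerically: advect and diffuse a temperature pulse with a velocity that is prescribed a priori, with no feedback on the momentum equation. All numbers below are assumed for illustration:

```python
import numpy as np

# 1-D passive scalar: dT/dt + u*dT/dx = alpha*d2T/dx2, with u given a priori
nx = 101
L = 1.0
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u, alpha = 0.5, 0.01                 # prescribed velocity and diffusivity
dt = 0.2 * dx**2 / alpha             # explicit-diffusion-stable time step
T = np.exp(-((x - 0.3) / 0.05) ** 2) # initial temperature pulse

for _ in range(200):                 # advance to t = 200*dt = 0.4
    dTdx = np.gradient(T, dx)
    d2Tdx2 = np.gradient(dTdx, dx)
    T = T + dt * (-u * dTdx + alpha * d2Tdx2)

# The pulse is carried downstream by roughly u*t = 0.2 and smeared by
# diffusion, without ever touching a momentum or energy-coupled equation.
assert 0.45 < x[np.argmax(T)] < 0.55
assert T.max() < 1.0
```

In a compressible solver this one-way coupling would be impossible: density links the temperature back into mass and momentum conservation, which is exactly case 1) above.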
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0001-37652017000100019&lng=pt&nrm=iso&tlng=en
## Print version ISSN 0001-3765, On-line version ISSN 1678-2690

### An. Acad. Bras. Ciênc. vol.89 no.1 Rio de Janeiro jan./mar. 2017

#### http://dx.doi.org/10.1590/0001-3765201720150382

Mathematical Sciences

On topological groups with an approximate fixed point property

1Departamento de Matemática, Universidade Federal do Ceará, Campus do Pici, Bl. 914, 60455-760 Fortaleza, CE, Brazil 2Departamento de Matemática, Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, 05508-090 São Paulo, SP, Brazil 3Departamento de Matemática, Universidade Federal de Santa Catarina, Trindade, 88040-900 Florianópolis, SC, Brazil 4Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON, K1N 6N5, Canada

Abstract

A topological group $G$ has the Approximate Fixed Point (AFP) property on a bounded convex subset $C$ of a locally convex space if every continuous affine action of $G$ on $C$ admits a net $(x_i)$, $x_i \in C$, such that $x_i - gx_i \to 0$ for all $g \in G$. In this work, we study the relationship between this property and amenability.

Key words: Amenable groups; approximate fixed point property; Følner property; Reiter property

INTRODUCTION

One of the most useful known characterizations of amenability is stated in terms of a fixed point property. A classical theorem of (Day 1961) says that a topological group $G$ is amenable if and only if every continuous affine action of $G$ on a compact convex subset $C$ of a locally convex space has a fixed point, that is, a point $x \in C$ with $gx = x$ for all $g \in G$. This result generalizes earlier theorems of (Kakutani 1938) and (Markov 1936) obtained for abelian acting groups. At the same time, an active branch of current research is devoted to the existence of approximate fixed points for single maps. Basically, given a bounded, closed convex set $C$ and a map $f : C \to C$, one wants to find a sequence $(x_n) \subset C$ such that $x_n - f(x_n) \to 0$. A sequence with this property will be called an approximate fixed point sequence.
The main motivation for this topic is purely mathematical and comes from several instances of the failure of the fixed point property in convex sets that are no longer assumed to be compact, cf. (Dobrowolski and Marciszewski 1997, Edelstein and Kok-Keong 1994, Floret 1980, Klee 1955, Nowak 1979) and references therein. One of the most emblematic results on this matter states that if $C$ is a non-compact, bounded, closed convex subset of a normed space, then there exists a Lipschitz map $f \colon C \to C$ such that $\inf_{x \in C} \|x - f(x)\| > 0$ (Lin and Sternfeld 1985). Notice in this case that there is no approximate fixed point sequence for $f$. Previous results of topological flavour were discovered by many authors, including (Klee 1955), who characterized the fixed point property in terms of compactness in the framework of metrizable locally convex spaces. Both results give rise to the natural question whether a given space without the fixed point property might still have the approximate fixed point property. The first thoughts on this subject were developed in (Barroso 2009, Barroso and Lin 2010, Barroso et al. 2012, 2013) in the context of weak topologies. Another mathematical motivation for the study of the approximate fixed point property is the following open question.

Question 0.1. Let $X$ be a Hausdorff locally convex space. Assume $C \subset X$ is a sequentially compact convex set and $f \colon C \to C$ is a sequentially continuous map. Does $f$ have a fixed point?

So far, the best answers for this question were delivered in (Barroso et al. 2012, 2013). Let us just summarize the results.

Theorem 0.2 (Theorem 2.1, Proposition 2.5(i) in (Barroso et al. 2012, 2013)). Let $X$ be a topological vector space, $C \subset X$ a nonempty convex set, and let $f \colon C \to \overline{C}$ be a map.

1. If $C$ is bounded, $f(C) \subset C$ and $f$ is affine, then $f$ has an approximate fixed point sequence.
2. If $X$ is locally convex and $f$ is sequentially continuous with totally bounded range, then $0 \in \overline{\{x - f(x) : x \in C\}}$. And indeed, $f$ has a fixed point provided that $X$ is metrizable.
• 3. If C is bounded and f is τ -to- σ(X,Z) sequentially continuous, then 0{x-f(x):xC}¯σ(X,Z) , where τ is the original topology of X and Z is a subspace of the topological dual X . And, moreover, f has a σ(X,Z) -approximate fixed point sequence provided that Z is separable under the strong topology induced by X . The idea of approximate fixed points is an old one. Apparently the first result on this kind was exploited in (Scarf 1967), where a constructive method for computing fixed points of continuous mappings of a simplex into itself was described. Other important works along these lines can be found in Hazewinkel and Vel ( 1978), Hadžić ( 1996), Idzik ( 1988), Park ( 1972). Approximate fixed point property has a lot of applications in many interesting problems. In (Kalenda 2011) it is proved that a Banach space has the weak-approximate fixed point property if and only if it does not contain any isomorphic copy of 1 . As another instance, it can be used to study the existence of limiting-weak solutions for differential equations in reflexive Banach spaces (Barroso 2009). In this note, we study the existence of common approximate fixed points for a set of transformations forming a topological group. Not surprisingly, the approximate fixed point property for an acting group G is also closely related to amenability of G , however, the relationship appears to be more complex. There is an extremely broad variety of known definitions of amenability of a topological group, which are typically equivalent in the context of locally compact groups yet may diverge beyond this class. One would expect the approximate fixed point property (or rather ‘‘properties,’’ for they depend on the class of convex sets allowed) to provide a new definition of amenability for some class of groups, and delineate a new class of topological groups in more general contexts. This indeed turns out to be the case. 
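A quantitative device that recurs in the amenability arguments below is almost-invariance of sets under translation: a Følner-type ratio $\lambda(g\Phi \triangle \Phi)/\lambda(\Phi)$ tending to zero. For $G = \mathbb{Z}$ with counting measure and the intervals $\Phi_n = \{0, \dots, n-1\}$, translating by $g$ changes exactly $2|g|$ elements when $|g| < n$, so the ratio is $2|g|/n \to 0$. A small Python check; the choice of $G = \mathbb{Z}$ and of intervals is illustrative:

```python
def folner_ratio(n, g):
    """|(g + Phi_n) symmetric difference Phi_n| / |Phi_n|
    for the interval Phi_n = {0, ..., n-1} inside the group Z."""
    phi = set(range(n))
    shifted = {g + k for k in phi}
    return len(phi ^ shifted) / len(phi)   # ^ is symmetric difference

for n in (10, 100, 1000):
    print(n, folner_ratio(n, 3))           # equals 2*|3|/n: 0.6, 0.06, 0.006
```

The same ratio, with Haar measure in place of counting measure, controls the approximate fixed point constructions for discrete and locally compact groups in Section 2.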
We show that a discrete group $G$ is amenable if and only if every continuous affine action of $G$ on a bounded convex subset $C$ of a locally convex space (LCS) $X$ admits approximate fixed points. For a locally compact group, a similar result holds if we consider actions on bounded convex sets $C$ which are complete in the additive uniformity, while in general we can only prove that $G$ admits weakly approximate fixed points. This criterion of amenability is no longer true in the more general case of a Polish group, even if amenability of Polish groups can be expressed in terms of the approximate fixed point property on bounded convex subsets of the Hilbert space. We view our investigation as only the first step in this direction, and so we close the article with a discussion of open problems for further research.

1 - AMENABILITY

Here is a brief reminder of some facts about amenable topological groups. For a more detailed treatment, see e.g. (Paterson 1988). All the topologies considered here are assumed to be Hausdorff. Let $G$ be a topological group. The right uniform structure on $G$ has as basic entourages of the diagonal the sets of the form
$$U_V = \{(g,h) \in G \times G : hg^{-1} \in V\},$$
where $V$ is a neighbourhood of the identity $e$ in $G$. This structure is invariant under right translations. Accordingly, a function $f \colon G \to \mathbb{R}$ is right uniformly continuous if for all $\varepsilon > 0$ there exists a neighbourhood $V$ of $e$ in $G$ such that $xy^{-1} \in V$ implies $|f(x) - f(y)| < \varepsilon$ for every $x, y \in G$. Let $\mathrm{RUCB}(G)$ denote the space of all bounded right uniformly continuous functions, equipped with the uniform norm. The group $G$ acts on $\mathrm{RUCB}(G)$ on the left, continuously and by isometries: for all $g \in G$ and $f \in \mathrm{RUCB}(G)$, the left translate $g \cdot f$ is given by $(g \cdot f)(x) = f(g^{-1}x)$ for all $x \in G$.

Definition 1.1. A topological group $G$ is amenable if it admits an invariant mean on $\mathrm{RUCB}(G)$, that is, a positive linear functional $m$ with $m(1) = 1$, invariant under the left translations.
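For a finite group, Definition 1.1 is witnessed by the normalized counting average: $m(f) = \frac{1}{|G|}\sum_{g \in G} f(g)$ is a positive linear functional with $m(1) = 1$, and it is left-invariant because left translation merely permutes the group. A quick sanity check in Python, with $G = S_3$ and an arbitrary test function (both illustrative choices):

```python
from itertools import permutations

# Elements of S_3 as tuples; composition is (a*b)(i) = a[b[i]].
G = list(permutations(range(3)))

def compose(a, b):
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def mean(f):
    # The normalized counting average over the finite group G.
    return sum(f(x) for x in G) / len(G)

f = lambda x: x[0] + 2 * x[1]        # an arbitrary real function on G

for g in G:
    # Left translate (g.f)(x) = f(g^{-1} x), as in the text.
    gf = lambda x, g=g: f(compose(inverse(g), x))
    assert mean(gf) == mean(f)       # invariance: x -> g^{-1}x permutes G
print("the uniform average on S_3 is a left-invariant mean")
```

The assertion holds because summing $f(g^{-1}x)$ over $x \in G$ visits exactly the same multiset of values as summing $f(x)$.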
Examples of such groups include finite groups, solvable topological groups (including nilpotent, in particular abelian, topological groups) and compact groups. Here are some more examples:

1. The unitary group $\mathcal{U}(\ell_2)$, equipped with the strong operator topology (Harpe 1973).
2. The infinite symmetric group $S_\infty$ with its unique Polish topology.
3. The group $\mathcal{J}(k)$ of all formal power series in a variable $x$ of the form $f(x) = x + \alpha_1 x^2 + \alpha_2 x^3 + \cdots$, $\alpha_n \in k$, where $k$ is a commutative unital ring (Babenko and Bogatyi 2011).

Let us also mention some examples of non-amenable groups:

1. The free discrete group $\mathbb{F}_2$ on two generators. More generally, every locally compact group containing $\mathbb{F}_2$ as a closed subgroup.
2. The unitary group $\mathcal{U}(\ell_2)$, with the uniform operator topology (Harpe 1979).
3. The group $\mathrm{Aut}(X,\mu)$ of all measure-preserving automorphisms of a standard Borel measure space $(X,\mu)$, with the uniform topology, i.e. the topology determined by the metric $d(\tau,\sigma) = \mu\{x \in X : \tau(x) \neq \sigma(x)\}$ (Giordano and Pestov 2002).

The following is one of the main criteria of amenability in the locally compact case.

Theorem 1.2 (Følner's condition). Let $G$ be a locally compact group and denote by $\lambda$ a left invariant Haar measure on $G$. Then $G$ is amenable if and only if $G$ satisfies the Følner condition: for every compact set $F \subset G$ and every $\varepsilon > 0$, there is a Borel set $U \subset G$ of positive finite Haar measure $\lambda(U)$ such that $\frac{\lambda(xU \,\triangle\, U)}{\lambda(U)} < \varepsilon$ for each $x \in F$.

Recall that a Polish group is a topological group whose topology is Polish, i.e., separable and completely metrizable.

Proposition 1.3 (See e.g. (Al-Gadid et al. 2011), Proposition 3.7). A Polish group $G$ is amenable if and only if every continuous affine action of $G$ on a convex, compact and metrizable subset $K$ of a locally convex space $X$ admits a fixed point.

For a most interesting recent survey of the history of amenable groups, see (Grigorchuk and de la Harpe, in press).

2 - GROUPS WITH APPROXIMATE FIXED POINT PROPERTY

Definition 2.1.
Let $C$ be a convex bounded subset of a topological vector space $X$. Say that a topological group $G$ has the approximate fixed point (AFP) property on $C$ if every continuous affine action of $G$ on $C$ admits an approximate fixed point net, that is, a net $(x_i) \subset C$ such that $x_i - g x_i \to 0$ for every $g \in G$.

We will analyse the AFP property of various classes of amenable topological groups.

2.1 - CASE OF DISCRETE GROUPS

Theorem 2.2. The following properties are equivalent for a discrete group $G$:

1. $G$ is amenable;
2. $G$ has the AFP property on every convex bounded subset of a locally convex space.

Proof. (1) $\Rightarrow$ (2). Let $G$ be a discrete amenable group acting by continuous affine maps on a bounded convex subset $C$ of a locally convex space $X$. Choose a Følner net $(\Phi_i)_{i \in I}$, that is, a net of finite subsets of $G$ such that
$$\frac{|g\Phi_i \,\triangle\, \Phi_i|}{|\Phi_i|} \to 0 \quad \text{for all } g \in G.$$
Now, let $\gamma \in G$, fix $x \in C$ and define $x_i = \frac{1}{|\Phi_i|} \sum_{g \in \Phi_i} gx$. Since $C$ is convex, $x_i \in C$ for all $i \in I$. Notice that $|\Phi_i \setminus \gamma\Phi_i| = |\gamma\Phi_i \setminus \Phi_i| = \frac{1}{2}|\gamma\Phi_i \,\triangle\, \Phi_i|$ for all $i \in I$. Therefore we have
$$x_i - \gamma x_i = \frac{1}{|\Phi_i|}\Big[\sum_{g \in \Phi_i} gx - \sum_{g \in \Phi_i} \gamma g x\Big] = \frac{1}{|\Phi_i|}\Big[\sum_{g \in \Phi_i} gx - \sum_{h \in \gamma\Phi_i} hx\Big] = \frac{1}{|\Phi_i|}\Big[\sum_{g \in \Phi_i \setminus \gamma\Phi_i} gx - \sum_{g \in \gamma\Phi_i \setminus \Phi_i} gx\Big]$$
$$= \frac{|\gamma\Phi_i \,\triangle\, \Phi_i|}{2|\Phi_i|}\Big[\frac{1}{|\Phi_i \setminus \gamma\Phi_i|} \sum_{g \in \Phi_i \setminus \gamma\Phi_i} gx - \frac{1}{|\gamma\Phi_i \setminus \Phi_i|} \sum_{g \in \gamma\Phi_i \setminus \Phi_i} gx\Big].$$
Thus $x_i - \gamma x_i \in \frac{|\gamma\Phi_i \,\triangle\, \Phi_i|}{2|\Phi_i|}(C - C)$, and hence $x_i - \gamma x_i \to 0$ since $C$ is bounded.

(2) $\Rightarrow$ (1). Let $G$ be a discrete group acting continuously and by affine transformations on a nonempty compact convex set $K$ in a locally convex space $X$. By hypothesis, there is a net $(x_i) \subset K$ such that $x_i - g x_i \to 0$ for all $g \in G$. By compactness of $K$, this net has accumulation points in $K$. Since $x_i - g x_i \to 0$ for all $g \in G$, this ensures invariance of the accumulation points and shows the existence of a fixed point in $K$. Therefore $G$ is an amenable group by Day's fixed point theorem mentioned in the Introduction. ∎

2.2 - CASE OF LOCALLY COMPACT GROUPS

Recall from (Bourbaki 1963) the following notion of integration of functions with range in a locally convex space. Let $F$ be a locally convex vector space over $\mathbb{R}$ or $\mathbb{C}$. $F'$ denotes the dual space of $F$ and $F'^{*}$ the algebraic dual of $F'$.
We identify, as usual, $F$ (seen as a vector space without topology) with a subspace of $F'^{*}$ by associating to any $z \in F$ the linear form $F' \ni z' \mapsto \langle z', z \rangle$. Let $T$ be a locally compact space and let $\mu$ be a positive measure on $T$. A map $f \colon T \to F$ is essentially $\mu$-integrable if for every element $z' \in F'$, $\langle z', f \rangle$ is essentially $\mu$-integrable. If $f \colon T \to F$ is essentially $\mu$-integrable, then $z' \mapsto \int_T \langle z', f \rangle \, d\mu$ is a linear form on $F'$, i.e. an element of $F'^{*}$. The integral of $f$ is the element of $F'^{*}$ denoted $\int_T f \, d\mu$ and defined by the condition
$$\Big\langle z', \int_T f \, d\mu \Big\rangle = \int_T \langle z', f \rangle \, d\mu \quad \text{for every } z' \in F'.$$
Note that in general we do not have $\int_T f \, d\mu \in F$. But we have the following.

Proposition 2.3 ((Bourbaki 1963), chap. 3, Proposition 7). Let $T$ be a locally compact space, $E$ a locally convex space and $f \colon T \to E$ a continuous function with compact support. If $f(T)$ is contained in a complete (with regard to the additive uniformity) convex subset of $E$, then $\int_T f \, d\mu \in E$.

Theorem 2.4. The following are equivalent for a locally compact group $G$:

1. $G$ is amenable;
2. $G$ has the AFP property on every complete, convex, and bounded subset of a locally convex space.

Proof. (1) $\Rightarrow$ (2). Let $G$ be a locally compact amenable group acting continuously by affine maps on a complete, bounded, convex subset $C$ of a locally convex space $X$. Again, select a Følner net $(F_i)_{i \in I}$ of compact subsets of $G$ such that $\frac{\lambda(gF_i \,\triangle\, F_i)}{\lambda(F_i)} \to 0$ for all $g \in G$. Fix $x \in C$ and let $\eta_x \colon G \ni g \mapsto gx \in C$ be the corresponding orbit map. Define $x_i = \frac{1}{\lambda(F_i)} \int_{F_i} \eta_x(g) \, d\lambda(g)$. By the above, this is an element of $C$; it is the barycenter of the push-forward measure $(\eta_x)_{*}(\lambda|_{F_i})$ on $X$. We have, just like in the discrete case:
$$x_i - \gamma x_i = \frac{1}{\lambda(F_i)}\Big[\int_{F_i} \eta_x(g) \, d\lambda(g) - \gamma \int_{F_i} \eta_x(g) \, d\lambda(g)\Big] = \frac{1}{\lambda(F_i)}\Big[\int_{F_i} \eta_x(g) \, d\lambda(g) - \int_{F_i} \eta_x(\gamma g) \, d\lambda(g)\Big]$$
$$= \frac{1}{\lambda(F_i)} \int_{F_i} \big[\eta_x(g) - \eta_x(\gamma g)\big] \, d\lambda(g) = \frac{1}{\lambda(F_i)}\Big[\int_{F_i \setminus \gamma F_i} \eta_x(v) \, d\lambda(v) - \int_{\gamma F_i \setminus F_i} \eta_x(v) \, d\lambda(v)\Big].$$
Now, let $q$ be any continuous seminorm on $X$. We have
$$q(x_i - \gamma x_i) \le \frac{\lambda(\gamma F_i \,\triangle\, F_i)}{\lambda(F_i)} \, K, \quad \text{where } K = \sup_{v \in G} q(\eta_x(v)) < \infty$$
since $C$ is bounded. Thus $x_i - \gamma x_i \to 0$.

(2) $\Rightarrow$ (1). Same argument as in the case of discrete groups. ∎

The assumption of completeness of $C$ does not look natural in the context of approximate fixed points, but we do not know if it can be removed.
It depends on the answer to the following.

Question 2.5. Let $f$ be an affine homeomorphism of a bounded convex subset $C$ of a locally convex space $X$. Can $f$ be extended to a continuous map (hence, a homeomorphism) of the closure of $C$ in $X$?

Nevertheless, we can prove the following.

Theorem 2.6. Every amenable locally compact group $G$ has the weak approximate fixed point property on each bounded convex subset $C$ of a locally convex space $X$.

Proof. In the notation of the proof of Theorem 2.4, let $\mu_i = (\eta_x)_{*}(\lambda|_{F_i})$ denote the push-forward of the measure $\lambda_i = \lambda|_{F_i}$ along the orbit map $\eta_x \colon G \ni g \mapsto gx \in C$. Let $x_i$ be the barycenter of $\mu_i$. This time, $x_i$ need not belong to $C$ itself, but it will belong to the completion $\hat{C}$ of $C$ (the closure of $C$ in the locally convex space completion $\hat{X}$). For every $g \in G$, denote by $z_i^g$ the barycenter of the measure $g \cdot \mu_i = (\eta_x)_{*}(g\lambda_i)$. Just like in the proof of Theorem 2.4, for every $g$ we have $x_i - z_i^g \to 0$. Now select a net $(\nu_j)$ of measures with finite support on $G$, converging to $\lambda_i$ in the vague topology (Bourbaki 1963). Denote by $y_j$ the barycenter of the push-forward measure $(\eta_x)_{*}(\nu_j)$. Then $(\eta_x)_{*}(\nu_j) \to \mu_i$ in the vague topology on the space of finite measures on the compact space $F_i \cdot x$, and so $y_j \to x_i$ weakly. Clearly, $y_j \in C$, and so $g y_j$ is well-defined and $g y_j \to z_i^g$ for every $g$. It follows that $g y_j - y_j$ converges weakly to $0$ for every $g$. ∎

Remark 2.7. Recall that a topological group $G$ has the weak AFP property on $C$ if every weakly continuous affine action of $G$ on $C$ admits an approximate fixed point net. Clearly, the weak AFP property on each bounded convex subset $C$ implies amenability of $G$ as well.

2.3 - CASE OF POLISH GROUPS

The above criteria do not generalize beyond the locally compact case in the ways one might expect: not every amenable non-locally compact Polish group has the AFP property, even on a bounded convex subset of a Banach space.

Proposition 2.8. The infinite symmetric group $S_\infty$ equipped with its natural Polish topology does not have the AFP property on closed convex bounded subsets of $\ell_1$.
If we think of $S_\infty$ as the group of all self-bijections of the natural numbers $\mathbb{N}$, then the natural (and only) Polish topology on $S_\infty$ is induced from the embedding of $S_\infty$ into the Tychonoff power $\mathbb{N}^{\mathbb{N}}$, where $\mathbb{N}$ carries the discrete topology. We will use the following well-known criterion of amenability for locally compact groups.

Theorem 2.9 (Reiter's condition). Let $p$ be any real number with $1 \le p < \infty$. A locally compact group $G$ is amenable if and only if for any compact set $C \subset G$ and $\varepsilon > 0$, there exists $f \in L^p(G)$, $f \ge 0$, $\|f\|_p = 1$, such that $\|g \cdot f - f\|_p < \varepsilon$ for all $g \in C$.

Proof of Proposition 2.8. Denote by $\mathrm{prob}(\mathbb{N})$ the set of all Borel probability measures on $\mathbb{N}$, in other words, the set of positive functions $b \colon \mathbb{N} \to [0,1]$ such that $\sum_n b(n) = 1$. This is the intersection of the unit sphere of $\ell_1$ with the cone of positive elements, a closed convex bounded subset of $\ell_1$. The Polish group $S_\infty$ acts canonically on $\ell_1$ by permuting the coordinates:
$$S_\infty \times \ell_1(\mathbb{N}) \ni (\sigma, (x_n)_n) \mapsto \sigma \cdot (x_n)_n = (x_{\sigma^{-1}(n)})_n \in \ell_1(\mathbb{N}).$$
Clearly, $\mathrm{prob}(\mathbb{N})$ is invariant, and the restricted action is affine and continuous. We will show that the action of $S_\infty$ on $\mathrm{prob}(\mathbb{N})$ admits no approximate fixed point sequence. Assume the contrary. Make the free group $\mathbb{F}_2$ act on itself by left multiplication and identify $\mathbb{F}_2$ with $\mathbb{N}$ as a countable set. In this way we embed $\mathbb{F}_2$ into $S_\infty$ as a closed discrete subgroup. This means that the action of $\mathbb{F}_2$ by the left regular representation on $\mathrm{prob}(\mathbb{N}) \cong \mathrm{prob}(\mathbb{F}_2)$ also has almost fixed points, and so $\ell_1(\mathbb{F}_2)$, with regard to the left regular representation of $\mathbb{F}_2$, has almost invariant vectors. But this is Reiter's condition ($p = 1$) for $\mathbb{F}_2$, a contradiction with the non-amenability of this group. ∎

However, it is still possible to characterize amenability of Polish groups in terms of the AFP property.

Theorem 2.10. The following are equivalent for a Polish group $G$:

1. $G$ is amenable;
2. $G$ has the AFP property on every bounded, closed and convex subset of the Hilbert space.

Proof. (1) $\Rightarrow$ (2).
It is enough to show that a norm-continuous affine action of $G$ on a bounded closed convex subset $C$ of $\ell_2$ is continuous with regard to the weak topology, because then there will be a fixed point in $C$ by Day's theorem. Let $x \in C$ and $g \in G$ be arbitrary, and let $V$ be a weak neighbourhood of $g \cdot x$ in $C$. The weak topology on the weakly compact set $C$ coincides with the $\sigma(\mathrm{span}\, C, \ell_2)$ topology, hence one can choose $x_1, x_2, \ldots, x_n \in C$ and $\varepsilon > 0$ so that $y \in V$ whenever $|\langle x_i, y - gx \rangle| < \varepsilon$ for all $i$. Denote by $K$ the diameter of $C$. Because the action is norm-continuous, we can find a neighbourhood $U$ of $e$ in $G$ so that $\|u^{-1}x_i - x_i\| < \varepsilon/2K$ for all $i$ and all $u \in U$. The set $Ug$ is a neighbourhood of $g$ in $G$. As a weak neighbourhood of $x$, take the set $W$ formed by all $\zeta \in C$ with $|\langle g^{-1}x_i, \zeta - x \rangle| < \varepsilon/2$ for all $i$; equivalently, the condition on $\zeta$ can be stated as $|\langle x_i, g\zeta - gx \rangle| < \varepsilon/2$ for all $i$. If now $u \in U$ and $\zeta \in W$, one has
$$|\langle x_i, (ug) \cdot \zeta - gx \rangle| = |\langle u^{-1}x_i, g\zeta \rangle - \langle x_i, gx \rangle| \le |\langle u^{-1}x_i, g\zeta \rangle - \langle x_i, g\zeta \rangle| + |\langle x_i, g\zeta \rangle - \langle x_i, gx \rangle| \le \|u^{-1}x_i - x_i\| \cdot K + \frac{\varepsilon}{2} < \varepsilon.$$
This shows that $(ug) \cdot \zeta \in V$, and so the action of $G$ on $C$ is continuous with regard to the weak topology.

(2) $\Rightarrow$ (1). Suppose that $G$ acts continuously and by affine transformations on a compact convex and metrizable subset $Q$ of a LCS $E$. If $C(Q)$ is equipped with the usual norm topology, then $G$ acts continuously by affine transformations on the subspace $A(Q)$ of $C(Q)$ consisting of affine continuous functions on $Q$. Since $Q$ is a metrizable compact set, the space $C(Q)$ is separable, and so is the space $A(Q)$. Fix a dense countable subgroup $H$ of $G$, and let $F = \{f_n : n \in \mathbb{N}\}$ be a dense subset of the closed unit ball of $A(Q)$ which is $H$-invariant. The map
$$T \colon Q \ni x \mapsto \Big(\frac{1}{n} f_n(x)\Big)_{n \in \mathbb{N}} \in \ell_2$$
is an affine homeomorphism of $Q$ onto a convex compact subset of $\ell_2$. The subgroup $H$ acts continuously and by affine transformations on the affine topological copy $T(Q)$ of $Q$ by the obvious rule $h T(x) = T(hx)$. The action of $H$ extends by continuity to a continuous affine action of $G$ on $T(Q)$.
By hypothesis, $G$ admits an approximate fixed point sequence in $T(Q)$, and every accumulation point of this sequence is a fixed point since $Q$ is compact. Therefore $G$ is amenable. ∎

3 - DISCUSSION AND CONCLUSION

3.1 - APPROXIMATE FIXED POINT SEQUENCES

As we have already noted, we do not know if every locally compact amenable group has the AFP property on all convex bounded subsets of locally convex spaces. Another interesting problem is to determine when an acting group possesses not merely an approximate fixed point net, but an approximate fixed point sequence. Recall that a topological group $G$ is $\sigma$-compact if it is a union of countably many compact subsets. It is easy to see that if an amenable locally compact group $G$ is $\sigma$-compact, then it admits an approximate fixed point sequence for every continuous action by affine maps on a closed bounded convex set.

Question 3.1. Let $G$ be a metrizable separable group acting continuously and affinely on a convex bounded subset $C$ of a metrizable locally convex space. If the action has an approximate fixed point net, does there necessarily exist an approximate fixed point sequence?

This is the case, for example, if $G$ is the union of a countable chain of amenable locally compact (in particular, compact) subgroups and the convex set $C$ is complete. Recall in this connection that amenability (and thus Day's fixed point property) is preserved by passing to the completion of a topological group. At the same time, the AFP property is not preserved by completions. Indeed, the group $S_{\mathrm{fin}}$ of all permutations of the integers with finite support is amenable as a discrete group, and so, equipped with any group topology, it will have the AFP property on every bounded convex subset of a locally convex space. However, its completion with regard to the pointwise topology is the Polish group $S_\infty$ which, as we have seen, fails the AFP property on a bounded convex subset of $\ell_1$.

Question 3.2.
Does every amenable group whose left and right uniformities are equivalent (a SIN group) have the AFP property on complete convex sets?

3.2 - DISTAL ACTIONS

Let $G$ be a topological group acting by homeomorphisms on a compact set $Q$. The flow $(G,Q)$ is called distal if, whenever $\lim_\alpha s_\alpha x = \lim_\alpha s_\alpha y$ for some net $(s_\alpha)$ in $G$, then $x = y$. A particular class of distal flows is given by equicontinuous flows, for which the collection of all maps $x \mapsto gx$, $g \in G$, forms an equicontinuous family on the compact space $Q$. We have the following fixed point theorem.

Theorem 3.3 (Hahn (1967)). If a compact affine flow $(G,Q)$ is distal, then there is a $G$-fixed point.

An earlier result by (Kakutani 1938) established the same for the class of equicontinuous flows.

Question 3.4. Is there any approximate fixed point analogue of the above results for distal or equicontinuous actions by a topological group on a (non-compact) bounded convex set $Q$?

3.3 - NON-AFFINE MAPS

Historically, Day's theorem (and before that, the theorem of Markov and Kakutani) was inspired by the classical Brouwer fixed point theorem (Brouwer 1911) and its later more general versions, first for Banach spaces (Schauder 1930) and later for locally convex linear Hausdorff topological spaces (Tychonoff 1935). (Recently it was extended to topological vector spaces (Cauty 2001).) The Tychonoff fixed point theorem states the following. Let $C$ be a nonvoid compact convex subset of a locally convex space and let $f \colon C \to C$ be a continuous map. Then $f$ has a fixed point in $C$. The map $f$ is not assumed to be affine here. However, for a common fixed point of more than one function, the situation is completely different. The papers (Boyce 1969) and (Huneke 1969) contain independent examples of two commuting continuous maps $f, g \colon [0,1] \to [0,1]$ without a common fixed point.
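In one dimension the Brouwer/Tychonoff statement reduces to the intermediate value theorem, and the fixed point of a continuous self-map of an interval can even be located constructively by bisection, which produces approximate fixed points of any desired accuracy along the way. A short illustrative sketch in Python (the example map $\cos$ on $[0,1]$ is my own choice):

```python
import math

def fixed_point_bisection(f, a, b, tol=1e-10):
    """Locate x with f(x) = x for continuous f: [a,b] -> [a,b].

    The function g(x) = f(x) - x satisfies g(a) >= 0 and g(b) <= 0,
    so the intermediate value theorem (1-D Brouwer) gives a root,
    which bisection brackets to within tol."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid          # the root lies to the right of mid
        else:
            hi = mid          # the root lies to the left of mid
    return (lo + hi) / 2

x = fixed_point_bisection(math.cos, 0.0, 1.0)
print(x, abs(math.cos(x) - x))   # x near 0.739085, tiny defect
```

Each bisection midpoint is an approximate fixed point, and the halving of the bracket gives an explicit rate; it is exactly this constructive flavour, in higher dimensions, that (Scarf 1967) pioneered.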
Hence, if a common fixed point theorem is to hold, there should be further restrictions on the nature of the transformations beyond amenability; for Day's theorem, this restriction is that the transformations are affine. The Tychonoff fixed point theorem has also been extended in the context of approximate fixed points. For instance, here is one recent elegant result.

Theorem 3.5 (Kalenda (2011)). Let $X$ be a Banach space. Then every nonempty, bounded, closed, convex subset $C \subset X$ has the weak AFP property with regard to each continuous map $f \colon C \to C$ if and only if $X$ does not contain an isomorphic copy of $\ell_1$.

We do not know if a similar program can be pursued for topological groups.

Question 3.6. Does there exist a non-trivial topological group $G$ which has the approximate fixed point property with regard to every continuous action (not necessarily affine) on a bounded, closed convex subset of a locally convex space? Of a Banach space?

Question 3.7. The same, for the weak AFP property.

Natural candidates are the extremely amenable groups, see e.g. Pestov (2006). A topological group $G$ is extremely amenable if every continuous action of $G$ on a compact space $K$ has a common fixed point. The action does not have to be affine, and $K$ is not supposed to be convex. This is a very strong nonlinear fixed point property. Some of the most basic examples of extremely amenable Polish groups are:

1. The unitary group $U(\ell_2)$ with the strong operator topology (Gromov and Milman 1983).
2. The group $\mathrm{Aut}(\mathbb{Q}, \le)$ of order-preserving bijections of the rational numbers, with the topology induced from $S_\infty$ (Pestov 1998).
3. The group $\mathrm{Aut}(X,\mu)$ of measure-preserving transformations of a standard Lebesgue measure space, equipped with the weak topology (Giordano and Pestov 2002).

However, at least the group $\mathrm{Aut}(\mathbb{Q}, \le)$ does not have the AFP property with regard to continuous actions on the Hilbert space.
To see this, one can use the same construction as in Proposition 2.8, together with the well-known fact that $\mathrm{Aut}(\mathbb{Q}, \le)$ contains a closed discrete copy of $\mathbb{F}_2$.

ACKNOWLEDGMENTS

Cleon S. Barroso is currently a visiting scholar at Texas A&M University. He takes the opportunity to express his gratitude to Prof. Thomas Schlumprecht for his support and friendship. He also acknowledges financial support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) through the Science Without Borders Program, PDE 232883/2014-9. Brice R. Mbombo was supported by a Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) postdoctoral grant, processo 12/20084-1. Vladimir G. Pestov is a Special Visiting Researcher of the program Science Without Borders of CAPES, processo 085/2012.

REFERENCES

AL-GADID Y, MBOMBO B AND PESTOV V. 2011. Sur les espaces test pour la moyennabilité. CR Math Acad Sci Soc R Can 33: 65-77.
BABENKO I AND BOGATYI SA. 2011. The amenability of the substitution group of formal power series. Izv Math 75: 239-252.
BARROSO CS, KALENDA OFK AND REBOUÇAS MP. 2013. Optimal approximate fixed point result in locally convex spaces. J Math Anal Appl 401: 1-8.
BARROSO CS, KALENDA OFK AND LIN PK. 2012. On the approximate fixed point property in abstract spaces. Math Z 271: 1271-1285.
BARROSO CS AND LIN PK. 2010. On the weak approximate fixed point property. J Math Anal Appl 365: 171-175.
BARROSO CS. 2009. The approximate fixed point property in Hausdorff topological vector spaces and applications. Discrete Contin Dyn Syst 25: 467-479.
BOURBAKI N. 1963. Intégration. Hermann, Paris.
BOYCE WM. 1969. Commuting functions with no common fixed point. Trans Amer Math Soc 137: 77-92.
BROUWER LEJ. 1911. Über Abbildung von Mannigfaltigkeiten. Math Ann 71: 97-115.
CAUTY R. 2001. Solution du problème de point fixe de Schauder. Fund Math 170: 231-246.
DAY M. 1961. Fixed-point theorems for compact convex sets. Illinois J Math 5: 585-590.
DOBROWOLSKI T AND MARCISZEWSKI W. 1997. Rays and the fixed point property in noncompact spaces. Tsukuba J Math 21: 97-112.
EDELSTEIN M AND KOK-KEONG T. 1994. Fixed point theorems for affine operators on vector spaces. J Math Anal Appl 181: 181-187.
FLORET K. 1980. Weakly Compact Sets. Lecture Notes in Math 801, Springer-Verlag, Berlin-Heidelberg-New York.
GIORDANO T AND PESTOV V. 2002. Some extremely amenable groups. CR Acad Sci Paris Sér I 334: 273-278.
GRIGORCHUK R AND DE LA HARPE P. In press. Amenability and ergodic properties of topological groups: from Bogolyubov onwards.
GROMOV M AND MILMAN VD. 1983. A topological application of the isoperimetric inequality. Amer J Math 105(4): 843-854.
HADŽIĆ O. 1996. Almost fixed point and best approximations theorems in H-spaces. Bull Austral Math Soc 53: 447-454.
HAHN F. 1967. A fixed-point theorem. Math Systems Theory 1: 55-57.
DE LA HARPE P. 1973. Moyennabilité de quelques groupes topologiques de dimension infinie. CR Acad Sci Paris Sér A 277: 1037-1040.
DE LA HARPE P. 1979. Moyennabilité du groupe unitaire et propriété de Schwartz des algèbres de von Neumann. Lecture Notes in Math 725: 220-227.
HAZEWINKEL M AND VAN DE VEL M. 1978. On almost-fixed-point theory. Canad J Math 30: 673-699.
HUNEKE JP. 1969. On common fixed points of commuting continuous functions on an interval. Trans Amer Math Soc 139: 371-381.
IDZIK A. 1988. Almost fixed point theorems. Proc Amer Math Soc 104: 779-784.
KAKUTANI S. 1938. Two fixed-point theorems concerning bicompact convex sets. Proc Imp Acad Tokyo 14: 242-245.
KALENDA OFK. 2011. Spaces not containing $\ell_1$ have weak approximate fixed point property. J Math Anal Appl 373: 134-137.
KLEE VL. 1955. Some topological properties of convex sets. Trans Amer Math Soc 78: 30-45.
LIN PK AND STERNFELD Y. 1985. Convex sets with the Lipschitz fixed point property are compact. Proc Amer Math Soc 93: 33-39.
MARKOV A. 1936. Quelques théorèmes sur les ensembles abéliens. Dokl Akad Nauk SSSR (NS) 10: 311-314.
NOWAK B. 1979. On the Lipschitzian retraction of the unit ball in infinite dimensional Banach spaces onto its boundary. Bull Acad Polon Sci Sér Sci Math 27: 861-864.
PARK S. 1972. Almost fixed points of multimaps having totally bounded ranges. Nonlinear Anal 51: 1-9.
PATERSON AT. 1988. Amenability. Math Surveys and Monographs 29, Amer Math Soc, Providence, RI.
PESTOV V. 2006. Dynamics of Infinite-dimensional Groups: The Ramsey-Dvoretzky-Milman Phenomenon. University Lecture Series, vol. 40, Amer Math Soc, Providence, RI.
PESTOV V. 1998. On free actions, minimal flows, and a problem by Ellis. Trans Amer Math Soc 350: 4149-4165.
SCARF H. 1967. The approximation of fixed points of a continuous mapping. SIAM J Appl Math 15: 1328-1343.
SCHAUDER J. 1930. Der Fixpunktsatz in Funktionalräumen. Studia Math 2: 171-180.
TYCHONOFF A. 1935. Ein Fixpunktsatz. Math Ann 111: 767-776.

Received: June 25, 2015; Accepted: October 10, 2016.
Correspondence to: Brice R. Mbombo. E-mail: [email protected]
AMS Classification: 47-03, 47H10, 54H25, 37B05.
# The Egestion Definition: What It Is and Why It Is Significant

The "Egestion Definition" is presented as a term used to establish an exact science within mathematics, even though the underlying notion is difficult to define precisely in mathematical terms. What, then, is the significance of the Egestion Definition?

Stated briefly, the Egestion Definition is "Define the theory in Biology." It can be used in various forms; for example, it may be phrased as "Establish a law in Biology, then classify the laws of Mathematics, or of any other science, in accordance with it." The idea is that once a thing is defined, its classification has been carried out, and it therefore becomes easier to understand how that thing came to be.

The term can thus be applied to define a new theory in Biology together with its classification; one instance is the phrasing "Define the theory in Biology." According to George Lefeva, "The Egestion Definition is not confined to Biology alone. The biological sciences have adopted it as a subject and use it to categorize different scientific ideas and theories." The point is not that one defines a theory in Biology directly; rather, the theory is categorized in agreement with the Egestion Definition, which states that all concepts are divided into four forms, classified according to the scientific method.
The term Egestion Definition is used in many parts of the biological sciences, including evolution, metabolism, and cell biology. Every kind of notion has its own description, and the classification rests on its own bases. At present, the Egestion Definition plays an important part in Biology: it refers to the basis on which the different branches of Biology are categorized. It establishes the generalities for the classification, stating that by assessing the sciences according to levels of concepts, different levels of classification can be made. It further claims that theories must be classified according to the hypotheses they generate: theories themselves cannot be classified in Biology, but the hypotheses can, because a hypothesis states the differences among existing biological facts. Biology is a branch of science based on the Law of Idea, and any scientist trying to classify phenomena must apply the Egestion Definition to establish which kind of theory is being applied; this leads to the classification of the idea.
http://mathhelpforum.com/calculus/51627-chain-rule.html
# Math Help - The Chain Rule 1. ## The Chain Rule Hey.. I'm having some serious problems with the Chain Rule! It's my 2nd day on it and I'm getting a little frustrated on these extra problems I am doing but can't get right... 1. x^2√(9-x^2) I know this is the product rule but I don't really get it.. Whenever I use the product rule then take the derivative of √(9-x^2) I just get LOST!!! 2. secx^2 I always get the wrong answer everytime and can't figure it out.. h'(x)= secxtanx X 2(secx) = 2sec^2xtanx The answer shows up as 2xsecx^2tanx^2 help!! I need help understanding the chain rule and I got no idea how to do problems like these!! 2. Originally Posted by elpermic Hey.. I'm having some serious problems with the Chain Rule! It's my 2nd day on it and I'm getting a little frustrated on these extra problems I am doing but can't get right... 1. x^2√(9-x^2) I know this is the product rule but I don't really get it.. Whenever I use the product rule then take the derivative of √(9-x^2) I just get LOST!!! If you're new to this, I would approach it this way. We have $f(x)=x^2\sqrt{9-x^2}$ Let $u(x)=x^2$ and $v(x)=\sqrt{9-x^2}$ We see that $u'(x)=2x$ and to find $v'(x)$, apply the chain rule. Note that the chain rule is $\frac{d}{\,dx}\left[f(g(x))\right]=f'(g(x))\cdot g'(x)$. So we see that $v'(x)=\tfrac{1}{2}(9-x^2)^{-\frac{1}{2}}\cdot\frac{d}{\,dx}\left[9-x^2\right]\implies v'(x)=\frac{1}{2\sqrt{9-x^2}}\cdot(-2x)=-\frac{2x}{2\sqrt{9-x^2}}$ $=-\frac{x}{\sqrt{9-x^2}}$ Now substitute $u(x),~v(x),~u'(x),~\text{and}~v'(x)$ into the equation for the product rule: If $f(x)=u(x)v(x)$, then $f'(x)=u(x)v'(x)+v(x)u'(x)$ Can you take it from here? 2. secx^2 I always get the wrong answer everytime and can't figure it out.. h'(x)= secxtanx X 2(secx) = 2sec^2xtanx The answer shows up as 2xsecx^2tanx^2 help!! I need help understanding the chain rule and I got no idea how to do problems like these!! I don't see anything wrong with your answer. 
Make sure you copied down the problem correctly...otherwise, I'd have to say the book's answer is wrong. --Chris 3. For the first one put $f(x)=x^2\sqrt{9-x^2}=\sqrt{9x^4-x^6},$ hence $f'(x)=\frac{1}{2\sqrt{9x^{4}-x^{6}}}\cdot \left( 9x^{4}-x^{6} \right)'=\frac{9\cdot 4x^{3}-6x^{5}}{2x^{2}\sqrt{9-x^{2}}}=\frac{18x-3x^{3}}{\sqrt{9-x^{2}}}.$ 4. Originally Posted by Krizalid For the first one put $f(x)=x^2\sqrt{9-x^2}=\sqrt{9x^4-x^6},$ hence $f'(x)=\frac{1}{2\sqrt{9x^{4}-x^{6}}}\cdot \left( 9x^{4}-x^{6} \right)'=\frac{9\cdot 4x^{3}-6x^{5}}{2x^{2}\sqrt{9-x^{2}}}=\frac{18x-3x^{3}}{\sqrt{9-x^{2}}}.$ Ha! I was thinking about that...but he was working with product and chain rules...so I did it the way he was doing it. It is always good to see another approach --Chris 5. I was working with your way and I still don't get how to do it... f'(x)= (2x)(√(9-x^2) + (-x/2√(9-x^2))(x^2) = 2x√(9-x^2) - (-x^3/2√(9-x^2)) Did I do something wrong?? 6. Originally Posted by Krizalid For the first one put $f(x)=x^2\sqrt{9-x^2}=\sqrt{9x^4-x^6},$ hence $f'(x)=\frac{1}{2\sqrt{9x^{4}-x^{6}}}\cdot \left( 9x^{4}-x^{6} \right)'=\frac{9\cdot 4x^{3}-6x^{5}}{2x^{2}\sqrt{9-x^{2}}}=\frac{18x-3x^{3}}{\sqrt{9-x^{2}}}.$ That is the answer but how do you get it using the product&chain rule?? I've got no idea what you just did BTW! 7. Originally Posted by elpermic I was working with your way and I still don't get how to do it... f'(x)= (2x)(√(9-x^2) + (-x/2√(9-x^2))(x^2) = 2x√(9-x^2) + (-x^3/2√(9-x^2)) Did I do something wrong?? The bit in red should be plus, and the 2 shouldn't be there. This is why: $f'(x)=\underbrace{(2x)}_{u'(x)}\underbrace{(\sqrt{9-x^2})}_{v(x)}+\underbrace{\left(-\frac{x}{\sqrt{9-x^2}}\right)}_{v'(x)}\underbrace{(x^2)}_{u(x)}=2x\sqrt{9-x^2}+\frac{-x^3}{\sqrt{9-x^2}}$ So we have [as of now] $f'(x)=2x\sqrt{9-x^2}+\frac{-x^3}{\sqrt{9-x^2}}$ To combine the two terms, they need to have a common denominator. In this case, the common denominator is $\sqrt{9-x^2}$.
So we see that $2x\sqrt{9-x^2}\cdot{\color{red}\frac{\sqrt{9-x^2}}{\sqrt{9-x^2}}}+\frac{-x^3}{\sqrt{9-x^2}}=\frac{2x(9-x^2)-x^3}{\sqrt{9-x^2}}$ $=\frac{18x-2x^3-x^3}{\sqrt{9-x^2}}=\color{red}\boxed{\frac{18x-3x^3}{\sqrt{9-x^2}}}$ 8. NOW I understand the problem.. Thanks!
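As a quick numerical sanity check of the boxed derivative (a sketch, not part of the original thread), the closed form can be compared against a finite-difference derivative using only the Python standard library:

```python
import math

def f(x):
    # the function from the thread: x^2 * sqrt(9 - x^2)
    return x**2 * math.sqrt(9 - x**2)

def boxed_derivative(x):
    # the boxed answer: (18x - 3x^3) / sqrt(9 - x^2)
    return (18*x - 3*x**3) / math.sqrt(9 - x**2)

def numeric_derivative(g, x, h=1e-6):
    # central finite difference as an independent check
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 2.5):
    assert abs(boxed_derivative(x) - numeric_derivative(f, x)) < 1e-4
print("boxed formula matches a finite-difference check")
```

The check only exercises points inside the domain |x| < 3, where the square root is real.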
http://clay6.com/qa/8903/the-radius-of-a-circular-disc-is-given-as-24cm-with-a-maximum-error-in-mesu
# The radius of a circular disc is given as $24\,cm$ with a maximum error in measurement of $0.02\,cm$. Use differentials to estimate the maximum error in the calculated area of the disc.

Toolbox:
• Let $y=f(x)$ be a differentiable function; then the quantities $dx$ and $dy$ are called differentials. The differential $dx$ is an independent variable.
• The differential $dy$ is then defined by $dy=f'(x)\,dx$ $(dx \approx \Delta x)$.
• Also $f(x+\Delta x)-f(x)=\Delta y \approx dy$, from which $f(x+\Delta x)$ can be evaluated.
• $\large\frac{\Delta y}{y}=\frac{\text{Actual change in } y}{\text{Actual value of } y}$ $=$ relative error.
• $\large\frac{\Delta y}{y} \times 100 =$ percentage error.

The area of the disc is $A=\pi x^2$, where $x$ is the radius. $\therefore dA=2 \pi x\, dx$. When $x=24\,cm$, $dx \approx \Delta x =0.02$. $\therefore dA =2 \pi \times 24 \times 0.02 = 0.96 \pi\; cm^2$.

answered Aug 12, 2013
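The differential estimate can be compared with the exact worst-case change in area; a small illustrative sketch (not part of the original solution):

```python
import math

r, dr = 24.0, 0.02           # radius and maximum measurement error (cm)

# differential estimate: dA = 2*pi*x*dx = 0.96*pi cm^2
dA = 2 * math.pi * r * dr

# exact worst-case error in the area, for comparison
exact = math.pi * ((r + dr)**2 - r**2)

print(round(dA, 4), round(exact, 4))
```

The two values differ only by the second-order term $\pi\,(dr)^2$, which is why the differential is a good estimate here.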
https://proofwiki.org/wiki/Denial_of_Universality
# De Morgan's Laws (Predicate Logic)/Denial of Universality

(Redirected from Denial of Universality)

## Theorem

Let $\forall$ and $\exists$ denote the universal quantifier and existential quantifier respectively. Then:

$\neg \forall x: \map P x \dashv \vdash \exists x: \neg \map P x$

If not everything is, there exists something that is not.

## Proof

By the tableau method of natural deduction:

$\neg \forall x: \map P x \vdash \exists x: \neg \map P x$

| Line | Pool | Formula | Rule | Depends upon | Notes |
|---|---|---|---|---|---|
| 1 | 1 | $\neg \forall x: \map P x$ | Premise | (None) | |
| 2 | 2 | $\neg \exists x: \neg \map P x$ | Assumption | (None) | |
| 3 | 3 | $\neg \map P {\mathbf a}$ | Assumption | (None) | for an arbitrary $\mathbf a$ |
| 4 | 3 | $\exists x: \neg \map P x$ | Existential Generalisation | 3 | |
| 5 | 2, 3 | $\bot$ | Principle of Non-Contradiction: $\neg \mathcal E$ | 2, 4 | |
| 6 | 2 | $\map P {\mathbf a}$ | Reductio ad Absurdum | 3 – 5 | Assumption 3 has been discharged |
| 7 | 1, 2 | $\forall x: \map P x$ | Universal Generalisation | 6 | as $\mathbf a$ was arbitrary |
| 8 | 1, 2 | $\bot$ | Principle of Non-Contradiction: $\neg \mathcal E$ | 1, 7 | |
| 9 | 1 | $\exists x: \neg \map P x$ | Reductio ad Absurdum | 2 – 8 | Assumption 2 has been discharged |

$\Box$

By the tableau method of natural deduction:

$\exists x: \neg \map P x \vdash \neg \forall x: \map P x$

| Line | Pool | Formula | Rule | Depends upon | Notes |
|---|---|---|---|---|---|
| 1 | 1 | $\exists x: \neg \map P x$ | Premise | (None) | |
| 2 | 2 | $\forall x: \map P x$ | Assumption | (None) | |
| 3 | 1 | $\neg \map P {\mathbf a}$ | Existential Instantiation | 1 | |
| 4 | 2 | $\map P {\mathbf a}$ | Universal Instantiation | 2 | |
| 5 | 1, 2 | $\bot$ | Principle of Non-Contradiction: $\neg \mathcal E$ | 3, 4 | |
| 6 | 1 | $\neg \forall x: \map P x$ | Proof by Contradiction: $\neg \mathcal I$ | 2 – 5 | Assumption 2 has been discharged |

$\blacksquare$

## Law of the Excluded Middle

This theorem depends on the Law of the Excluded Middle, by way of Reductio ad Absurdum. This is one of the axioms of logic that was determined by Aristotle, and forms part of the backbone of classical (Aristotelian) logic.
However, the intuitionist school rejects the Law of the Excluded Middle as a valid logical axiom. This in turn invalidates this theorem from an intuitionistic perspective. ## Examples ### Example: $\forall x \in S: x \le 3$ Let $S \subseteq \R$ be a subset of the real numbers. Let $P$ be the statement: $\forall x \in S: x \le 3$ The negation of $P$ is the statement written in its simplest form as: $\exists x \in S: x > 3$ ## Source of Name This entry was named for Augustus De Morgan.
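As a quick finite-domain illustration (not part of the ProofWiki entry), the equivalence can be checked directly over any finite set, where the quantifiers reduce to `all` and `any`:

```python
# Over a finite set S, "not for all x: P(x)" holds exactly when
# "there exists x: not P(x)" holds.
S = [1, 2, 3, 4]

def check(P):
    return (not all(P(x) for x in S)) == any(not P(x) for x in S)

assert check(lambda x: x <= 3)   # matches the example: negation is "exists x > 3"
assert check(lambda x: x <= 10)  # also holds when the universal statement is true
print("denial of universality verified on a finite domain")
```

This is only a sanity check for finite domains; the theorem above is the general statement.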
https://www.physicsforums.com/threads/trajectory-of-a-ball.712331/
# Trajectory of a ball 1. Sep 24, 2013 ### majinsock 1. The problem statement, all variables and given/known data A cannon shoots a ball at an angle $\theta$ above the horizontal ground a) Neglecting air resistance, find the ball's position (x(t) and y(t)) as a function of time. b) Take the above answer and find an equation for the ball's trajectory y(x). 2. Relevant equations $\frac{dx}{dt}$ = Vocos($\theta$) $\frac{dy}{dt}$ = Vosin($\theta$) - gt 3. The attempt at a solution Okay, so I integrated both sides of both equations and ended up with: x(t) = Vocos($\theta$)t y(t) = Vosin($\theta$)t - 0.5gt^2 which I'm pretty sure solves part a), but I'm having a lot of trouble finding a function of y in terms of x for part b). I tried messing around with some trigonometric identities but I get nowhere. Any tips? Last edited: Sep 24, 2013 2. Sep 24, 2013 ### Saitama Try to find t in terms of x or y from one of the equations you found for part a. Substitute that in the other equation.
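Following Saitama's hint, substituting $t = x/(V_0\cos\theta)$ into $y(t)$ gives $y(x) = x\tan\theta - \frac{g x^2}{2 V_0^2 \cos^2\theta}$. A quick numeric check of that elimination (the sample values below are made up, not from the problem):

```python
import math

v0, theta, g = 20.0, math.radians(40), 9.81   # arbitrary sample values

def xy(t):
    # part (a): parametric position
    return v0 * math.cos(theta) * t, v0 * math.sin(theta) * t - 0.5 * g * t**2

def y_of_x(x):
    # part (b): eliminate t via t = x / (v0*cos(theta))
    return x * math.tan(theta) - g * x**2 / (2 * v0**2 * math.cos(theta)**2)

for t in (0.1, 0.5, 1.0, 2.0):
    x, y = xy(t)
    assert abs(y - y_of_x(x)) < 1e-9
print("y(x) agrees with the parametric solution")
```

The agreement holds for every t because the substitution is exact, not an approximation.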
http://math.stackexchange.com/questions/263228/su2-representation-of-so3
# $SU(2)$ Representation of $SO(3)$ I've often seen it written that $SU(2)$ is a "two-valued representation" of $SO(3)$ (in theoretical physics books mainly). I have a major conceptual issue with this however. I know there is a Lie group isomorphism $SU(2)/\mathbb{Z}_2=SO(3)$ so we can assign to every matrix $R$ in $SO(3)$ one of two matrices $U$ or $-U$ in $SU(2)$. But surely the definition of a representation forces us to choose $D(I) = I$ so this "two-valued" business breaks down? Could someone explain where I'm getting stuck? Also would anyone be able to point me to a resource which treats all this material rigorously? I have no background in Representation Theory and don't particularly want to read a really long text, but at the same time I am unhappy with the heuristic arguments of most physics books. Is there any lucid text taking some middle way? - It sounds like "two-valued representation" is just a garbled attempt to say "double cover". (Or else physicists use "representation" with a broader meaning than do mathematicians). –  Henning Makholm Dec 21 '12 at 14:04 Which books have you read this in? Knowing that might help to explain what they mean. –  mt_ Dec 21 '12 at 14:09 Costa and Fogli - Introduction to Spacetime and Internal Symmetries, among others. It's also written here –  Edward Hughes Dec 21 '12 at 14:30 I've just done a bit of googling, and perhaps the key is in projective representations? Does this make sense to anyone? –  Edward Hughes Dec 21 '12 at 14:48 In response to your reference request at the end of your question, you might check out math.columbia.edu/~woit/QM/fall-course.pdf. These are detailed lecture notes for an undergraduate math class on quantum mechanics at Columbia which just wrapped up. This might be part of the middle ground you're hoping for. 
–  Paul Siegel Dec 21 '12 at 15:28 "$SU(2)$ is a two-valued representation of $SO(3)$" is a rather bizarre sentence - it isn't totally clear what is meant by "two-valued" or "representation". One possible meaning, as suggested in the comments, is simply that $SU(2)$ is a double cover of $SO(3)$. But I suspect that something slightly different is meant. $SU(2)$ has a canonical unitary representation $\pi$ on $\mathbb{C}^2$: namely, view an element of $SU(2)$ as a unitary operator on $\mathbb{C}^2$. As is well-known, this does not descend to an ordinary representation of $SO(3)$. One could try to define $\pi' \colon SO(3) \to U(\mathbb{C}^2)$ by $\pi'(g) = \pi(g')$ where $g'$ is a lift of $g$ to $SU(2)$ along the double cover $SU(2) \to SO(3)$, but the problem is there is no way to choose a lift $g'$ of every $g$ in such a way that $\pi'$ becomes a homomorphism. Nevertheless, there are only two choices for a lift of any $g$ and they differ only by a minus sign; thus while it is impossible to make $\pi'$ an actual homomorphism, it is a homomorphism up to sign: $$\pi'(g_1 g_2) = \pm \pi'(g_1)\pi'(g_2)$$ Thus physicists sometimes like to think of the canonical representation of $SU(2)$ (which incidentally is the spin representation) as a "two valued" representation of $SO(3)$. This can be convenient if you only care about things like the position and momentum of a particle - which only depend on the magnitude of the wavefunction - and not about things like phase and spin. Mathematicians have their own way to make sense of this. Recall that the projective general linear group of a vector space $V$ over a field $F$ is defined to be $GL(V)/F^\times$ where $F^\times$ is the multiplicative group of $F - 0$. One defines a projective representation of a group $G$ on $V$ to be a homomorphism $G \to PGL(V)$. 
If $V$ has a Euclidean / Hermitian structure (in the real / complex case), one can also speak of projective orthogonal / unitary representations; these are homomorphisms into the unitary group of $V$ modulo the unit group of the ground field. With these definitions, the discussion above implies that the canonical representation of $SU(2)$ is in fact a projective orthogonal representation of $SO(3)$ (since the unit group of $\mathbb{R}$ is just $\{\pm 1\}$). So in conclusion, I interpret the sentence "$SU(2)$ is a two-valued representation of $SO(3)$" to mean "the canonical representation of $SU(2)$ is a projective representation of $SO(3)$". - That's exactly what I wanted to know - thank you so much! –  Edward Hughes Dec 21 '12 at 15:24 "since the unit group of R is just {±1}"; not really. But they are the only scalars in the image of $SO(3)$. –  Alex B. Dec 21 '12 at 16:01 Just as the projective unitary group is defined to be $U(n)/U(1)$, the projective orthogonal group is usually defined to be $O(n)/O(1)$, and if I'm not mistaken $O(1) = \{\pm1\}$. –  Paul Siegel Dec 21 '12 at 16:36 @Paul I was only objecting to your saying that the unit group of $\mathbb{R}$ is $\pm 1$. The unit group of $\mathbb{R}$ is $\mathbb{R}^\times=\mathbb{R}\backslash 0$. –  Alex B. Dec 21 '12 at 16:41 Ah, well there is disagreement in terminology here. If you're an algebraist I suppose the "unit group" in a ring is the set of all elements with multiplicative inverses. But to differential geometers the unit group in a normed ring often means the multiplicative group of unit vectors. I think this is the standard terminology in the literature on Clifford algebras, for instance (which is the relevant structure here). In any event, I suppose the ambiguity is worth mentioning. –  Paul Siegel Dec 21 '12 at 16:49 The problem with the physics definition is that it is not the standard definition of a Lie group representation (e.g. 
as given in Fulton and Harris, Hall's Lie Groups, Procesi's Lie groups, or any other mathematical text on representation theory). What they actually mean is that we have a covering space $\Phi : \textrm{SU}(2) \to \textrm{SO}(3)$ where $\Phi$ is as given in my answer here. When they say $\Phi$ is two to one, they mean that the cardinality of the fiber $\Phi^{-1}(x)$ for any $x \in \textrm{SO}(3)$ is equal to 2. This is constant on all of $\textrm{SO}(3)$ because it is connected (one way to see this IIRC is that $\exp : \mathfrak{so}_3\longrightarrow \textrm{SO}(3)$ is surjective). -
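One concrete way to see the double cover numerically (an illustration, not taken from the answers above): unit quaternions realise $SU(2)$, and the standard quaternion-to-rotation formula sends both $q$ and $-q$ (i.e. both $U$ and $-U$) to the same element of $SO(3)$:

```python
import math

def rot_from_quat(w, x, y, z):
    # standard unit-quaternion -> SO(3) matrix; quaternions double-cover SO(3)
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

a = 1.0  # rotation angle about the z-axis
q = (math.cos(a/2), 0.0, 0.0, math.sin(a/2))
q_neg = tuple(-c for c in q)

R1 = rot_from_quat(*q)
R2 = rot_from_quat(*q_neg)

# q and -q project to the *same* rotation -- the fiber has exactly two points
assert all(abs(R1[i][j] - R2[i][j]) < 1e-12 for i in range(3) for j in range(3))
# and R1 is indeed the usual rotation by angle a about z
assert abs(R1[0][0] - math.cos(a)) < 1e-12 and abs(R1[1][0] - math.sin(a)) < 1e-12
print("U and -U project to the same rotation")
```

Note the half-angle $a/2$ in the quaternion: traversing a full $2\pi$ rotation in $SO(3)$ moves $q$ to $-q$, which is exactly the sign ambiguity the projective-representation discussion formalises.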
https://asmedigitalcollection.asme.org/ICNMM/proceedings-abstract/ICNMM2013/55591/V001T01A002/246472
This paper presents experimental results on the friction factor of gaseous flow in a PEEK micro-tube with a relative surface roughness of 0.04%. The experiments were performed for nitrogen gas flow through a micro-tube 514.4 μm in diameter and 50 mm in length. Three pressure-tap holes were drilled at 5 mm intervals and the local pressures were measured. The friction factor is obtained from the measured pressure differences. The experiments were conducted in the turbulent flow region. The friction factors obtained in the present study are compared with those in the available literature and also with numerical results. The friction factor obtained is slightly higher than the value given by the Blasius formula.
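The Blasius formula referred to above is commonly quoted in Darcy form as $f = 0.3164\,Re^{-1/4}$ for smooth pipes in turbulent flow; the abstract does not state which form or validity range it uses, so the following is only an illustrative sketch under that assumption:

```python
def blasius_friction_factor(re):
    """Darcy friction factor from the Blasius correlation.

    Commonly quoted as valid for smooth pipes, roughly 4e3 < Re < 1e5.
    """
    if re <= 0:
        raise ValueError("Reynolds number must be positive")
    return 0.3164 * re ** -0.25

# e.g. at Re = 10^4 the correlation gives 0.3164 / 10 = 0.03164
print(blasius_friction_factor(1e4))
```

A measured friction factor "slightly higher" than this baseline, as reported, would be consistent with small roughness or compressibility effects in the micro-tube.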
https://discuss.codechef.com/t/hxor-editorial/82105
# HXOR - Editorial

Tester: Alexander Morozov
Editorialist: Ajit Sharma Kasturi

EASY-MEDIUM

Bitwise XOR

### PROBLEM:

We are given a sequence of size N with entries A_1, A_2,...., A_N. In one operation we can take two integers i and j, where 1 \leq i \lt j \leq N, and a non-negative integer p, and do the following: change A_i to A_i \oplus 2^p and A_j to A_j \oplus 2^p, where \oplus denotes the bitwise xor operation. We need to get the lexicographically smallest sequence after applying this operation exactly X times. Here N \geq 2.

### QUICK EXPLANATION:

Iterate i from 1 to N-1 and try to minimize the value of A_i. This can be done as follows: for each A_i, traverse from the most significant bit to the least significant bit and do the following:

• If the current bit b is set to 1, then search for the nearest next index j after i where bit b of A_j is set to 1 (if we don't find any such j, take j=N). Now take p=b, i and j and perform the operation mentioned in the problem.

Also keep track of the number of operations performed. If at any stage the total number of operations equals X, we stop the above process and output the updated sequence. If we still have some number of operations Y left to do, we do the following on the current updated sequence:

• If N=2 and Y is odd, then perform the operation once with i=N-1, j=N and p=0 and output the sequence.
• Else output the current sequence as the answer.

### EXPLANATION:

First let's understand what the operation means. We take two integers i and j, where 1 \leq i \lt j \leq N, and a non-negative integer p, and apply A_i = A_i \oplus 2^p and A_j = A_j \oplus 2^p. This means that we are toggling the p^{th} bit of A_i and A_j (if the bit is set, we unset it, and vice versa). Since we need the lexicographically smallest sequence after exactly X operations, we first try to minimize A_1, then A_2, then A_3, ....., then A_N, and stop whenever we run out of operations.
Let's think about this problem in this manner: suppose we have minimized A_1, A_2, ..., A_{i-1} to their maximum extent (i.e., made all of A_1, A_2, A_3, ..., A_{i-1} = 0) and we are now trying to minimize A_i. (Let i \lt N; we deal with the case i=N later.) What can we do?

Now that we are trying to minimize A_i, we minimize it from the most significant bit (MSB) to the least significant bit (LSB), since unsetting the higher-valued bits reduces the value of A_i the most. Now for the main question: if a bit b, while traversing in order from MSB to LSB, is set to 1, we want to make it 0, so we apply the operation for the current i with p=b. But what value of j should we take?

• We must take j \gt i: taking j \lt i would disturb a value A_j that has already been minimized.
• For the j we take, it is optimal to pick a j such that A_j also has bit b set to 1, if one exists (since then we unset bit b for both A_i and A_j). Among all such options for j, it is better to take the j nearest to i, since we are aiming for the lexicographically smallest sequence.
• If no j \gt i has bit b of A_j set to 1, we still must pair the operation with some j; the best choice is j=N, since setting bit b in the last element harms the lexicographic order the least.

Therefore, we perform operations in this manner until we run out of operations or reach A_N. If we run out of operations, we immediately output the resultant sequence. Otherwise we have reached A_N with some Y \gt 0 operations remaining. Since each number has at most \log(10^9) bits, we have performed at most (N-1) \cdot \log(10^9) operations so far. We want to preserve the current sequence if possible, since it is the lexicographically smallest one.
Consider the following cases:

• N=2 \ \ and \ \ Y \ is \ odd: We can perform the operation (p=0, i=N-1, j=N) Y times. Since performing the same operation twice is equivalent to not performing it at all, the net effect is that of a single application.

• N \ge 3 \ \ and \ \ Y = 1: In this case we can show that we can still preserve the lexicographically smallest sequence. The main idea is to perform an extra dummy operation before starting our main approach. Suppose that initially, before applying any operation, we have an array A of length N, and suppose we can find three indices 1 \le i \lt j \lt k \le N which all have their bit set at position loc. Then, before proceeding with the main approach, apply the operations (p=loc, i, k) and (p=loc, j, k). Observe that instead of performing 1 operation to unset the bits at the two indices i and j as described in the main approach, we have performed 2 operations — one extra dummy operation. If we don't find any such i, j, k, then we are sure to find two indices 1 \le i, j \le N where A_i has the bit set at position loc and j is the smallest index with that bit unset. Before proceeding with the main approach, apply the operation (p=loc, i, j). This does not affect the result of the main approach, and we have again performed one extra dummy operation, preserving the lexicographically smallest sequence.

• Else: We can show that in this case too we can preserve the lexicographically smallest sequence. If Y is even, choose any valid operation and perform it Y times: since Y is even, there is effectively no change at all. If Y is odd, choose any valid operation and perform it Y-3 times; since Y-3 is even, these operations have no net effect. We still need to perform 3 operations.
We can perform the operation in the following way: (p=0, i=N-1, j=N), (p=0, i=N-2, j=N-1) and (p=0, i=N-2, j=N). If we observe carefully, bit 0 of each of the indices N-2, N-1 and N is toggled 2 times, which is effectively equivalent to not performing any operation at all.

### TIME COMPLEXITY:

O(N \cdot \log(10^9)) for each testcase.

### SOLUTION:

Editorialist's solution

```cpp
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int t;
    cin >> t;
    while (t--)
    {
        int N, X;
        cin >> N >> X;
        vector<int> a(N);
        vector<vector<int>> bit_locs(31);
        vector<int> ptr(31, 0);
        for (int i = 0; i < N; i++)
            cin >> a[i];
        for (int i = 0; i < N; i++)
            for (int j = 30; j >= 0; j--)
                if ((1 << j) & a[i])
                    bit_locs[j].push_back(i);
        int operations = 0;
        for (int i = 0; i < N - 1; i++)
        {
            if (operations == X)
                break;
            for (int j = 30; j >= 0; j--)
            {
                if (operations == X)
                    break;
                if ((1 << j) & a[i]) // now find nearest bit j which is set if possible
                {
                    if (ptr[j] + 1 < (int)bit_locs[j].size())
                    {
                        a[i] ^= (1 << j);
                        a[bit_locs[j][ptr[j] + 1]] ^= (1 << j);
                        ptr[j] += 2;
                    }
                    else
                    {
                        a[i] ^= (1 << j);
                        a[N - 1] ^= (1 << j);
                    }
                    operations++;
                }
            }
        }
        int Y = X - operations;
        if (N == 2 && Y % 2 == 1)
        {
            a[N - 1] ^= 1;
            a[N - 2] ^= 1;
        }
        for (int i = 0; i < N; i++)
            cout << a[i] << " ";
        cout << endl;
    }
}
```

# VIDEO EDITORIAL:

Please comment below if you have any questions, alternate solutions, or suggestions.

thanks, nice question

very good question and very high Plag also CodeChef do something

What should be the output of this?

i/p:
1
3 2
2 2 3

o/p: 0 1 2 (obtained on accepted solutions)

But we can obtain (0,0,3) here:
Step-1: Choose i=1, j=3, p=1; new array obtained (0,2,1)
Step-2: Choose i=2, j=3, p=1; new array obtained (0,0,3)

(0,0,3) is lexicographically smaller than (0,1,2). Or are we not allowed to make changes to the initial array, and a totally new one is to be obtained?

0 0 3 is the right ans… it should not accept 0 1 2 as an ans.
i think they should not consider p=0 when we reach A_N and still have some operations left to do, as mentioned in the explanation:

• Y=1: If Y=1, we must perform 1 more operation compulsorily. The best way to do this, keeping the resultant sequence lexicographically as small as possible, is to take (p=0, i=N-1, j=N) and apply the operation.
• N=2 and Y is odd: We can perform the operation (p=0, i=N-1, j=N) Y times, so that this operation has its effect only once, since performing the same operation twice is equivalent to not performing the operation at all.

Can anyone please tell me why my code is giving TLE for one case. The complexity is O(32*N). Code - Here

change unordered_map to ordered map

Can somebody help me solve this issue, please… my code

Hi. Yes, you are right. Thanks for mentioning this. I have updated the editorial. When N \ge 3 and Y = 1, we can preserve the lexicographically smallest sequence, and I have written the proof for it.

very weak testing, such that some wrong solutions (or codes with runtime errors in some cases) were successfully accepted

For Input:
1
3 3
2 4 6

The output according to this editorial and my understanding: 0 1 1
But we can achieve 0 0 0 if we do:
operation1: (p=1,i=1,j=3): 0 6 6
operation2: (p=2,i=2,j=3): 0 2 2
operation3: (p=1,i=2,j=3): 0 0 0

The answer will be 0 0 0. Please go through the updated editorial. The case where N \ge 3 and Y=1 must preserve the lexicographically smallest sequence. Also, your operation 1 must be (p=1, i=1, j=2).

During the contest, I tried the last step even if n > 2, which caused WA. Now, adding this condition made the solution AC. Anyway, my solution is O(n^2 logn) and is still AC. https://www.codechef.com/viewsolution/40444884

Try using bitwise left shift operators "<<" instead of "**" to calculate 2^p. Bitwise calculations are faster than normal arithmetical calculations.
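A compact Python sketch of the editorial's greedy (my own re-implementation, not setter/tester code; it omits the leftover-operation bookkeeping for N ≥ 3, where the editorial shows the result can be preserved) reproduces the corrected answers discussed in this thread:

```python
def hxor(a, x_ops):
    """Greedy from the editorial: clear bits of a[0..n-2] from MSB to LSB,
    pairing each cleared bit with the nearest later index that has it set
    (or, failing that, with the last element).  For n >= 3 any leftover
    operations can be absorbed without changing the result; for n == 2 an
    odd leftover forces one extra toggle of bit 0 in both elements."""
    a, n, ops = list(a), len(a), 0
    for i in range(n - 1):
        for b in range(30, -1, -1):
            if ops == x_ops:
                return a                       # exactly X operations used
            if a[i] >> b & 1:
                j = next((k for k in range(i + 1, n) if a[k] >> b & 1), n - 1)
                a[i] ^= 1 << b
                a[j] ^= 1 << b
                ops += 1
    if n == 2 and (x_ops - ops) % 2 == 1:
        a[0] ^= 1
        a[1] ^= 1
    return a

print(hxor([2, 2, 3], 2))             # -> [0, 0, 3], the thread's test case
print(hxor([2, 4, 6], 3))             # -> [0, 0, 0], as argued in the comments
print(hxor([14, 1, 10, 12, 6, 5], 7)) # -> [0, 0, 0, 0, 0, 10]
```

The last call matches the step-by-step trace shown in the comments below, ending with all of the XOR mass pushed into the final element.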
Can anyone suggest a testcase for which my program is showing WA? Thanks in adVance to the responder… And remember to SavE watEr…

Tried, but the problem still gives the same TLE…

Use this: print(" ".join(map(str, arr)))

Try to segregate your problem into parts. What I did during the contest: I noted that whenever X is greater than N, the minimum sequence will be the answer. Whenever N is greater, you need to build the series operation by operation. The task on which you are getting TLE bugged me for 3 days straight, after which I segregated the tasks and got AC. So, whenever X is greater than N you can just print the minimum series. But how do you get it? You just need to XOR every element in the list/array/vector, store the final value in the last index, and make all the other values zero.

Example input:

```
1
6 7
14 1 10 12 6 5
```

Doing it operation by operation (the bracketed pair is the one operated on next):

```
step 0: (14)  1  (10) 12   6   5
step 1:  (6)  1    2 (12)  6   5
step 2:  (2)  1   (2)  8   6   5
step 3:   0  (1)   0   8   6  (5)
step 4:   0   0    0  (8)  6  (4)
step 5:   0   0    0   0  (6) (12)
step 6:   0   0    0   0  (2)  (8)
step 7:   0   0    0   0   0   10
```

But if you XOR all the elements (^ is XOR):

14 ^ 1 ^ 10 ^ 12 ^ 6 ^ 5 = 10

which is the last value of the list/array/vector, and by making the previous elements 0 you will get the minimum sequence. You can take reference from my solution. My Solution

@joy_321 Write a condition as follows:

```python
if i == n - 2 and j == n - 1 and arr[i] == 0:
    if n == 2 and x % 2 == 1:
        arr[i] ^= 1
        arr[j] ^= 1
    break
```
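The "XOR everything into the last element" shortcut described above fits in a couple of lines; this is an illustrative Python sketch (`minimal_sequence` is my own name, not from the thread):

```python
from functools import reduce
from operator import xor

# When enough operations are available, every bit can be pushed into the
# last element, giving the lexicographically smallest sequence
# [0, 0, ..., XOR of all elements].
def minimal_sequence(a):
    return [0] * (len(a) - 1) + [reduce(xor, a)]

print(minimal_sequence([14, 1, 10, 12, 6, 5]))  # [0, 0, 0, 0, 0, 10]
```

This matches the hand trace above without performing the seven individual operations.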
https://www.physicsforums.com/threads/explosion-question.77386/
# Explosion question

1. May 30, 2005

### clucky

Hey everyone! I'm a newbie here, how are you all today? Anywho, I was doing my physics homework and I came across this question, and I'm stuck :( Can anyone tell me just how to start it?

A shell is shot with an initial velocity of 20 m/s, at an angle of 60 degrees with the horizontal. At the top of the trajectory, the shell explodes into two fragments of equal mass. One fragment, whose speed immediately after the explosion is zero, falls vertically. How far from the gun does the other fragment land, assuming that the terrain is level and that air drag is negligible?

Totally stuck! Don't even know where to start :(

2. May 30, 2005

### inha

2nd edit: ok. Assuming the explosion means "breaks into two parts without making too much of a fuss" and air resistance is neglected, the x-component of the system's momentum is conserved, and you can solve for the x-component of velocity of the half that keeps moving in the x-direction. Can you do it from here on?

If approached like this the problem is so unrealistic that I'm not sure if I'm giving you good advice. Maybe some of the official homework helpers could confirm this? It feels odd to think about a system that breaks up like this, but since you can break up the equations of motion into components, the x-component of the system's momentum should be conserved.

Last edited: May 30, 2005

3. May 30, 2005

### OlderDan

I will assume you can find the position of the shell at explosion. At that time the velocity is horizontal. Momentum will be conserved. Immediately after the explosion one half of the shell will be at rest, then it falls. How fast and in what direction will the other half be moving immediately after the explosion? How does the horizontal distance travelled after the explosion compare to the horizontal distance travelled before the explosion?

4. May 30, 2005

### inha

wahh! I doubted myself in vain.
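The thread stops short of a number, but OlderDan's outline can be followed to the end. This is my own hedged Python sketch (g = 9.8 m/s² is my assumption, and all variable names are mine):

```python
import math

# A sketch of OlderDan's outline for the 20 m/s, 60-degree shell.
v0, theta, g = 20.0, math.radians(60), 9.8
vx = v0 * math.cos(theta)          # horizontal speed, unchanged until the burst
t_apex = v0 * math.sin(theta) / g  # time to reach the top of the trajectory
x_apex = vx * t_apex               # horizontal distance travelled so far

# Momentum conservation at the apex: m*vx = (m/2)*0 + (m/2)*v2  =>  v2 = 2*vx.
v2 = 2 * vx
# The moving fragment falls from the apex for the same time t_apex.
x_land = x_apex + v2 * t_apex
print(round(x_land, 1))  # about 53.0 m
```

Since the surviving half carries all the momentum, it leaves the apex at twice the horizontal speed and covers twice the pre-explosion horizontal distance in the equal remaining fall time, landing at 3/2 of the unexploded range.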
https://mathhelpboards.com/threads/group-of-polynomials-with-coefficients-from-z_10.8788/
# Group of polynomials with coefficients from Z_10.

#### ZaidAlyafey

##### Well-known member
MHB Math Helper
Jan 17, 2013
1,667

Contemporary Abstract Algebra by Gallian
This is Exercise 14 Chapter 3 Page 69

Question

Let $G$ be the group of polynomials under addition with coefficients from $Z_{10}$. Find the order of $f=7x^2+5x+4$.

Note: this is not the full question, I removed the remaining parts.

Attempt

For simplicity I represented the coefficients as an ordered triple $(a,b,c)$. Then to compute $|f|$, I add $f$ until I get a positive integer $n$ such that $f^n=(0,0,0)$

$$\displaystyle f^2=(7,5,4)+(7,5,4)=(4,0,8)$$
$$\displaystyle f^3=(4,0,8)+(7,5,4)=(1,5,2)$$
$$\displaystyle f^4=(1,5,2)+(7,5,4)=(8,0,6)$$
$$\displaystyle f^5=(8,0,6)+(7,5,4)=(5,5,0)$$
$$\displaystyle f^6=(5,5,0)+(7,5,4)=(2,0,4)$$
$$\displaystyle f^7=(2,0,4)+(7,5,4)=(9,5,8)$$

That approach seems wrong since that could go anywhere. If we put $f=ax^2+bx+c$ then $f=(a,b,c)$ either circulates or has finite order, but what would be the method to prove whether it is of finite or infinite order?

#### Evgeny.Makarov

##### Well-known member
MHB Math Scholar
Jan 30, 2012
2,502

$$\displaystyle f^2=(7,5,4)+(7,5,4)=(4,0,8)$$
$$\displaystyle f^3=(4,0,8)+(7,5,4)=(1,5,2)$$
$$\displaystyle f^4=(1,5,2)+(7,5,4)=(8,0,6)$$
$$\displaystyle f^5=(8,0,6)+(7,5,4)=(5,5,0)$$
$$\displaystyle f^6=(5,5,0)+(7,5,4)=(2,0,4)$$
$$\displaystyle f^7=(2,0,4)+(7,5,4)=(9,5,8)$$

Why did you stop??? Three more steps, and you would have reached enlightenment!

#### mathbalarka

##### Well-known member
MHB Math Helper
Mar 22, 2013
573

Hint : $\Bbb Z/10 \Bbb Z$ is of order $10$.

See, Z, every member of I&S goes senile from time to time (not to mention me), as I suspected before.

#### ZaidAlyafey

##### Well-known member
MHB Math Helper
Jan 17, 2013
1,667

Ok, I must be kidding.
It is easy to see that for the system of congruences

$$\displaystyle 7n \equiv 0 \ (\text{mod} \ 10)$$
$$\displaystyle 5n \equiv 0 \ (\text{mod} \ 10)$$
$$\displaystyle 4n \equiv 0 \ (\text{mod} \ 10)$$

$n=10$ is an immediate solution.

#### mathbalarka

##### Well-known member
MHB Math Helper
Mar 22, 2013
573

$$\displaystyle n=10$$ is an immediate solution.

And smallest. None of 7, 5 or 4 is 0 modulo 10.

#### ZaidAlyafey

##### Well-known member
MHB Math Helper
Jan 17, 2013
1,667

Hint : $\Bbb Z/10 \Bbb Z$ is of order $10$.

See, Z, every member of I&S goes senile from time to time (not to mention me), as I suspected before.

I hope that one won't last for a long time!

#### mathbalarka

##### Well-known member
MHB Math Helper
Mar 22, 2013
573

Zaid said:

I hope that one won't last for a long time!

That forum's cursed. Forget all hope of the light he who enters there!!

#### Deveno

##### Well-known member
MHB Math Scholar
Feb 15, 2012
1,967

Just a few comments:

1. Your observation that additively, $\Bbb Z_{10}[x]$ is isomorphic to $\Bbb Z_{10}^{\infty}$ is actually a very astute one. The polynomials of degree less than $n$ form a subgroup isomorphic to $\Bbb Z_{10}^n$ (in this case, $n$ = 3).

2. It is customary to write $f^n$ in an abelian group as $nf$. Beginners in abstract algebra always seem to struggle with this largely notational issue.

3. Both of these observations come in handy much later, when one is studying $R$-modules (which act "sort of" like vector spaces, but with ring elements as coordinates, which introduces some peculiarities). Here the ring is $\Bbb Z_{10}$, and the fact that $\Bbb Z_{10}$ is a quotient ring of $\Bbb Z$ allows us to view it as a $\Bbb Z$-module with some interesting $\Bbb Z$-linear dependencies.

4. $\Bbb Z$-modules are just abelian groups, and this is why you are seeing this example at such an early stage.
You have just shown that the integer 10 is the smallest positive $n$ for which $nI$ annihilates $(a,b,c)$ with a = 7, b = 5, c = 4. A little reflection shows that it annihilates all of $\Bbb Z_{10}^3$. This should suggest to you that matrices over certain rings (even when the ring is commutative) don't always act as nicely as we want them to.

5. mathbalarka's observation that none of the entries of (7,5,4) are 0 (mod 10) is somewhat incomplete; consider the order of $p(x) = 2 + 4x + 8x^2$ in the same group....

#### ZaidAlyafey

##### Well-known member
MHB Math Helper
Jan 17, 2013
1,667

I think if we have $$\displaystyle f= \sum_{k=0}^n a_k x^k$$ then if we restrict that $\text{gcd}(a_1,a_2, \cdots , a_n)=1$ we have the order as $10$. If we take $f=(8,4,2)$ then we expect that $|f|=5$ since for the simultaneous equations

$$\displaystyle 8n \equiv 0 \ (\text{mod} \ 10)$$
$$\displaystyle 4n \equiv 0 \ (\text{mod} \ 10)$$
$$\displaystyle 2n \equiv 0 \ (\text{mod} \ 10)$$

since $\text{gcd}(2,4,8)=2$ it suffices to put $n=5$. To check we have

$$\displaystyle 2f= (8,4,2)+(8,4,2)=(6,8,4)$$
$$\displaystyle 3f= (6,8,4)+(8,4,2)=(4,2,6)$$
$$\displaystyle 4f= (4,2,6)+(8,4,2)=(2,6,8)$$
$$\displaystyle 5f= (2,6,8)+(8,4,2)=(0,0,0)$$

which is what we expect, so generally we have the order of $f$ under $\mathbb{Z}_n$ as $|f|=n/ \text{gcd}(a_1,a_2,\cdots ,a_n)$, right?

#### Deveno

##### Well-known member
MHB Math Scholar
Feb 15, 2012
1,967

Right, you want to ensure the gcd and 10 have no common factors. This is why finite fields (of characteristic $p$) are so nice to work over (and produce vector spaces, instead of mere modules), because we no longer have to worry about that. Prime factorization (in its basic integer form, and in its extensions to more complicated systems) is just *that* useful.

Your last statement is "almost correct". To see why I say "almost" consider the order of: $4 + 4x + 8x^2$. It is clear that gcd(4,4,8) = 4, but the order is not 10/4 = 5/2, it is 5.
In other words the order is actually: $\dfrac{n}{\text{gcd}(n,\text{gcd}(a_1,a_2,\dots,a_k))}$ for a polynomial in $\Bbb Z_n[x]$ (under addition) of degree $k$ (here it is implicitly assumed the integers $a_1,\dots,a_k$ are already reduced mod $n$). Given that 10 (our example) only has 2 divisors d with 1 < d < 10, it is unlikely that a polynomial chosen at random has order < 10, but this becomes much more of a concern if working over a highly divisible number like 60. See if you can guess the order of $30x + 12x^2$ in $\Bbb Z_{60}[x]$ under addition, and then prove it (your proof should consist of just three calculations). #### Klaas van Aarsen ##### MHB Seeker Staff member Mar 5, 2012 8,866 I think if we have $$\displaystyle f= \sum_{k=0}^n a_k x^k$$ then if we restrict that $\text{gcd}(a_1,a_2, \cdots , a_n)=1$ we have the order as $10$. How about the order of $5x+2$? #### Deveno ##### Well-known member MHB Math Scholar Feb 15, 2012 1,967 How about the order of $5x+2$? Not a very good example, we have: $2(5x + 2) = 4 \neq 0$ $5(5x + 2) = 5x \neq 0$ which shows the order is indeed 10. #### ZaidAlyafey ##### Well-known member MHB Math Helper Jan 17, 2013 1,667 Your last statement is "almost correct". To see why I say "almost" consider the order of: $4 + 4x + 8x^2$. It is clear that gcd(4,4,8) = 4, but the order is not 10/4 = 5/2, it is 5. In other words the order is actually: $\dfrac{n}{\text{gcd}(n,\text{gcd}(a_1,a_2,\dots,a_k))}$ Yup, I was concerned about that ! See if you can guess the order of $30x + 12x^2$ in $\Bbb Z_{60}[x]$ under addition, and then prove it (your proof should consist of just three calculations). The order of $30$ is $2$ and the order of $12$ is $5$ and they will coincide at the first even integer namely $10$. I don't understand what you mean by three calculations ? #### Klaas van Aarsen ##### MHB Seeker Staff member Mar 5, 2012 8,866 I like $\text{lcm}(|a_1|, ..., |a_k|)$. 
#### Deveno ##### Well-known member MHB Math Scholar Feb 15, 2012 1,967 Yup, I was concerned about that ! The order of $30$ is $2$ and the order of $12$ is $5$ and they will coincide at the first even integer namely $10$. I don't understand what you mean by three calculations ? Yes, it is clear that the order is at most 10. What two other possibilities need to be ruled out?
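Since the thread settles on a closed form, it may be worth checking numerically. The following Python sketch (helper names are mine) brute-forces the additive order and compares it against the formula $n/\gcd(n, \gcd(a_1,\dots,a_k))$ discussed above:

```python
from math import gcd
from functools import reduce

def order_formula(coeffs, n):
    """Additive order in Z_n[x] via the formula n / gcd(n, gcd of coefficients)."""
    return n // gcd(n, reduce(gcd, coeffs))

def order_brute(coeffs, n):
    """Add the coefficient tuple to itself until every entry is 0 mod n."""
    s, k = tuple(coeffs), 1
    while any(c % n for c in s):
        s = tuple(a + b for a, b in zip(s, coeffs))
        k += 1
    return k

# Examples from the thread: (7,5,4) and (8,4,2) and (4,4,8) in Z_10,
# plus 30x + 12x^2 in Z_60.
for coeffs, n in [((7, 5, 4), 10), ((8, 4, 2), 10), ((4, 4, 8), 10), ((30, 12), 60)]:
    assert order_formula(coeffs, n) == order_brute(coeffs, n)
```

For $30x + 12x^2$ in $\Bbb Z_{60}[x]$ both approaches give 10, agreeing with $\text{lcm}(2,5)$ (presumably the "two other possibilities" to rule out are the proper divisors 2 and 5 of 10).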
http://mathhelpforum.com/statistics/150459-standard-deviation-print.html
Standard Deviation

• July 9th 2010, 04:40 AM

rooney

Standard Deviation

1) The average temperature in an american state was recorded as 98.7 F with a standard deviation of 0.7 F. If this is a normal model, what percentage of the towns in this state were below 97 F?

Is this question simply a case of taking the Z-Score as 1.7 and reading off the table, which I believe gives the answer of 0.446?

2) Species A of an animal weighs an average of 50 pounds and has a standard deviation of 3.3. Species B's average is 57.5 with a standard deviation of 1.7. A zoo has a species A weighing 45 pounds and a B at 54 pounds. Explaining fully, determine which is more underweight.

• July 9th 2010, 04:55 AM

mr fantastic

Quote:

Originally Posted by rooney
1) The average temperature in an american state was recorded as 98.7 F with a standard deviation of 0.7 F. If this is a normal model, what percentage of the towns in this state were below 97 F? Is this question simply a case of taking the Z-Score as 1.7 and reading off the table, which I believe gives the answer of 0.446?
2) Species A of an animal weighs an average of 50 pounds and has a standard deviation of 3.3. Species B's average is 57.5 with a standard deviation of 1.7. A zoo has a species A weighing 45 pounds and a B at 54 pounds. Explaining fully, determine which is more underweight.

$\displaystyle{Z = \frac{X - \mu}{\sigma}}$. You don't seem to have used this formula correctly in (1).

Compare the z-score of each species.

• July 9th 2010, 08:11 AM

CaptainBlack

Quote:

Originally Posted by rooney
1) The average temperature in an american state was recorded as 98.7 F with a standard deviation of 0.7 F. If this is a normal model, what percentage of the towns in this state were below 97 F?

You gota lurve elementary statistics questions where the question setter does not really understand statistics well enough to set questions that make any sense. Suppose the correlation scale for temperature is comparable to the size of a state?
I'm not insisting that the beginning student of statistics understand such subtleties, but I do insist that text book authors do.

CB

• July 12th 2010, 07:55 AM

rooney

Quote:

Originally Posted by mr fantastic
$\displaystyle{Z = \frac{X - \mu}{\sigma}}$. You don't seem to have used this formula correctly in (1).
Compare the z-score of each species.

Made a bit of a schoolboy error....

So for part 1 the equation I require is:

(97 - 98.7) / 0.7 = -2.43

Then read down the Normal Model table, which gives a percentile of about 0.0075? Does that sound roughly correct?!

And does anyone have any idea on the second question??

• July 12th 2010, 06:54 PM

mr fantastic

Quote:

Originally Posted by rooney
Made a bit of a schoolboy error....
So for part 1 the equation I require is: (97 - 98.7) / 0.7 = -2.43
Then read down the Normal Model table, which gives a percentile of about 0.0075? Does that sound roughly correct?!
And does anyone have any idea on the second question??

• July 12th 2010, 07:13 PM

pickslides

Quote:

Originally Posted by rooney
Then read down the Normal Model table, which gives a percentile of about 0.0075? Does that sound roughly correct?!

Sounds good

Quote:

Originally Posted by rooney
And does anyone have any idea on the second question??

First find the z-scores and hence the probabilities for each of these scenarios.
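For what it's worth, both parts can be checked numerically. This is an illustrative Python sketch (the normal-CDF helper `phi` is my own, built from the standard error function):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Question 1: fraction of towns below 97 F.
z1 = (97 - 98.7) / 0.7        # about -2.43 (note the sign)
p1 = phi(z1)                  # close to the table reading of ~0.0075

# Question 2: the more negative z-score is the more underweight animal.
z_a = (45 - 50) / 3.3         # species A: about -1.52
z_b = (54 - 57.5) / 1.7       # species B: about -2.06
```

Species B's z-score is further below zero, so B is the more underweight animal, and the tail probability for part 1 agrees with the table value of roughly 0.0075.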
http://mathhelpforum.com/calculus/197479-completely-expand-ln-function-then-find-derivative-print.html
# completely expand the ln function, then find the derivative

• April 17th 2012, 10:45 PM

rabert1

completely expand the ln function, then find the derivative

$\ln\left(\dfrac{x}{e^x \ln(x)}\right)$

• April 17th 2012, 10:59 PM

biffboy

Re: completely expand the ln function, then find the derivative

Expression $= \ln x - \ln(e^x \ln x) = \ln x - (\ln e^x + \ln(\ln x)) = \ln x - x - \ln(\ln x)$

Derivative $= \dfrac{1}{x} - 1 - \dfrac{1}{x \ln x}$

(The last term came from $\frac{1}{\ln x}$ times $\frac{1}{x}$ by the chain rule.)

• April 18th 2012, 06:41 AM

HallsofIvy

Re: completely expand the ln function, then find the derivative

Quote:

Originally Posted by rabert1
$\ln\left(\dfrac{x}{e^x \ln(x)}\right)$

Are you saying that you do not know the "laws of logarithms"? It seems strange that you would be taking a course that asks you to find derivatives of logarithm functions if you have never taken a course that dealt with logarithms.
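The expansion and derivative given above can be verified numerically at a sample point; this is my own quick Python check (x = 2 chosen arbitrarily):

```python
import math

# Check the expansion  ln(x/(e^x ln x)) = ln x - x - ln(ln x)
# and the derivative   1/x - 1 - 1/(x ln x)  at the sample point x = 2.
f = lambda x: math.log(x / (math.exp(x) * math.log(x)))
g = lambda x: math.log(x) - x - math.log(math.log(x))
x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)      # central-difference derivative
formula = 1 / x - 1 - 1 / (x * math.log(x))
print(abs(f(x) - g(x)) < 1e-9, abs(numeric - formula) < 1e-4)
```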
http://aas.org/archives/BAAS/v32n3/dps2000/407.htm
DPS Pasadena Meeting 2000, 23-27 October 2000
Session 60. Dynamics
Oral, Chairs: J. Lissauer, A. Dobrovolskis, Friday, 2000/10/27, 3:20-4:20pm, Little Theater (C107)

## [60.04] Tidal Evolution by Elongated Primaries

T.A. Hurford, R. Greenberg (LPL, Univ. of Arizona)

Classical tidal theory addresses the effect a tidal potential has on a spherical body, and thus has been applicable to most planetary bodies for which tidal amplitudes and torques are of interest. However, tides in highly elongated, irregular bodies are also of interest, especially as some evidently have satellites. The Ida/Dactyl system is one example. It is not obvious that general results from classical tidal theory apply to these elongated bodies. For example, the age of asteroid 243 Ida's satellite Dactyl is < 100 Myr according to the conventional formula for the rate of tidal evolution outward from Ida [1], contrary to estimates based on the collisional lifetime of the asteroid itself (~1 byr) and the likelihood that Dactyl formed at the same time [2]. We investigate whether this discrepancy may be due to the conventional formula for tidal evolution being based on a spherical primary, whereas Ida is actually highly elongated. A model for Ida consisting of three spheres connected by damped springs is used to estimate what effects the elongation may have on the tidal dissipation. In fact, our model gives torques and energy dissipation similar to those in an equivalent sphere, indicating that the spherical model gives reasonable results even when applied to an elongated or irregular body. Some subtleties of tidal theory for spherical bodies will be discussed as well.

[1] Weidenschilling, S.J., P. Paolicchi, and V. Zappala, "Do Asteroids have Satellites?", in Asteroids II, edited by Binzel, Gehrels, and Matthews, pp. 643-658, 1989.
[2] Durda, D.D., "The Formation of Asteroid Satellites in Catastrophic Collisions", Icarus 120, 212-219, 1996.